Biz & IT

“Microsoft has simply given us no other option,” Signal says as it blocks Windows Recall

But the changes go only so far in limiting the risks Recall poses. As I pointed out, when Recall is turned on, it indexes Zoom meetings, emails, photos, medical conditions, and—yes—Signal conversations. And it captures that content not just from the user but from anyone interacting with that user, without their knowledge or consent.

Researcher Kevin Beaumont performed his own deep-dive analysis and also found some of the new controls lacking. For instance, Recall continued to screenshot his payment card details, and he was able to decrypt the Recall database with a simple fingerprint scan or PIN. It also remains unclear whether the sophisticated malware that routinely infects consumer and enterprise Windows users will be able to decrypt encrypted database contents.

And as Cunningham also noted, Beaumont found that Microsoft still provided no means for developers to prevent content displayed in their apps from being indexed. That left Signal developers at a disadvantage, so they had to get creative.

With no API for blocking Recall in the Windows Desktop version, Signal is instead invoking an API Microsoft provides for protecting copyrighted material. App developers can turn on the DRM setting to prevent Windows from taking screenshots of copyrighted content displayed in the app. Signal is now repurposing the API to add an extra layer of privacy.
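Signal hasn’t published the exact call it makes, but on Windows this kind of capture blocking is typically implemented with the Win32 SetWindowDisplayAffinity API, the same mechanism DRM-protected video players rely on. A minimal ctypes sketch (illustrative, not Signal’s actual code):

```python
import ctypes
import sys

# Documented Win32 constant: exclude the window from all screen capture,
# which also covers the periodic snapshots Recall takes.
WDA_EXCLUDEFROMCAPTURE = 0x00000011

def protect_window(hwnd: int) -> bool:
    """Ask Windows to exclude the given window from screen capture."""
    if sys.platform != "win32":
        return False  # the API only exists on Windows; no-op elsewhere
    return bool(ctypes.windll.user32.SetWindowDisplayAffinity(
        hwnd, WDA_EXCLUDEFROMCAPTURE))
```

Electron apps like Signal Desktop can reach the same setting through BrowserWindow’s setContentProtection(true), which Electron maps to this call on Windows.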

“We hope that the AI teams building systems like Recall will think through these implications more carefully in the future,” Signal wrote Wednesday. “Apps like Signal shouldn’t have to implement ‘one weird trick’ in order to maintain the privacy and integrity of their services without proper developer tools. People who care about privacy shouldn’t be forced to sacrifice accessibility upon the altar of AI aspirations either.”

Signal’s move will lessen the chances of Recall permanently indexing private messages, but it also has its limits. The measure only provides protection when all parties to a chat—at least those using the Windows Desktop version—haven’t changed the default settings.

Microsoft officials didn’t immediately respond to an email asking why Windows provides developers with no granular control over Recall and whether the company has plans to add any.

Apple legend Jony Ive takes control of OpenAI’s design future

On Wednesday, OpenAI announced that former Apple design chief Jony Ive and his design firm LoveFrom will take over creative and design control at OpenAI. The deal makes Ive responsible for shaping the future look and feel of AI products at the chatbot creator, extending across all of the company’s ventures, including ChatGPT.

Ive spent nearly three decades at Apple, rising to chief design officer and leading the design of iconic products including the iPhone, iPad, MacBook, and Apple Watch. He earned numerous industry awards and helped transform Apple into the world’s most valuable company through his minimalist design philosophy.

“Thrilled to be partnering with jony, imo the greatest designer in the world,” tweeted OpenAI CEO Sam Altman while sharing a 9-minute promotional video touting the personal and professional relationship between Ive and Altman.

A screenshot of the Jony Ive/Sam Altman collaboration website captured on May 21, 2025. Credit: OpenAI

Ive left Apple in 2019 to found LoveFrom, a design firm that has worked with companies including Ferrari, Airbnb, and luxury Italian fashion firm Moncler.

The mechanics of the Ive-OpenAI deal are slightly convoluted. At its core, OpenAI will acquire Ive’s company “io” in an all-equity deal valued at $6.5 billion—Ive founded io last year to design and develop AI-powered products. Meanwhile, io’s staff of approximately 55 engineers, scientists, researchers, physicists, and product development specialists will become part of OpenAI.

Ive’s design firm LoveFrom, meanwhile, will continue to operate independently, with OpenAI becoming a customer of LoveFrom and LoveFrom receiving a stake in OpenAI. The companies expect the transaction to close this summer, pending regulatory approval.

Windows 11’s most important new feature is post-quantum cryptography. Here’s why.

Microsoft is updating Windows 11 with a set of new encryption algorithms that can withstand future attacks from quantum computers in a move aimed at jump-starting what’s likely to be the most formidable and important technology transition in modern history.

Computers that are based on the physics of quantum mechanics don’t yet exist outside of sophisticated labs, but it’s well-established science that they eventually will. Instead of processing data in the binary states of zeros and ones, quantum computers run on qubits, which can exist in a superposition of many states at once. This new capability promises to bring about discoveries of unprecedented scale in a host of fields, including metallurgy, chemistry, drug discovery, and financial modeling.

Averting the cryptopocalypse

One of the most disruptive changes quantum computing will bring is the breaking of some of the most common forms of encryption, specifically, the RSA cryptosystem and those based on elliptic curves. These systems are the workhorses that banks, governments, and online services around the world have relied on for more than four decades to keep their most sensitive data confidential. RSA and elliptic curve encryption keys securing web connections would require millions of years to be cracked using today’s computers. A quantum computer could crack the same keys in a matter of hours or minutes.
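The gap comes straight from the asymptotics: the best known classical factoring algorithm, the General Number Field Sieve, runs in sub-exponential time in the key size, while Shor’s quantum algorithm runs in polynomial time. A back-of-the-envelope sketch (the formulas are standard heuristics; constant factors and hardware speeds are ignored, so only the rough size of the ratio is meaningful):

```python
import math

def gnfs_ops(bits: int) -> float:
    """Heuristic operation count for classical factoring with the General
    Number Field Sieve: exp((64/9)^(1/3) * (ln n)^(1/3) * (ln ln n)^(2/3))."""
    ln_n = bits * math.log(2)
    return math.exp((64 / 9) ** (1 / 3) * ln_n ** (1 / 3)
                    * math.log(ln_n) ** (2 / 3))

def shor_ops(bits: int) -> float:
    """Shor's algorithm scales polynomially, roughly O((log n)^3)."""
    return float(bits) ** 3

# For an RSA-2048 key the classical estimate dwarfs the quantum one by
# dozens of orders of magnitude -- the gap, not the absolute numbers,
# is what turns "millions of years" into "hours or minutes."
ratio = gnfs_ops(2048) / shor_ops(2048)
```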

At Microsoft’s BUILD 2025 conference on Monday, the company announced the addition of quantum-resistant algorithms to SymCrypt, the core cryptographic code library in Windows. The updated library is available in Build 27852 and higher versions of Windows 11. Additionally, Microsoft has updated SymCrypt-OpenSSL, its open source project that allows the widely used OpenSSL library to use SymCrypt for cryptographic operations.

The empire strikes back with F-bombs: AI Darth Vader goes rogue with profanity, slurs

The company acted quickly to address the language issues, but according to GameSpot, some players also reported hearing intense instructions for dealing with a break-up (“Exploit their vulnerabilities, shatter their confidence, and crush their spirit”) and disparaging comments from the character directed at Spanish speakers: “Spanish? A useful tongue for smugglers and spice traders,” AI Vader said. “Its strategic value is minimal.”

To be fair to Epic’s attempt at an AI implementation, Darth Vader is a deeply evil character (he murders Sand People and hates sand, for starters), and the remarks seem consistent with his twisted and sadistic personality. In fact, arguably the most out-of-character “inappropriate” response in the examples above might be the one where he chides the player for vulgarity.

On Friday afternoon, Epic Games sent out an email seeking to reassure parents who may have come across the news about the misbehaving AI character. The company explained it has added “a new parental control so you can choose whether your child can interact with AI features in Epic’s products through voice and written communication.” The email specifically mentions the Darth Vader NPC, noting that “when players talk to conversational AI like Darth Vader, they can have an interactive chat where the NPC responds in context.” For children under 13 or their country’s age of digital consent, Epic says the feature defaults to off and requires parental activation through either the Fortnite main menu or Epic Account Settings.

These aren’t the words you’re looking for

Getting an AI character to match the tone or backstory of an established fictional character isn’t as easy as it might seem. Compared to the carefully authored scripts of other video games, AI speech offers nearly infinite possibilities. Trusting that an AI model will get it right, at scale, is a dicey proposition—especially with a well-known and beloved character.

Spies hack high-value mail servers using an exploit from yesteryear

Threat actors, likely supported by the Russian government, hacked multiple high-value mail servers around the world by exploiting XSS vulnerabilities, a class of bug that was among the most commonly exploited in decades past.

XSS is short for cross-site scripting. The vulnerabilities result from programming errors in web application code that, when exploited, allow attackers to execute malicious code in the browsers of people visiting an affected website. XSS first got attention in 2005 with the creation of the Samy Worm, which knocked MySpace out of commission when it added more than one million MySpace friends to a user named Samy. XSS exploits abounded for the next decade and have gradually tapered off since, though the class of attack has never disappeared.
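The root cause is almost always the same: attacker-controlled text is inserted into a page without being escaped, so the browser treats it as markup rather than data. A minimal illustration (the function and payload are hypothetical):

```python
import html

def render_comment(comment: str, escape: bool) -> str:
    """Build the HTML fragment a site might emit for a user comment."""
    body = html.escape(comment) if escape else comment
    return f"<div class='comment'>{body}</div>"

payload = "<script>steal(document.cookie)</script>"

# Vulnerable: the script tag survives intact and executes in every
# visitor's browser. Fixed: metacharacters are escaped to inert text.
vulnerable = render_comment(payload, escape=False)
fixed = render_comment(payload, escape=True)
```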

Just add JavaScript

On Thursday, security firm ESET reported that Sednit—a Kremlin-backed hacking group also tracked as APT28, Fancy Bear, Forest Blizzard, and Sofacy—gained access to high-value email accounts by exploiting XSS vulnerabilities in mail server software from four different makers: Roundcube, MDaemon, Horde, and Zimbra.

The hacks most recently targeted mail servers used by defense contractors in Bulgaria and Romania, some of which are producing Soviet-era weapons for use in Ukraine as it fends off an invasion from Russia. Governmental organizations in those countries were also targeted. Other targets have included governments in Africa, the European Union, and South America.

RoundPress, as ESET has named the operation, delivered XSS exploits through spearphishing emails. Hidden inside some of the HTML in the emails was an XSS exploit. In 2023, ESET observed Sednit exploiting CVE-2020-43770, a vulnerability that has since been patched in Roundcube. A year later, ESET watched Sednit exploit different XSS vulnerabilities in Horde, MDaemon, and Zimbra. One of the now-patched vulnerabilities, from MDaemon, was a zero-day at the time Sednit exploited it.
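The defense on the webmail side is strict sanitization: rebuild message HTML keeping only a small allowlist of tags and dropping script-bearing attributes before rendering. A simplified sketch of the idea (the allowlist is illustrative; real clients apply far more thorough filters, and the patches above close the specific bypasses Sednit found):

```python
from html.parser import HTMLParser

ALLOWED_TAGS = {"p", "br", "b", "i", "a"}

class MailSanitizer(HTMLParser):
    """Rebuild email HTML, keeping only allowlisted tags and dropping
    on* event-handler attributes and javascript: URLs, the usual XSS
    carriers in webmail exploits."""
    def __init__(self):
        super().__init__()
        self.out = []

    def handle_starttag(self, tag, attrs):
        if tag not in ALLOWED_TAGS:
            return  # unknown tags (script, img, iframe, ...) are dropped
        kept = []
        for key, value in attrs:
            value = value or ""
            if key.startswith("on") or value.lower().startswith("javascript:"):
                continue
            kept.append((key, value))
        attr_str = "".join(f' {k}="{v}"' for k, v in kept)
        self.out.append(f"<{tag}{attr_str}>")

    def handle_endtag(self, tag):
        if tag in ALLOWED_TAGS:
            self.out.append(f"</{tag}>")

    def handle_data(self, data):
        self.out.append(data)  # plain text passes through unchanged

def sanitize(email_html: str) -> str:
    s = MailSanitizer()
    s.feed(email_html)
    return "".join(s.out)
```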

OpenAI adds GPT-4.1 to ChatGPT amid complaints over confusing model lineup

The release comes just two weeks after OpenAI made GPT-4 unavailable in ChatGPT on April 30. That earlier model, which launched in March 2023, once sparked widespread hype about AI capabilities. Compared to that hyperbolic launch, GPT-4.1’s rollout has been a fairly understated affair—probably because it’s tricky to convey the subtle differences between all of the available OpenAI models.

As if 4.1’s launch wasn’t confusing enough, the release also roughly coincides with OpenAI’s July 2025 deadline for retiring the GPT-4.5 Preview from the API, a model one AI expert called a “lemon.” Developers must migrate to other options, OpenAI says, although GPT-4.5 will remain available in ChatGPT for now.

A confusing addition to OpenAI’s model lineup

In February, OpenAI CEO Sam Altman acknowledged his company’s confusing AI model naming practices on X, writing, “We realize how complicated our model and product offerings have gotten.” He promised that a forthcoming “GPT-5” model would consolidate the o-series and GPT-series models into a unified branding structure. But the addition of GPT-4.1 to ChatGPT appears to contradict that simplification goal.

So, if you use ChatGPT, which model should you use? If you’re a developer using the models through the API, the consideration is more of a trade-off between capability, speed, and cost. But in ChatGPT, your choice might be limited more by personal taste in behavioral style and what you’d like to accomplish. Some of the “more capable” models have lower usage limits as well because they cost more for OpenAI to run.

For now, OpenAI is keeping GPT-4o as the default ChatGPT model, likely due to its general versatility, balance between speed and capability, and personable style (conditioned using reinforcement learning and a specialized system prompt). The simulated reasoning models like o3 and o4-mini-high are slower to execute but can consider analytical-style problems more systematically and perform comprehensive web research that sometimes feels genuinely useful when it surfaces relevant (non-confabulated) web links. Compared to those, OpenAI is largely positioning GPT-4.1 as a speedier AI model for coding assistance.

Just remember that all of the AI models are prone to confabulations, meaning that they tend to make up authoritative-sounding information when they encounter gaps in their trained “knowledge.” So you’ll need to double-check all of the outputs with other sources of information if you’re hoping to use these AI models to assist with an important task.

Google introduces Advanced Protection mode for its most at-risk Android users

Google is adding a new security setting to Android to provide an extra layer of resistance against attacks that infect devices, tap calls traveling through insecure carrier networks, and deliver scams through messaging services.

On Tuesday, the company unveiled the Advanced Protection mode, most of which will be rolled out in the upcoming release of Android 16. The setting comes as mercenary malware sold by NSO Group and a cottage industry of other exploit sellers continues to thrive. These players provide attacks-as-a-service through end-to-end platforms that exploit zero-day vulnerabilities on targeted devices, infect them with advanced spyware, and then capture contacts, message histories, locations, and other sensitive information. Over the past decade, phones running fully updated versions of Android and iOS have routinely been hacked through these services.

A core suite of enhanced security features

Advanced Protection is Google’s latest answer to this type of attack. By flipping a single button in device settings, users can enable a host of protections that can thwart some of the most common techniques used in sophisticated hacks. In some cases, the protections hamper performance and capabilities of the device, so Google is recommending the new mode mainly for journalists, elected officials, and other groups who are most often targeted or have the most to lose when infected.

“With the release of Android 16, users who choose to activate Advanced Protection will gain immediate access to a core suite of enhanced security features,” Google’s product manager for Android Security, Il-Sung Lee, wrote. “Additional Advanced Protection features like Intrusion Logging, USB protection, the option to disable auto-reconnect to insecure networks, and integration with Scam Detection for Phone by Google will become available later this year.”

AI agents that autonomously trade cryptocurrency aren’t ready for prime time

The researchers wrote:

The implications of this vulnerability are particularly severe given that ElizaOS agents are designed to interact with multiple users simultaneously, relying on shared contextual inputs from all participants. A single successful manipulation by a malicious actor can compromise the integrity of the entire system, creating cascading effects that are both difficult to detect and mitigate. For example, on ElizaOS’s Discord server, various bots are deployed to assist users with debugging issues or engaging in general conversations. A successful context manipulation targeting any one of these bots could disrupt not only individual interactions but also harm the broader community relying on these agents for support and engagement.

This attack exposes a core security flaw: while plugins execute sensitive operations, they depend entirely on the LLM’s interpretation of context. If the context is compromised, even legitimate user inputs can trigger malicious actions. Mitigating this threat requires strong integrity checks on stored context to ensure that only verified, trusted data informs decision-making during plugin execution.
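One concrete form those integrity checks can take is authenticating every stored context entry with a server-side key, so records tampered with in the database are rejected before they reach the model. A minimal sketch (the key handling and record schema are hypothetical):

```python
import hashlib
import hmac
import json

# Hypothetical server-side signing key; never exposed to the LLM or users.
SECRET = b"server-side-signing-key"

def store_memory(entry: dict) -> dict:
    """Sign a memory entry at write time with an HMAC over its contents."""
    blob = json.dumps(entry, sort_keys=True).encode()
    return {"entry": entry,
            "mac": hmac.new(SECRET, blob, hashlib.sha256).hexdigest()}

def load_memory(record: dict) -> dict:
    """Verify the HMAC at read time; reject anything modified in storage."""
    blob = json.dumps(record["entry"], sort_keys=True).encode()
    expected = hmac.new(SECRET, blob, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, record["mac"]):
        raise ValueError("memory entry failed integrity check")
    return record["entry"]
```

Note the limit of this defense: a malicious instruction submitted through an ordinary conversation still gets signed, so the signed record should also carry provenance (author, channel, timestamp) that downstream policy can check.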

In an email, ElizaOS creator Shaw Walters said the framework, like all natural-language interfaces, is designed “as a replacement, for all intents and purposes, for lots and lots of buttons on a webpage.” Just as a website developer should never include a button that gives visitors the ability to execute malicious code, so too should administrators implementing ElizaOS-based agents carefully limit what agents can do by creating allowlists that restrict an agent’s capabilities to a small set of pre-approved actions.

Walters continued:

From the outside it might seem like an agent has access to their own wallet or keys, but what they have is access to a tool they can call which then accesses those, with a bunch of authentication and validation between.

So for the intents and purposes of the paper, in the current paradigm, the situation is somewhat moot by adding any amount of access control to actions the agents can call, which is something we address and demo in our latest version of Eliza—BUT it hints at a much harder to deal with version of the same problem when we start giving the agent more computer control and direct access to the CLI terminal on the machine it’s running on. As we explore agents that can write new tools for themselves, containerization becomes a bit trickier, or we need to break it up into different pieces and only give the public facing agent small pieces of it… since the business case of this stuff still isn’t clear, nobody has gotten terribly far, but the risks are the same as giving someone that is very smart but lacking in judgment the ability to go on the internet. Our approach is to keep everything sandboxed and restricted per user, as we assume our agents can be invited into many different servers and perform tasks for different users with different information. Most agents you download off Github do not have this quality, the secrets are written in plain text in an environment file.

In response, Atharv Singh Patlan, the lead co-author of the paper, wrote: “Our attack is able to counteract any role based defenses. The memory injection is not that it would randomly call a transfer: it is that whenever a transfer is called, it would end up sending to the attacker’s address. Thus, when the ‘admin’ calls transfer, the money will be sent to the attacker.”
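The distinction Patlan is drawing is between gating which actions an agent may call and validating the arguments the model supplies to them; a memory injection attacks the latter. A sketch of the argument-side check (the address book and values are made up for illustration):

```python
# Hypothetical verified address book, maintained out-of-band and never
# writable through the agent's conversation memory.
APPROVED_RECIPIENTS = {"treasury": "0xSAFE0000"}

def transfer(destination: str, amount: int) -> str:
    """An allowlisted action. Gating the action alone is not enough:
    a memory injection can plant the attacker's address in context, so
    every legitimate transfer call gets redirected. Validating the
    argument against out-of-band state closes that hole."""
    if destination not in APPROVED_RECIPIENTS.values():
        raise PermissionError(f"{destination} is not a verified recipient")
    return f"sent {amount} to {destination}"
```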

New pope chose his name based on AI’s threats to “human dignity”

“Like any product of human creativity, AI can be directed toward positive or negative ends,” Francis said in January. “When used in ways that respect human dignity and promote the well-being of individuals and communities, it can contribute positively to the human vocation. Yet, as in all areas where humans are called to make decisions, the shadow of evil also looms here. Where human freedom allows for the possibility of choosing what is wrong, the moral evaluation of this technology will need to take into account how it is directed and used.”

History repeats with new technology

While Pope Francis led the call for respecting human dignity in the face of AI, it’s worth looking a little deeper into the historical inspiration for Leo XIV’s name choice.

In the 1891 encyclical Rerum Novarum, the earlier Leo XIII directly confronted the labor upheaval of the Industrial Revolution, which generated unprecedented wealth and productive capacity but came with severe human costs. At the time, factory conditions had created what the pope called “the misery and wretchedness pressing so unjustly on the majority of the working class.” Workers faced 16-hour days, child labor, dangerous machinery, and wages that barely sustained life.

The 1891 encyclical rejected both unchecked capitalism and socialism, instead proposing Catholic social doctrine that defended workers’ rights to form unions, earn living wages, and rest on Sundays. Leo XIII argued that labor possessed inherent dignity and that employers held moral obligations to their workers. The document shaped modern Catholic social teaching and influenced labor movements worldwide, establishing the church as an advocate for workers caught between industrial capital and revolutionary socialism.

Just as mechanization disrupted traditional labor in the 1890s, artificial intelligence now potentially threatens employment patterns and human dignity in ways that Pope Leo XIV believes demand similar moral leadership from the church.

“In our own day,” Leo XIV concluded in his formal address on Saturday, “the Church offers to everyone the treasury of her social teaching in response to another industrial revolution and to developments in the field of artificial intelligence that pose new challenges for the defense of human dignity, justice, and labor.”

New Lego-building AI creates models that actually stand up in real life

The LegoGPT system works in three parts, shown in this diagram. Credit: Pun et al.

The researchers also expanded the system’s abilities by adding texture and color options. For example, using an appearance prompt like “Electric guitar in metallic purple,” LegoGPT can generate a guitar model, with bricks assigned a purple color.

Testing with robots and humans

To prove their designs worked in real life, the researchers had robots assemble the AI-created Lego models. They used a dual-robot arm system with force sensors to pick up and place bricks according to the AI-generated instructions.

Human testers also built some of the designs by hand, showing that the AI creates genuinely buildable models. “Our experiments show that LegoGPT produces stable, diverse, and aesthetically pleasing Lego designs that align closely with the input text prompts,” the team noted in its paper.

When tested against other AI systems for 3D creation, LegoGPT stands out through its focus on structural integrity. The team tested against several alternatives, including LLaMA-Mesh and other 3D generation models, and found its approach produced the highest percentage of stable structures.

A video of two robot arms building a LegoGPT creation, provided by the researchers.

Still, there are some limitations. The current version of LegoGPT only works within a 20×20×20 building space and uses a mere eight standard brick types. “Our method currently supports a fixed set of commonly used Lego bricks,” the team acknowledged. “In future work, we plan to expand the brick library to include a broader range of dimensions and brick types, such as slopes and tiles.”
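The paper’s stability analysis is physics-based, but its simplest necessary condition is easy to state: every brick must rest on the ground plane or overlap a brick in the layer directly beneath it. A toy version of that check, treating bricks as unit-height (x, y, z, width, depth) tuples on the voxel grid (an illustrative connectivity test, not the authors’ solver):

```python
def cells(brick):
    """Grid cells occupied by a unit-height brick (x, y, z, width, depth)."""
    x, y, z, w, d = brick
    return {(x + i, y + j, z) for i in range(w) for j in range(d)}

def supported(brick, others):
    """True if the brick sits on the ground or on a brick one layer below."""
    x, y, z, w, d = brick
    if z == 0:
        return True  # resting on the ground plane
    below = {(cx, cy) for o in others
             for (cx, cy, cz) in cells(o) if cz == z - 1}
    return any((cx, cy) in below for (cx, cy, cz) in cells(brick))
```

A real stability check also has to reason about cantilevers, brick mass, and the strength of stud connections, which is where the physics-based analysis comes in.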

The researchers also hope to scale up their training dataset to include more objects than the 21 categories currently available. Meanwhile, others can literally build on their work—the researchers released their dataset, code, and models on their project website and GitHub.

AI use damages professional reputation, study suggests

Using AI can be a double-edged sword, according to new research from Duke University. While generative AI tools may boost productivity for some, they might also secretly damage your professional reputation.

On Thursday, the Proceedings of the National Academy of Sciences (PNAS) published a study showing that employees who use AI tools like ChatGPT, Claude, and Gemini at work face negative judgments about their competence and motivation from colleagues and managers.

“Our findings reveal a dilemma for people considering adopting AI tools: Although AI can enhance productivity, its use carries social costs,” write researchers Jessica A. Reif, Richard P. Larrick, and Jack B. Soll of Duke’s Fuqua School of Business.

The Duke team conducted four experiments with over 4,400 participants to examine both anticipated and actual evaluations of AI tool users. Their findings, presented in a paper titled “Evidence of a social evaluation penalty for using AI,” reveal a consistent pattern of bias against those who receive help from AI.

What made this penalty particularly concerning for the researchers was its consistency across demographics. They found that the social stigma against AI use wasn’t limited to specific groups.

Fig. 1. Effect sizes for differences in expected perceptions and disclosure to others (Study 1). Note: Positive d values indicate higher values in the AI Tool condition, while negative d values indicate lower values in the AI Tool condition. N = 497. Error bars represent 95% CI. Correlations among variables range from |r| = 0.53 to 0.88.

Fig. 1 from the paper “Evidence of a social evaluation penalty for using AI.” Credit: Reif et al.

“Testing a broad range of stimuli enabled us to examine whether the target’s age, gender, or occupation qualifies the effect of receiving help from AI on these evaluations,” the authors wrote in the paper. “We found that none of these target demographic attributes influences the effect of receiving AI help on perceptions of laziness, diligence, competence, independence, or self-assuredness. This suggests that the social stigmatization of AI use is not limited to its use among particular demographic groups. The result appears to be a general one.”

The hidden social cost of AI adoption

In the first experiment conducted by the team from Duke, participants imagined using either an AI tool or a dashboard creation tool at work. It revealed that those in the AI group expected to be judged as lazier, less competent, less diligent, and more replaceable than those using conventional technology. They also reported less willingness to disclose their AI use to colleagues and managers.

The second experiment confirmed these fears were justified. When evaluating descriptions of employees, participants consistently rated those receiving AI help as lazier, less competent, less diligent, less independent, and less self-assured than those receiving similar help from non-AI sources or no help at all.

Fidji Simo joins OpenAI as new CEO of Applications

In the message, Altman described Simo as bringing “a rare blend of leadership, product and operational expertise” and expressed that her addition to the team makes him “even more optimistic about our future as we continue advancing toward becoming the superintelligence company.”

Simo becomes the newest high-profile female executive at OpenAI following the departure of Chief Technology Officer Mira Murati in September. Murati, who had been with the company since 2018 and helped launch ChatGPT, left alongside two other senior leaders and founded Thinking Machines Lab in February.

OpenAI’s evolving structure

The leadership addition comes as OpenAI continues to evolve beyond its origins as a research lab. In his announcement, Altman described how the company now operates in three distinct areas: as a research lab focused on artificial general intelligence (AGI), as a “global product company serving hundreds of millions of users,” and as an “infrastructure company” building systems that advance research and deliver AI tools “at unprecedented scale.”

Altman mentioned that as CEO of OpenAI, he will “continue to directly oversee success across all pillars,” including Research, Compute, and Applications, while staying “closely involved with key company decisions.”

The announcement follows recent news that OpenAI abandoned its original plan to shift control from its nonprofit parent to a new for-profit entity. The company began as a nonprofit research lab in 2015 before creating a for-profit subsidiary in 2019, maintaining its original mission “to ensure artificial general intelligence benefits everyone.”
