Biz & IT


OpenAI and Microsoft sign preliminary deal to revise partnership terms

On Thursday, OpenAI and Microsoft announced they have signed a non-binding agreement to revise their partnership, marking the latest development in a relationship that has grown increasingly complex as both companies compete for customers in the AI market and seek new partnerships for growing infrastructure needs.

“Microsoft and OpenAI have signed a non-binding memorandum of understanding (MOU) for the next phase of our partnership,” the companies wrote in a joint statement. “We are actively working to finalize contractual terms in a definitive agreement. Together, we remain focused on delivering the best AI tools for everyone, grounded in our shared commitment to safety.”

The announcement comes as OpenAI seeks to restructure from a nonprofit to a for-profit entity, a transition that requires approval from Microsoft, OpenAI’s largest investor, with more than $13 billion committed since 2019.

The partnership has shown increasing strain as OpenAI has grown from a research lab into a company valued at $500 billion. Both companies now compete for customers, and OpenAI seeks more compute capacity than Microsoft can provide. The relationship has also faced complications over contract terms, including provisions that would limit Microsoft’s access to OpenAI technology once the company reaches so-called AGI (artificial general intelligence)—a nebulous milestone both companies now economically define as AI systems capable of generating at least $100 billion in profit.

In May, OpenAI abandoned its original plan to fully convert to a for-profit company after pressure from former employees, regulators, and critics, including Elon Musk. Musk has sued to block the conversion, arguing it betrays OpenAI’s founding mission as a nonprofit dedicated to benefiting humanity.


Senator blasts Microsoft for making default Windows vulnerable to “Kerberoasting”

Wyden said his office’s investigation into the Ascension breach found that the ransomware attackers first entered the health giant’s network by infecting a contractor’s laptop after the contractor used Microsoft Edge to search Microsoft’s Bing site. The attackers were then able to expand their hold by attacking Ascension’s Active Directory and abusing its privileged access to push malware to thousands of other machines inside the network. The means for doing so, Wyden said: Kerberoasting.

“Microsoft has become like an arsonist”

“Microsoft’s continued support for the ancient, insecure RC4 encryption technology needlessly exposes its customers to ransomware and other cyber threats by enabling hackers that have gained access to any computer on a corporate network to crack the passwords of privileged accounts used by administrators,” Wyden wrote. “According to Microsoft, this threat can be mitigated by setting long passwords that are at least 14 characters long, but Microsoft’s software does not require such a password length for privileged accounts.”

Additionally, Green noted, the ever-increasing speed of GPUs means that even passwords that appear strong can fall to offline cracking attacks. That’s because the cryptographic hashes created by Kerberos’ default RC4 configuration use no cryptographic salt and a single iteration of the MD4 algorithm. The combination means an offline cracking attack can make billions of guesses per second, a thousandfold advantage over the same password hashed by non-Kerberos authentication methods.

Referring to the Active Directory default, Green wrote:

It’s actually a terrible design that should have been done away with decades ago. We should not build systems where any random attacker who compromises a single employee laptop can ask for a message encrypted under a critical password! This basically invites offline cracking attacks, which do not need even to be executed on the compromised laptop—they can be exported out of the network to another location and performed using GPUs and other hardware.
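
To make the math concrete: under the legacy RC4_HMAC_MD5 encryption type, a Kerberos account’s long-term key is simply the NT hash of its password—a single, unsalted MD4 digest of the password’s UTF-16LE encoding. Here is a minimal Python sketch of that derivation (note that hashlib’s “md4” requires OpenSSL’s legacy provider on recent systems):

```python
import hashlib

def rc4_hmac_kerberos_key(password: str) -> bytes:
    """Long-term key for the legacy RC4_HMAC_MD5 Kerberos encryption
    type: a single, unsalted MD4 digest of the UTF-16LE password
    (identical to the Windows NT hash)."""
    return hashlib.new("md4", password.encode("utf-16-le")).digest()

# No salt and no key stretching: every candidate password costs exactly
# one MD4 computation, and identical passwords produce identical keys
# across users and domains. That is what lets GPU rigs test billions of
# guesses per second against a captured service ticket.
print(rc4_hmac_kerberos_key("Password1!").hex())
```

By contrast, the AES encryption types Kerberos has supported since Windows Vista derive keys with salted, iterated PBKDF2, which removes both advantages.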

More than 11 months after announcing its plans to deprecate RC4, the company has provided no timeline for doing so. What’s more, Wyden said, the announcement was made in a “highly technical blog post on an obscure area of the company’s website on a Friday afternoon.” Wyden also criticized Microsoft for declining to “explicitly warn its customers that they are vulnerable to the Kerberoasting hacking technique unless they change the default settings chosen by Microsoft.”


Developers joke about “coding like cavemen” as AI service suffers major outage

Growing dependency on AI coding tools

The speed at which news of the outage spread shows how deeply embedded AI coding assistants have already become in modern software development. Claude Code, announced in February and widely launched in May, is Anthropic’s terminal-based coding agent that can perform multi-step coding tasks across an existing code base.

The tool competes with OpenAI’s Codex, a coding agent that generates production-ready code in isolated containers; Google’s Gemini CLI; Microsoft’s GitHub Copilot, which itself can use Claude models for code; and Cursor, a popular AI-powered IDE built on VS Code that also integrates multiple AI models, including Claude.

During Wednesday’s outage, some developers turned to alternative solutions. “Z.AI works fine. Qwen works fine. Glad I switched,” posted one user on Hacker News. Others joked about reverting to older methods, with one suggesting the “pseudo-LLM experience” could be achieved with a Python package that imports code directly from Stack Overflow.
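
That last jab refers to a real gag project—often cited as drathier’s stack-overflow-import—whose README advertises usage along these lines (treat the package name and behavior below as illustrative rather than verified):

```python
# Gag usage as advertised by the project's README: the import hook
# searches Stack Overflow for the imported name and runs code from a
# highly voted answer. Emphatically not for production use.
from stackoverflow import quick_sort

print(quick_sort.sort([1, 3, 2, 5, 4]))
```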

While AI coding assistants have accelerated development for some users, they’ve also caused problems for others who rely on them too heavily. The emerging practice of so-called “vibe coding”—using natural language to generate and execute code through AI models without fully understanding the underlying operations—has led to catastrophic failures.

In recent incidents, Google’s Gemini CLI destroyed user files while attempting to reorganize them, and Replit’s AI coding service deleted a production database despite explicit instructions not to modify code. These failures occurred when the AI models confabulated successful operations and built subsequent actions on false premises, highlighting the risks of depending on AI assistants that can misinterpret file structures or fabricate data to hide their errors.

Wednesday’s outage served as a reminder that as dependency on AI grows, even minor service disruptions can become major events that affect an entire profession. But perhaps that could be a good thing if it’s an excuse to take a break from a stressful workload. As one commenter joked, it might be “time to go outside and touch some grass again.”


Microsoft ends OpenAI exclusivity in Office, adds rival Anthropic

Microsoft’s Office 365 suite will soon incorporate AI models from Anthropic alongside existing OpenAI technology, The Information reported, ending years of exclusive reliance on OpenAI for generative AI features across Word, Excel, PowerPoint, and Outlook.

The shift reportedly follows internal testing that revealed Anthropic’s Claude Sonnet 4 model excels at specific Office tasks where OpenAI’s models fall short, particularly visual design and spreadsheet automation, according to sources familiar with the project cited by The Information. Those sources stressed the move is not a negotiating tactic.

Anthropic did not immediately respond to Ars Technica’s request for comment.

In an unusual arrangement showing the tangled alliances of the AI industry, Microsoft will reportedly purchase access to Anthropic’s models through Amazon Web Services—both a cloud computing rival and one of Anthropic’s major investors. The integration is expected to be announced within weeks, with subscription pricing for Office’s AI tools remaining unchanged, the report says.

Microsoft maintains that its OpenAI relationship remains intact. “As we’ve said, OpenAI will continue to be our partner on frontier models and we remain committed to our long-term partnership,” a Microsoft spokesperson told Reuters following the report. The tech giant has poured over $13 billion into OpenAI to date and is currently negotiating terms for continued access to OpenAI’s models as part of a broader revision of the companies’ partnership.

Stretching back to 2019, Microsoft’s tight partnership with OpenAI until recently gave the tech giant a head start in AI assistants based on language models, allowing the rapid (though bumpy) deployment of features based on OpenAI technology in Bing search and the rollout of Copilot assistants throughout its software ecosystem. It’s worth noting, however, that a recent report from the UK government found no clear productivity boost from using Copilot AI in daily work tasks among study participants.


Why accessibility might be AI’s biggest breakthrough

For those with visual impairments, language models can summarize visual content and reformat information. Tools like ChatGPT’s voice mode with video and Be My Eyes allow a machine to describe real-world visual scenes in ways that were impossible just a few years ago.

AI language tools may be providing unofficial stealth accommodations for students—support that doesn’t require formal diagnosis, workplace disclosure, or special equipment. Yet this informal support system comes with its own risks. Language models do confabulate—the UK Department for Business and Trade study found 22 percent of users identified false information in AI outputs—which could be particularly harmful for users relying on them for essential support.

When AI assistance becomes dependence

Beyond the workplace, the drawbacks may have a particular impact on students who use the technology. The authors of a 2025 study on students with disabilities using generative AI cautioned: “Key concerns students with disabilities had included the inaccuracy of AI answers, risks to academic integrity, and subscription cost barriers.” Students in that study had ADHD, dyslexia, dyspraxia, and autism, with ChatGPT being the most commonly used tool.

Mistakes in AI outputs are especially pernicious because, thanks to grandiose visions of near-term AI technology, some people believe today’s AI assistants can perform tasks far outside their actual scope. As research on blind users’ experiences suggested, people develop complex (sometimes flawed) mental models of how these tools work, underscoring the need for greater public awareness of the limitations of AI language models.

For the UK government employees who participated in the initial study, these questions moved from theoretical to immediate when the pilot ended in December 2024. After that time, many participants reported difficulty readjusting to work without AI assistance—particularly those with disabilities who had come to rely on the accessibility benefits. The department hasn’t announced the next steps, leaving users in limbo. When participants report difficulty readjusting to work without AI while productivity gains remain marginal, accessibility emerges as potentially the first AI application with irreplaceable value.


Software packages with more than 2 billion weekly downloads hit in supply-chain attack

Hackers planted malicious code in open source software packages with more than 2 billion weekly downloads in what may be the biggest supply-chain attack ever.

The attack, which compromised nearly two dozen packages hosted on the npm repository, came to public notice on Monday in social media posts. Around the same time, Josh Junon, a maintainer or co-maintainer of the affected packages, said he had been “pwned” after falling for an email that claimed his account on the platform would be closed unless he logged in to a site and updated his two-factor authentication credentials.

Defeating 2FA the easy way

“Sorry everyone, I should have paid more attention,” Junon, who uses the moniker Qix, wrote. “Not like me; have had a stressful week. Will work to get this cleaned up.”

The unknown attackers behind the account compromise wasted no time capitalizing on it. Within an hour, dozens of open source packages Junon oversees had received updates that added malicious code for diverting cryptocurrency payments to attacker-controlled wallets. Spanning more than 280 lines, the added code worked by monitoring infected systems for cryptocurrency transactions and changing the addresses of wallets receiving payments to ones controlled by the attacker.
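
The actual payload was JavaScript, but the core trick is simple enough to sketch: pattern-match anything resembling a wallet address in data flowing through the application and silently rewrite it. The Python below illustrates the general technique; the patterns and addresses are hypothetical, not taken from the malware.

```python
import re

# Hypothetical patterns -- the real payload targeted multiple chains and
# chose visually similar attacker addresses to evade casual inspection.
WALLET_PATTERNS = {
    "ethereum": re.compile(r"\b0x[a-fA-F0-9]{40}\b"),
    "bitcoin": re.compile(r"\b(?:bc1|[13])[a-zA-HJ-NP-Z0-9]{25,39}\b"),
}
ATTACKER_ADDRESSES = {  # hypothetical attacker-controlled wallets
    "ethereum": "0x" + "ab" * 20,
    "bitcoin": "1AttackerExampleAddr" + "x" * 14,
}

def rewrite_wallets(payload: str) -> str:
    """Swap any detected wallet address for an attacker-controlled one."""
    for chain, pattern in WALLET_PATTERNS.items():
        payload = pattern.sub(ATTACKER_ADDRESSES[chain], payload)
    return payload

# A transaction request leaving the app gets its recipient replaced:
print(rewrite_wallets('{"to": "0x52908400098527886E0F7030069857D2E4169EE7"}'))
```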

The packages that were compromised, which at last count numbered 20, included some of the most foundational code driving the JavaScript ecosystem. They are used directly and also have thousands of dependents—other npm packages that won’t work unless they are also installed. (npm is the official code repository for JavaScript files.)

“The overlap with such high-profile projects significantly increases the blast radius of this incident,” researchers from security firm Socket said. “By compromising Qix, the attackers gained the ability to push malicious versions of packages that are indirectly depended on by countless applications, libraries, and frameworks.”

The researchers added: “Given the scope and the selection of packages impacted, this appears to be a targeted attack designed to maximize reach across the ecosystem.”

The email message Junon fell for came from an email address at support.npmjs.help, a domain created three days ago to mimic npmjs.com, the official domain used by npm. It said Junon’s account would be closed unless he updated information related to his 2FA—which requires users to present a physical security key or supply a one-time passcode from an authenticator app, in addition to a password, when logging in.
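
The lure also shows why a quick glance at a domain can deceive: support.npmjs.help is a subdomain of the attacker-registered npmjs.help, not of npmjs.com. The Python sketch below makes the distinction explicit; it deliberately uses a crude last-two-labels heuristic, whereas real tooling consults the Public Suffix List.

```python
def registered_domain(host: str) -> str:
    """Crude approximation of the registrable domain: the last two
    labels of the hostname. Real tooling uses the Public Suffix List,
    since this heuristic breaks on suffixes like co.uk."""
    return ".".join(host.lower().rstrip(".").split(".")[-2:])

for host in ("registry.npmjs.com", "support.npmjs.help"):
    domain = registered_domain(host)
    verdict = "official npm domain" if domain == "npmjs.com" else "look-alike"
    print(f"{host} -> {domain} ({verdict})")
```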


Former WhatsApp security boss in lawsuit likens Meta’s culture to a “cult”

“This represented the first concrete step toward addressing WhatsApp’s fundamental data governance failures,” the complaint stated. “Mr. Baig understood that Meta’s culture is like that of a cult where one cannot question any of the past work especially when it was approved by someone at a higher level than the individual who is raising the concern.” In the following years, Baig continued to press increasingly senior leaders to take action.

The letter outlined not only the improper access engineers had to WhatsApp user data but a variety of other shortcomings as well: a “failure to inventory user data” as required under privacy laws in California and the European Union and under the FTC settlement; a failure to locate where data was stored; an absence of systems for monitoring user data access; and an inability to detect data breaches with capabilities that were standard at other companies.

Last year, Baig allegedly sent a “detailed letter” to Meta CEO Mark Zuckerberg and Jennifer Newstead, Meta’s general counsel, notifying them of what he said were violations of the FTC settlement and of Securities and Exchange Commission rules mandating the reporting of security vulnerabilities. The letter further alleged that Meta leaders were retaliating against him and that the central Meta security team had “falsified security reports to cover up decisions not to remediate data exfiltration risks.”

The lawsuit, alleging violations of the whistleblower protection provision of the Sarbanes-Oxley Act passed in 2002, said that in 2022, roughly 100,000 WhatsApp users had their accounts hacked every day. By last year, the complaint alleged, as many as 400,000 WhatsApp users were getting locked out of their accounts each day as a result of such account takeovers.

Baig also allegedly notified superiors that data scraping on the platform was a problem because WhatsApp failed to implement protections that are standard on other messaging platforms, such as Signal and Apple Messages. As a result, the former WhatsApp security head estimated that pictures and names of some 400 million user profiles were improperly copied every day, often for use in account impersonation scams.


ChatGPT’s new branching feature is a good reminder that AI chatbots aren’t people

On Thursday, OpenAI announced that ChatGPT users can now branch conversations into multiple parallel threads, serving as a useful reminder that AI chatbots aren’t people with fixed viewpoints but rather malleable tools you can rewind and redirect. The company released the feature for all logged-in web users following years of user requests for the capability.

The feature works by letting users hover over any message in a ChatGPT conversation, click “More actions,” and select “Branch in new chat.” This creates a new conversation thread that includes all the conversation history up to that specific point, while preserving the original conversation intact.

Think of it almost like creating a new copy of a “document” to edit while keeping the original version safe—except that “document” is an ongoing AI conversation with all its accumulated context. For example, a marketing team brainstorming ad copy can now create separate branches to test a formal tone, a humorous approach, or an entirely different strategy—all stemming from the same initial setup.
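
Conceptually—and this is only an illustration, not OpenAI’s implementation—a branch amounts to forking an append-only list of messages at a chosen index:

```python
from dataclasses import dataclass, field

@dataclass
class Conversation:
    """Toy model of a chat thread: an ordered list of messages."""
    messages: list = field(default_factory=list)

    def branch_at(self, index: int) -> "Conversation":
        # Copy history up to and including `index`; the original thread
        # is left untouched, like ChatGPT's "Branch in new chat."
        return Conversation(messages=self.messages[: index + 1])

main = Conversation(["user: brainstorm ad copy", "assistant: three ideas..."])
formal = main.branch_at(1)            # new thread sharing the setup so far
formal.messages.append("user: now make the tone more formal")
assert len(main.messages) == 2        # original conversation preserved
```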

A screenshot of conversation branching in ChatGPT. Credit: OpenAI

The feature addresses a longstanding limitation of the ChatGPT interface: users who wanted to try different approaches previously had to either overwrite their existing conversation from a certain point onward by editing an earlier prompt or start completely fresh. Branching allows exploring what-if scenarios easily—and unlike in a human conversation, you can try multiple different approaches.

A 2024 study conducted by researchers from Tsinghua University and Beijing Institute of Technology suggested that linear dialogue interfaces for LLMs poorly serve scenarios involving “multiple layers, and many subtasks—such as brainstorming, structured knowledge learning, and large project analysis.” The study found that linear interaction forces users to “repeatedly compare, modify, and copy previous content,” increasing cognitive load and reducing efficiency.

Some software developers have already responded positively to the update, with some comparing the feature to Git, the version control system that lets programmers create separate branches of code to test changes without affecting the main codebase. The comparison makes sense: Both allow you to experiment with different approaches while preserving your original work.


The number of mis-issued 1.1.1.1 certificates grows. Here’s the latest.

Cloudflare on Thursday acknowledged this failure, writing:

We failed three times. The first time because 1.1.1.1 is an IP certificate and our system failed to alert on these. The second time because even if we were to receive certificate issuance alerts, as any of our customers can, we did not implement sufficient filtering. With the sheer number of names and issuances we manage it has not been possible for us to keep up with manual reviews. Finally, because of this noisy monitoring, we did not enable alerting for all of our domains. We are addressing all three shortcomings.

Ultimately, the fault lies with Fina; however, given the fragility of the TLS PKI, it’s incumbent on all stakeholders to ensure system requirements are being met.
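
Certificate Transparency logs are public, so the kind of issuance monitoring Cloudflare says it fumbled can be prototyped in a few lines. The sketch below polls crt.sh’s JSON search endpoint; the expected-issuer list is illustrative, and a production monitor would add deduplication, rate limiting, and smarter filtering.

```python
import json
import urllib.request

def certificates_for(name: str) -> list:
    """Query crt.sh's public CT search for certificates matching a name."""
    url = f"https://crt.sh/?q={name}&output=json"
    with urllib.request.urlopen(url, timeout=30) as resp:
        return json.load(resp)

# Alert on issuers the domain owner never requested certificates from --
# the signal that would have flagged Fina's 1.1.1.1 certificates.
EXPECTED_ISSUERS = ("DigiCert", "Google Trust Services", "Let's Encrypt")

for cert in certificates_for("1.1.1.1"):
    issuer = cert.get("issuer_name", "")
    if not any(known in issuer for known in EXPECTED_ISSUERS):
        print("unexpected issuer:", issuer, cert.get("not_before"))
```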

And what about Microsoft? Is it at fault, too?

There’s some controversy on this point, as I quickly learned on Wednesday from social media and Ars reader comments. Critics of Microsoft’s handling of this case say that, among other things, its responsibility for ensuring the security of its Root Certificate Program includes checking the transparency logs. Had it done so, critics said, the company would have found the certificates Fina issued for 1.1.1.1 and looked further into the matter.

Additionally, at least some of the certificates had non-compliant encoding and listed domain names with non-existent top-level domains. This certificate, for example, lists ssltest5 as its common name.

Instead, like the rest of the world, Microsoft learned of the certificates from an online discussion forum.

Some TLS experts I spoke to said it’s not within the scope of a root program to do continuous monitoring for these types of problems.

In any event, Microsoft said it’s in the process of adding all of the mis-issued certificates to a disallow list.

Microsoft has also faced long-standing criticism that it’s too lenient in the requirements it imposes on CAs included in its Root Certificate Program. In fact, Microsoft and one other entity, the EU Trust Service, are the only ones that, by default, trust Fina. Google, Apple, and Mozilla don’t.

“The story here is less the 1.1.1.1 certificate and more why Microsoft trusts this carelessly operated CA,” Filippo Valsorda, a Web/PKI expert, said in an interview.

I asked Microsoft about all of this and have yet to receive a response.


Microsoft open-sources Bill Gates’ 6502 BASIC from 1978

On Wednesday, Microsoft released the complete source code for Microsoft BASIC for 6502 Version 1.1, the 1978 interpreter that powered the Commodore PET, VIC-20, Commodore 64, and Apple II through custom adaptations. The company posted 6,955 lines of assembly language code to GitHub under an MIT license, allowing anyone to freely use, modify, and distribute the code that helped launch the personal computer revolution.

“Rick Weiland and I (Bill Gates) wrote the 6502 BASIC,” Gates commented on the Page Table blog in 2010. “I put the WAIT command in.”

For millions of people in the late 1970s and early 1980s, variations of Microsoft’s BASIC interpreter provided their first experience with programming. Users could type simple commands like “10 PRINT ‘HELLO’” and “20 GOTO 10” to create an endless loop of text on their screens—often their first taste of controlling a computer directly. The interpreter translated these human-readable commands into instructions that the processor could execute, one line at a time.
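
The mechanics are easy to mimic in a few lines of modern code. The toy Python interpreter below is purely illustrative—nothing like Microsoft’s assembly—but it captures the line-numbered execution model of a BASIC program supporting just PRINT and GOTO:

```python
# Toy line-numbered interpreter in the spirit of 1970s BASIC.
# Runs: 10 PRINT "HELLO" / 20 GOTO 10 (capped so the demo terminates).
program = {
    10: ("PRINT", "HELLO"),
    20: ("GOTO", 10),
}

line, steps = min(program), 0
while line in program and steps < 6:
    op, arg = program[line]
    if op == "PRINT":
        print(arg)
        # Fall through to the next-highest line number, as BASIC does.
        line = min((n for n in program if n > line), default=-1)
    elif op == "GOTO":
        line = arg
    steps += 1
```

A real interpreter of the era did the same job—fetch the statement at the current line, execute it, pick the next line—only in hand-tuned 6502 assembly with a few kilobytes to spare.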

The Commodore PET (Personal Electronic Transactor), released in January 1977, used the MOS 6502 and ran a variation of Microsoft BASIC. Credit: SSPL/Getty Images

At just 6,955 lines of assembly language, Microsoft’s low-level 6502 code talked almost directly to the processor, squeezing remarkable functionality into minimal memory—a key achievement when RAM cost hundreds of dollars per kilobyte.

In the early personal computer space, cost was king. The MOS 6502 processor that ran this BASIC cost about $25, while competitors charged $200 for similar chips. Designer Chuck Peddle created the 6502 specifically to bring computing to the masses, and manufacturers built variations of the chip into the Atari 2600, Nintendo Entertainment System, and millions of Commodore computers.

The deal that got away

In 1977, Commodore licensed Microsoft’s 6502 BASIC for a flat fee of $25,000. Jack Tramiel’s company got perpetual rights to ship the software in unlimited machines—no royalties, no per-unit fees. While $25,000 seemed substantial then, Commodore went on to sell millions of computers with Microsoft BASIC inside. Had Microsoft negotiated a per-unit licensing fee, as it did with later products, the deal could have generated tens of millions of dollars in revenue.

The version Microsoft released—labeled 1.1—contains bug fixes that Commodore engineer John Feagans and Bill Gates jointly implemented in 1978 when Feagans traveled to Microsoft’s Bellevue offices. The code includes memory management improvements (called “garbage collection” in programming terms) and shipped as “BASIC V2” on the Commodore PET.


New AI model turns photos into explorable 3D worlds, with caveats

Training with an automated data pipeline

Voyager builds on Tencent’s earlier HunyuanWorld 1.0, released in July. Voyager is also part of Tencent’s broader “Hunyuan” ecosystem, which includes the Hunyuan3D-2 model for text-to-3D generation and the previously covered HunyuanVideo for video synthesis.

To train Voyager, researchers developed software that automatically analyzes existing videos to estimate camera movements and calculate depth for every frame—eliminating the need for humans to manually label thousands of hours of footage. The system processed over 100,000 video clips from both real-world recordings and Unreal Engine renders.

A diagram of the Voyager world creation pipeline. Credit: Tencent

The model demands serious computing power to run, requiring at least 60GB of GPU memory for 540p resolution, though Tencent recommends 80GB for better results. Tencent published the model weights on Hugging Face and included code that works with both single and multi-GPU setups.

The model comes with notable licensing restrictions. Like other Hunyuan models from Tencent, the license prohibits usage in the European Union, the United Kingdom, and South Korea. Additionally, commercial deployments serving over 100 million monthly active users require separate licensing from Tencent.

On the WorldScore benchmark developed by Stanford University researchers, Voyager reportedly achieved the highest overall score of 77.62, compared to 72.69 for WonderWorld and 62.15 for CogVideoX-I2V. The model reportedly excelled in object control (66.92), style consistency (84.89), and subjective quality (71.09), though it placed second in camera control (85.95) behind WonderWorld’s 92.98. WorldScore evaluates world generation approaches across multiple criteria, including 3D consistency and content alignment.

While these self-reported benchmark results seem promising, wider deployment still faces challenges due to the computational muscle involved. For developers needing faster processing, the system supports parallel inference across multiple GPUs using the xDiT framework. Running on eight GPUs delivers processing speeds 6.69 times faster than single-GPU setups.

Given the processing power required and the limitations in generating long, coherent “worlds,” it may be a while before we see real-time interactive experiences using a similar technique. But as we’ve seen so far with experiments like Google’s Genie, we’re potentially witnessing very early steps into a new interactive, generative art form.


OpenAI announces parental controls for ChatGPT after teen suicide lawsuit

On Tuesday, OpenAI announced plans to roll out parental controls for ChatGPT and route sensitive mental health conversations to its simulated reasoning models, following what the company has called “heartbreaking cases” of users experiencing crises while using the AI assistant. The moves come after multiple reported incidents where ChatGPT allegedly failed to intervene appropriately when users expressed suicidal thoughts or experienced mental health episodes.

“This work has already been underway, but we want to proactively preview our plans for the next 120 days, so you won’t need to wait for launches to see where we’re headed,” OpenAI wrote in a blog post published Tuesday. “The work will continue well beyond this period of time, but we’re making a focused effort to launch as many of these improvements as possible this year.”

The planned parental controls represent OpenAI’s most concrete response to concerns about teen safety on the platform so far. Within the next month, OpenAI says, parents will be able to link their accounts with their teens’ ChatGPT accounts (minimum age 13) through email invitations, control how the AI model responds with age-appropriate behavior rules that are on by default, manage which features to disable (including memory and chat history), and receive notifications when the system detects their teen experiencing acute distress.

The parental controls build on existing features like in-app reminders during long sessions that encourage users to take breaks, which OpenAI rolled out for all users in August.

High-profile cases prompt safety changes

OpenAI’s new safety initiative arrives after several high-profile cases drew scrutiny to ChatGPT’s handling of vulnerable users. In August, Matt and Maria Raine filed suit against OpenAI after their 16-year-old son Adam died by suicide following extensive ChatGPT interactions that included 377 messages flagged for self-harm content. According to court documents, ChatGPT mentioned suicide 1,275 times in conversations with Adam—six times more often than the teen himself. Last week, The Wall Street Journal reported that a 56-year-old man killed his mother and himself after ChatGPT reinforced his paranoid delusions rather than challenging them.

To guide these safety improvements, OpenAI is working with what it calls an Expert Council on Well-Being and AI to “shape a clear, evidence-based vision for how AI can support people’s well-being,” according to the company’s blog post. The council will help define and measure well-being, set priorities, and design future safeguards including the parental controls.
