
Windows 11’s Steam Deck-ish, streamlined Xbox gaming UI comes to all PCs in April

When Asus and Microsoft launched the ROG Xbox Ally X last summer, it came with a bespoke controller-driven full-screen interface running on top of Windows 11. The handheld was still running Windows under the hood, and you could bring up the typical Windows desktop any time, but it defaulted to the full-screen gaming UI.

Then called either the “Xbox Experience for Handheld” or the “Xbox Full-Screen Experience (FSE),” depending on who you asked and when, the interface was one Microsoft said would come to all Windows PCs at some point in 2026. That point has apparently arrived: Microsoft announced this week at the Game Developers Conference that other Windows 11 PCs “in select markets” would be getting what’s now being called “Xbox mode” starting in April.

Under the hood, a PC running in Xbox mode is still running regular-old Windows, with the same capabilities as any other PC. But there are system services and UI elements (like the standard Start menu and taskbar) that don’t launch when the system is in Xbox mode, something Microsoft claims can save a gigabyte or two of RAM while also allowing systems to use less energy. Users can return to Windows’ traditional desktop mode whenever they want, though.

Our experience with Xbox mode on the ROG Xbox Ally X was mixed; a Windows PC in Xbox mode is still a Windows PC, with both the broad game/app compatibility and the messiness that entails.

The seams between the controller-friendly interface and the mouse-and-keyboard version of Windows were the most visible when trying to download and launch games from third-party game stores like Steam and the Epic Games Store, which generally required you to use those store apps to buy and download games before they could be launched from the comfort of Xbox mode. We’ll have to test the update on other PCs after it rolls out to see whether Microsoft has made substantial improvements.


MS exec: Microsoft’s next console will play “Xbox and PC games”

Last summer, we here at Ars made the argument that the company’s next Xbox console should give up the walled garden approach and just run Windows already. Now, newly named Microsoft Executive Vice President for Gaming Asha Sharma has strongly hinted that this is indeed the direction Microsoft is going, saying its next-generation console will “play your Xbox and PC games.”

In a social media post Thursday afternoon, Sharma said that “our commitment to the return of Xbox” would include a new console codenamed Project Helix that “will lead in performance and play your Xbox and PC games.” Sharma said she would be discussing that commitment and that console itself with developers and partners at her first Game Developers Conference next week.

Sharma’s statement leaves a little wiggle room for Project Helix to be something other than a full-fledged Windows-based living room gaming box. The coming console’s access to PC games could be limited to Microsoft’s existing streaming solution via PC Game Pass, for instance, or to games designed for Microsoft’s own Xbox-branded PC SDK and the PC Xbox app.

Still, a plain reading of Sharma’s statement suggests that Microsoft is getting ready to open up its next console to a complete Windows installation, with the ability to play tens of thousands of existing PC games. That doesn’t come as a complete shock, considering that Microsoft already used the Xbox name for last year’s Windows-based ROG Xbox Ally (and its somewhat console-esque full-screen “Xbox Experience”). Microsoft has also been slowly reducing the number of games that are fully exclusive to Xbox consoles, lowering the value of a walled-off console platform (Sony, meanwhile, pulled back this week from its recent trend of releasing first-party titles on PC as well). Meanwhile, Valve’s coming Steam Machine is threatening to bring Windows-free PC gaming to living rooms everywhere in the near future.


New Microsoft gaming chief has “no tolerance for bad AI”

A gaming education

Unlike Spencer, who spent years at Microsoft Game Studios before heading Microsoft’s gaming division, Sharma has no professional experience in the video game industry. And her personal experience with Xbox also seems somewhat limited; after sharing her Gamertag on social media over the weekend, curious gamers found that her Xbox play history dates back roughly one month. That’s also in stark contrast to Spencer, who has amassed a Gamerscore of over 121,000 across decades of play.

In her interview with Variety, Sharma cited 2016’s Firewatch as an example of the kinds of games with “deep emotional resonance” and “a distinct point of view” that she’s looking for from Microsoft. And on social media, Sharma shared her list of the three greatest games ever: “Halo, Valheim, Goldeneye,” for what it’s worth. Sharma also seems to be taking recommendations for games to catch up on; after saying on social media that she would try Borderlands 2, the game appeared in her recently played games over the weekend.


A look at some of Sharma’s recently played Xbox games, as of this writing. Credit: Xbox.com

Being a personal fan of video games isn’t necessarily required to succeed in running a gaming company. Nintendo President Hiroshi Yamauchi famously didn’t care for video games even as he launched the Famicom and Nintendo Entertainment System to worldwide success in the 1980s. Still, the lack of direct experience with the gaming world marks a sharp change after Spencer’s long tenure at a time when Microsoft is struggling to redefine the Xbox brand amid cratering hardware sales, a pivot away from software exclusives, and a move to extend the Xbox brand to many different devices.

Xbox President and COO Sarah Bond, who by all accounts was being set up to succeed Spencer, also announced her departure from Microsoft on Friday, ending a nearly nine-year stint as a public face for the company’s gaming efforts. The Verge reports that Bond caused a lot of friction within the Xbox team when she championed the “Xbox Everywhere” strategy and “This is an Xbox” marketing campaign, which focused on streaming Xbox games to hardware like mobile phones and tablets, according to anonymous sources. Shortly before the launch of that campaign in 2024, Microsoft lost marketing executives Jerrett West and Kareem Choudry, leading to significant internal reorganization.

Longtime Xbox Game Studios executive Matt Booty, whose history in the game industry dates back to working for Williams Electronics in the ’90s, has been promoted to executive vice president and chief content officer for Xbox and “will continue working closely with [Sharma] to ensure a smooth transition,” Microsoft said in its announcement Friday.


Microsoft deletes blog telling users to train AI on pirated Harry Potter books


Wizarding world of AI slop

The now-deleted Harry Potter dataset was “mistakenly” marked public domain.

Following backlash in a Hacker News thread, Microsoft deleted a blog post that critics said encouraged developers to pirate Harry Potter books to train AI models that could then be used to create AI slop.

The blog, which is archived here, was written in November 2024 by a senior product manager, Pooja Kamath. According to her LinkedIn, Kamath has been at Microsoft for more than a decade and remains with the company. In 2024, Microsoft tapped her to promote a new feature that the blog said made it easier to “add generative AI features to your own applications with just a few lines of code using Azure SQL DB, LangChain, and LLMs.”

What better way to show “engaging and relatable examples” of Microsoft’s new feature that would “resonate with a wide audience” than to “use a well-known dataset” like Harry Potter books, the blog said.

The books are “one of the most famous and cherished series in literary history,” the blog noted, and fans could use the LLMs they trained in two fun ways: building Q&A systems providing “context-rich answers” and generating “new AI-driven Harry Potter fan fiction” that’s “sure to delight Potterheads.”

To help Microsoft customers achieve this vision, the blog linked to a Kaggle dataset that included all seven Harry Potter books, which, Ars verified, has been available online for years and incorrectly marked as “public domain.” Kaggle’s terms say that rights holders can send notices of infringing content, and repeat offenders risk suspensions, but Hacker News commenters speculated that the Harry Potter dataset, with only 10,000 downloads over time, flew under the radar and never caught the attention of J.K. Rowling, who famously keeps a strong grip on the Harry Potter copyrights. The dataset was promptly deleted on Thursday after Ars reached out to the uploader, Shubham Maindola, a data scientist in India with no apparent links to Microsoft.

Maindola told Ars that “the dataset was marked as Public Domain by mistake. There was no intention to misrepresent the licensing status of the works.”

It’s unclear whether Kamath was directed to link to the Harry Potter books dataset in the blog or if it was an individual choice. Cathay Y. N. Smith, a law professor and co-director of Chicago-Kent College of Law’s Program in Intellectual Property Law, told Ars that Kamath may not have realized the books were too recent to be in the public domain.

“Someone might be really knowledgeable about books and technology, but not necessarily about copyright terms and how long they last,” Smith said. “Especially if she saw that something was marked by another reputable company as being public domain.”

Microsoft declined Ars’ request to comment. Kaggle did not respond to Ars’ request to comment.

Microsoft was “probably smart” to pull the blog

On Hacker News, commenters suggested that it’s unlikely anyone familiar with the popular franchise would believe the Harry Potter books were in the public domain. They debated whether Microsoft’s blog was “problematic copyright-wise,” since Microsoft not only encouraged customers to download the infringing materials but also used the books themselves to create Harry Potter AI models that relied on beloved characters to hype Microsoft products.

Microsoft’s blog was posted more than a year ago, at a time when AI firms began facing lawsuits over AI models, which had allegedly infringed copyrights by training on pirated materials and regurgitating works verbatim.

The blog recommended that users learn to train their own AI models by downloading the Harry Potter dataset and then uploading text files to Azure Blob Storage. It included example models based on a dataset that Microsoft seemingly uploaded to Azure Blob Storage, which only included the first book, Harry Potter and the Sorcerer’s Stone.

By running large language models (LLMs) over the text files, Harry Potter fans could create Q&A systems capable of pulling up relevant excerpts from the books. An example query offered was “Wizarding World snacks,” which retrieved an excerpt from The Sorcerer’s Stone where Harry marvels at strange treats like Bertie Bott’s Every Flavor Beans and chocolate frogs. Another prompt asking “How did Harry feel when he first learnt that he was a Wizard?” generated an output pointing to various early excerpts in the book.

But perhaps an even more exciting use case, Kamath suggested, was generating fan fiction to “explore new adventures” and “even create alternate endings.” That model could quickly comb the dataset for “contextually similar” excerpts that could be used to output fresh stories that fit with existing narratives and incorporate “elements from the retrieved passages,” the blog said.

As an example, Kamath trained a model to write a Harry Potter story she could use to market the feature she was blogging about. She asked the model to write a story in which Harry meets a new friend on the Hogwarts Express train who tells him all about Microsoft’s Native Vector Support in SQL “in the Muggle world.”

Drawing on parts of The Sorcerer’s Stone where Harry learns about Quidditch and gets to know Hermione Granger, the fan fiction showed a boy selling Harry on Microsoft’s “amazing” new feature. To do this, he likened it to having a spell that helps you find exactly what you need among thousands of options, instantly, while declaring it was perfect for machine learning, AI, and recommendation systems.

Further blurring the lines between Microsoft and Harry Potter brands, Kamath also generated an image showing Harry with his new friend, stamped with a Microsoft logo.

Smith told Ars that both use cases could frustrate rights holders, depending on the content in the model outputs.

“I think that the regurgitation and the creation of fan fiction, they both could flag copyright issues, in that fan fiction often has to take from the expressive elements, a copyrighted character, a character that’s famous enough to be protected by a copyright law or plot stories or sequences,” Smith said. “If these things are copied and reproduced, then that output could be potentially infringing.”

But it’s also still a gray area. Looking at the blog, Smith said, “I would be concerned,” but “I wouldn’t say it’s automatically infringement.”

Smith told Ars that, in pulling the blog, Microsoft “was probably smart,” since courts have so far held only in general terms that training AI on copyrighted books can be fair use. But courts continue to probe questions about pirated AI training materials.

On the deleted Kaggle dataset page, Maindola previously explained that to source the data, he “downloaded the ebooks and then converted them to txt files.”

Microsoft may have infringed copyrights

If Microsoft ever faced questions as to whether the company knowingly used pirated books to train the example models, fair use “could be a difficult argument,” Smith said.

Hacker News commenters suggested the blog could be considered fair use, since the training guide was for “educational purposes,” and Smith said that Microsoft could raise some “good arguments” in its defense.

However, she also suggested that Microsoft could be deemed liable for contributing to infringement on some level after leaving the blog up for a year. Before it was removed, the Kaggle dataset was downloaded more than 10,000 times.

“The ultimate result is to create something infringing by saying, ‘Hey, here you go, go grab that infringing stuff and use that in our system,’” Smith said. “They could potentially have some sort of secondary contributory liability for copyright infringement, downloading it, as well as then using it to encourage others to use it for training purposes.”

On Hacker News, commenters slammed the blog, including a self-described former Microsoft employee who claimed that Microsoft lets employees “blog without having to go through some approval or editing process.”

“It looks like somebody made a bad judgment call on what to put in a company blog post (and maybe what constitutes ethical activity) and that it was taken down as soon as someone noticed,” the former employee said.

Others suggested the blame was solely with the Kaggle uploader, Maindola, who told Ars that the dataset should never have been marked “public domain.” But Microsoft critics pushed back, noting that the Kaggle page made it clear that no special permission was granted and that Microsoft’s employee should have known better. “They don’t need to know any details to know that these properties belong to massive companies and aren’t free for the taking,” one commenter said.

The Harry Potter books weren’t the only books targeted, the thread noted, linking to a separate Azure sample containing Isaac Asimov’s Foundation series, which is also not in the public domain.

“Microsoft could have used any dataset for their blog, they could have even chosen to use actual public domain novels,” another Hacker News commenter wrote. “Instead, they opted to use copywritten works that J.K. hasn’t released into the public domain (unless user ‘Shubham Maindola’ is J.K.’s alter ego).”

Smith suggested Microsoft could have avoided this week’s backlash by more carefully reviewing blogs, noting that “if a company is risk averse, this would probably be flagged.” But she also understood Kamath’s preference for Harry Potter over the many long-forgotten characters that exist in the public domain. On Hacker News, some commenters defended Kamath’s blog, urging that it should be considered fair use since nonprofits and educational institutions could do the same thing in a teaching context without issue.

“I would have been concerned if I were the one clearing this for Microsoft, but at the same time, I completely understand what this employee was doing,” Smith said. “No one wants to write fan fiction about books that are in the public domain.”


Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.


Microsoft gaming chief Phil Spencer steps down after 38 years with company

Microsoft Executive Vice President for Gaming Phil Spencer announced he will retire after 38 years at Microsoft and 12 years leading the company’s video game efforts. Asha Sharma, an executive currently in charge of Microsoft’s CoreAI division, will take his place.

Xbox President Sarah Bond, who many assumed was being groomed as Spencer’s eventual replacement, is also resigning from the company. Current Xbox Studios Head Matt Booty, meanwhile, is being promoted to Executive Vice President and Chief Content Officer and will work closely with Sharma.

In his departure note, Spencer said he told Microsoft CEO Satya Nadella last fall that he was “thinking about stepping back and starting the next chapter of my life.” Spencer will remain at Microsoft “in an advisory role” through the summer to help Sharma during the transition, he wrote.

Spencer, who got his start at Microsoft as an intern in 1988, became a manager and executive at Microsoft Game Studios in 2003. In 2014, he took over as Head of Xbox, guiding the company through the aftermath of the troubled, Kinect-bundled launch of the Xbox One. More recently, he helped shepherd the company’s 2020 purchase of Bethesda Softworks and its $68.7 billion merger with Activision Blizzard, including the many regulatory battles that followed that latter announcement.

Meet the new boss

Sharma, who joined Microsoft just two years ago after stints at Meta and Instacart, promised in an introductory message to preside over “the return of Xbox,” and a “recommit[ment] to our core fans and players.” That commitment would “start with console which has shaped who we are,” but expand “across PC, mobile, and cloud,” Sharma wrote.


Microsoft’s new 10,000-year data storage medium: glass


Femtosecond lasers etch data into a very stable medium.

Right now, Silica hardware isn’t quite ready for commercialization. Credit: Microsoft Research

Archival storage poses lots of challenges. We want media that is extremely dense and stable for centuries or more, and, ideally, doesn’t consume any energy when not being accessed. Lots of ideas have floated around—even DNA has been considered—but one of the simplest is to etch data into glass. Many forms of glass are very physically and chemically stable, and it’s relatively easy to etch things into it.

There’s been a lot of preliminary work demonstrating different aspects of a glass-based storage system. But in Wednesday’s issue of Nature, Microsoft Research announced Project Silica, a working demonstration of a system that can read and write data into small slabs of glass with a density of over a gigabit per cubic millimeter.

Writing on glass

We tend to think of glass as fragile, prone to shattering, and capable of flowing downward over centuries, although the last claim is a myth. Glass is a category of material, and a variety of chemicals can form glasses. With the right starting chemical, it’s possible to make a glass that is, as the researchers put it, “thermally and chemically stable and is resistant to moisture ingress, temperature fluctuations and electromagnetic interference.” While it would still need to be handled in a way to minimize damage, glass provides the sort of stability we’d want for long-term storage.

Putting data into glass is as simple as etching it. But that’s been one of the challenges, as etching is typically a slow process. However, femtosecond lasers—lasers that emit pulses lasting only 10⁻¹⁵ seconds and can emit millions of them per second—can significantly cut down write times and allow etching to be focused on a very small area, increasing potential data density.

To read the data back, there are several options. We’ve already had great success using lasers to read data from optical disks, albeit slowly. But anything that can pick up the small features etched into the glass could conceivably work.

With the above considerations in mind, everything was in place on a theoretical level for Project Silica. The big question was how to put these pieces together into a functional system. Microsoft decided that, just to be cautious, it would answer that question twice.

A real-world system

The difference between these two answers comes down to how an individual unit of data (called a voxel) is written to the glass. One type of voxel they tried was based on birefringence, where refraction of photons depends on their polarization. It’s possible to etch voxels into glass to create birefringence using polarized laser light, producing features smaller than the diffraction limit. In practice, this involved using one laser pulse to create an oval-shaped void, followed by a second, polarized pulse to induce birefringence. The identity of a voxel is based on the orientation of the oval; since we can resolve multiple orientations, it’s possible to save more than one bit in each voxel.
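The capacity gain from multi-state voxels follows directly from information theory: a voxel that can be resolved into N distinguishable orientations (or intensity levels) carries log2(N) bits. A minimal sketch, not Microsoft's code:

```python
import math

def bits_per_voxel(states: int) -> float:
    """Data capacity of one voxel that can be resolved into
    `states` distinguishable orientations or intensity levels."""
    return math.log2(states)

# A binary voxel (2 orientations) stores 1 bit; resolving
# 4 orientations doubles that to 2 bits per voxel.
print(bits_per_voxel(2))  # 1.0
print(bits_per_voxel(4))  # 2.0
```

This is why resolving even a few extra orientations pays off: capacity grows with the logarithm of the number of distinguishable states.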

The alternative approach involves changing the magnitude of refractive effects by varying the amount of energy in the laser pulse. Again, it’s possible to discern more than two states in these voxels, allowing multiple data bits to be stored in each voxel.

The map data from Microsoft Flight Simulator etched onto the Silica storage medium. Credit: Microsoft Research

Reading these in Silica involves using a microscope that can pick up differences in refractive index. (For microscopy geeks, this is a way of saying “they used phase contrast microscopy.”) The microscopy sets the limits on how many layers of voxels can be placed in a single piece of glass. During etching, the layers were separated by enough distance so only a single layer would be in the microscope’s plane of focus at a time. The etching process also incorporates symbols that allow the automated microscope system to position the lens above specific points on the glass. From there, the system slowly changes its focal plane, moving through the stack and capturing images that include different layers of voxels.

To interpret these microscope images, Microsoft used a convolutional neural network that combines data from images that are both in and near the plane of focus for a given layer of voxels. This is effective because the influence of nearby voxels changes how a given voxel appears in a subtle way that the AI system can pick up on if given enough training data.

The final piece of the puzzle is data encoding. The Silica system takes the raw bitstream of the data it’s storing and adds error correction using a low-density parity-check code (the same error correction used in 5G networks). Neighboring bits are then combined to create symbols that take advantage of the voxels’ ability to store more than one bit. Once a stream of symbols is made, it’s ready to be written to glass.
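The symbol-creation step described above can be illustrated with a short hypothetical sketch (the LDPC error-correction stage is omitted): neighboring bits are grouped into small integers, one per voxel.

```python
def pack_bits_to_symbols(bits: list[int], bits_per_symbol: int) -> list[int]:
    """Group neighboring bits into integer symbols, one per voxel.
    Zero-pads the tail if the stream length isn't a multiple of
    bits_per_symbol."""
    symbols = []
    for i in range(0, len(bits), bits_per_symbol):
        chunk = bits[i:i + bits_per_symbol]
        chunk += [0] * (bits_per_symbol - len(chunk))  # pad the tail
        value = 0
        for b in chunk:
            value = (value << 1) | b
        symbols.append(value)
    return symbols

# An 8-bit stream written as 2-bit voxel symbols:
print(pack_bits_to_symbols([1, 0, 1, 1, 0, 0, 1, 0], 2))  # [2, 3, 0, 2]
```

Each resulting symbol maps to one of the voxel states described earlier, so a 4-state voxel absorbs two bits of the (already error-protected) stream.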

Performance

Writing remains a bottleneck in the system, so Microsoft developed hardware that can write a single glass slab with four lasers simultaneously without generating too much heat. That is enough to enable writing at 66 megabits per second, and the team behind the work thinks that it would be possible to add up to a dozen additional lasers. That may be needed, given that it’s possible to store up to 4.84TB in a single slab of glass (the slabs are 12 cm x 12 cm and 0.2 cm thick). That works out to be over 150 hours to fully write a slab.
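That write-time estimate can be checked with back-of-the-envelope arithmetic from the figures above:

```python
capacity_tb = 4.84         # maximum slab capacity, terabytes
write_rate_mbit_s = 66     # write speed, megabits per second

capacity_bits = capacity_tb * 1e12 * 8
seconds = capacity_bits / (write_rate_mbit_s * 1e6)
hours = seconds / 3600
print(round(hours))  # 163 -- i.e., "over 150 hours"
```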

The “up to” aspect of the storage system has to do with the density of data that’s possible with the two different ways of writing data. The method that relies on birefringence requires more optical hardware and only works in high-quality glasses, but can squeeze more voxels into the same volume, and so has a considerably higher data density. The alternative approach can only put a bit over two terabytes into the same slab of glass, but can be done with simpler hardware and can work on any sort of transparent material.

Borosilicate glass offers extreme stability; Microsoft’s accelerated aging experiments suggest the data would be stable for over 10,000 years at room temperature. That led Microsoft to declare, “Our results demonstrate that Silica could become the archival storage solution for the digital age.”

That may be overselling it just a bit. The Square Kilometer Array telescope, for example, is expected to need to archive 700 petabytes of data each year. That would mean over 140,000 glass slabs would be needed to store the data from this one telescope. Even assuming that the write speed could be boosted by adding significantly more lasers, you’d need over 600 Silica machines operating in parallel to keep up. And the Square Kilometer Array is far from the only project generating enormous amounts of data.
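Those figures can likewise be sanity-checked, assuming a dozen extra lasers quadruple the current four-laser write rate:

```python
annual_data_pb = 700       # Square Kilometer Array archive, PB/year
slab_capacity_tb = 4.84    # maximum per-slab capacity
write_rate_mbit_s = 66     # current four-laser write speed
laser_boost = 16 / 4       # assumed: a dozen more lasers, 4x speed

slabs = annual_data_pb * 1e15 / (slab_capacity_tb * 1e12)
print(round(slabs))  # ~144,628 slabs per year

# Sustained bit rate needed over a year, versus one boosted machine:
bits_per_second = annual_data_pb * 1e15 * 8 / (365 * 24 * 3600)
machines = bits_per_second / (write_rate_mbit_s * 1e6 * laser_boost)
print(round(machines))  # ~673 machines, matching "over 600"
```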

That said, there are some features that make Silica a great match for this sort of thing, most notably the complete absence of energy needed to preserve the data, and the fact that it can be retrieved rapidly if needed (a sharp contrast to the days needed to retrieve information from DNA, for example). Plus, I’m admittedly drawn to a system with a storage medium that looks like something right out of science fiction.

Nature, 2026. DOI: 10.1038/s41586-025-10042-w (About DOIs).


John is Ars Technica’s science editor. He has a Bachelor of Arts in Biochemistry from Columbia University, and a Ph.D. in Molecular and Cell Biology from the University of California, Berkeley. When physically separated from his keyboard, he tends to seek out a bicycle, or a scenic location for communing with his hiking boots.


Windows’ original Secure Boot certificates expire in June—here’s what you need to do

The second thing to check is the “default db,” which shows whether the new Secure Boot certificates are baked into your PC’s firmware. If they are, even resetting Secure Boot settings to the defaults in your PC’s BIOS will still allow you to boot operating systems that use the new certificates.

To check this, open PowerShell or Terminal again and type ([System.Text.Encoding]::ASCII.GetString((Get-SecureBootUEFI dbdefault).bytes) -match 'Windows UEFI CA 2023'). If this command returns “true,” your system is running an updated BIOS with the new Secure Boot certificates built in. Older PCs and systems without a BIOS update installed will return “false” here.

Microsoft’s Costa says that “many newer PCs built since 2024, and almost all the devices shipped in 2025, already include the certificates” and won’t need to be updated at all. And PCs several years older than that may be able to get the certificates via a BIOS update.

In the US, Dell, HP, Lenovo, and Microsoft all have lists of specific systems and firmware versions, while Asus provides more general information about how to get the new certificates via Windows Update, the MyAsus app, or the Asus website. The oldest of the PCs listed generally date back to 2019 or 2020. If your PC shipped with Windows 11 out of the box, there should be a BIOS update with the new certificates available, though that may not be true of every system that meets the requirements for upgrading to Windows 11.

Microsoft encourages home users who can’t install the new certificates to use its customer support services for help. Detailed documentation is also available for IT shops and other large organizations that manage their own updates.

“The Secure Boot certificate update marks a generational refresh of the trust foundation that modern PCs rely on at startup,” writes Costa. “By renewing these certificates, the Windows ecosystem is ensuring that future innovations in hardware, firmware, and operating systems can continue to build on a secure, industry‐aligned boot process.”


Why $700 could be a “death sentence” for the Steam Machine

Bad news for Valve in particular?

On the surface, it might seem like every company making gaming hardware would be similarly affected by increasing component costs. In practice, though, analysts suggested that Valve might be in a uniquely bad position to absorb this ongoing market disruption.

Large console makers like Sony and Microsoft “can commit to tens of millions of orders, and have strong negotiating power,” Niko Partners analyst Daniel Ahmad pointed out. The Steam Machine, on the other hand, is “a niche product that cannot benefit in the same way when it comes to procurement,” meaning Valve has to shoulder higher component cost increases.

F-Squared’s Futter echoed that Valve is “not an enormous player in the hardware space, even with the Steam Deck’s success. So they likely don’t have the same kind of priority as a Nintendo, Sony, or Microsoft when it comes to suppliers.”


Sony and Microsoft might have an advantage when negotiating volume discounts with suppliers.

Credit: Sam Machkovech


The size of the Steam Machine price adjustment also might depend on when Valve made its supply chain commitments. “It’s not clear when or if Valve locked in supply contracts for the Steam Machine, or if supply can be diverted from the Steam Deck for the new product,” Tech Insights analyst James Sanders noted. On the other hand, “Sony and Microsoft likely will have locked in more favorable component pricing before the current spike,” Van Dreunen said.

That said, some other aspects of the Steam Machine design could give Valve some greater pricing flexibility. Sanders noted that the Steam Machine’s smaller physical size could mean smaller packaging and reduced shipping costs for Valve. And selling the system primarily through direct sales via the web and Steam itself eliminates the usual retailer markups console makers have to take into account, he added.

“I think Valve was hoping for a much lower price and that the component issue would be short-term,” Cole said. “Obviously it is looking more like a long-term issue.”

Why $700 could be a “death sentence” for the Steam Machine


Neocities founder stuck in chatbot hell after Bing blocked 1.5 million sites


Microsoft won’t explain why Bing blocked 1.5 million Neocities websites.

Credit: Aurich Lawson | NeoCities

One of the weirdest corners of the Internet is suddenly hard to find on Bing, after the search engine inexplicably started blocking approximately 1.5 million independent websites hosted on Neocities.

Founded in 2013 to archive the “aesthetic awesomeness” of GeoCities websites, Neocities keeps the spirit of the 1990s Internet alive. It lets users design free websites without relying on standardized templates devoid of personality. For hundreds of thousands of people building websites around art, niche fandoms, and special expertise—or simply seeking a place to get a little weird online—Neocities provides a blank canvas that can be endlessly personalized in a way no Facebook page ever could. Delighted visitors discovering these sites are more likely to navigate by hovering flashing pointers over a web of spinning GIFs than by clicking a hamburger menu or infinitely scrolling.

That’s the style of Internet that Kyle Drake, Neocities’ founder, strives to maintain. So he was surprised when he noticed Bing curiously blocking Neocities sites last summer. At first, the issue seemed resolved after he contacted Microsoft, but after receiving more recent reports of users struggling to log in, Drake discovered that another complete block had been implemented in January. Even more concerning, after delisting the front page, Bing had started pointing users to a copycat site, where, he was alarmed to learn, visitors were entering their login credentials.

Monitoring stats, Drake was stunned to see that Bing traffic had suddenly dropped from about half a million daily visitors to zero. He immediately reported the issue using Bing webmaster tools. Concerned that Bing was not just disrupting traffic but possibly also putting Neocities users at risk if bad actors were gaming search results, he hoped for a prompt resolution.

“This one site that was just a copy of our front page, I didn’t know if it was a phishing attack or what it was, I was just like, ‘whoa, what the heck?’” Drake told Ars.

However, weeks went by as Drake hit wall after wall, submitting nearly a dozen tickets while trying to get past the Bing chatbot to find a support member to fix the issue. Frustrated, he tried other internal channels as well, including offering to buy ads to see if an ads team member could help.

“I tried everything,” Drake said, but nothing worked. Neocities sites remained unlisted on Bing.

Although Bing only holds about 4.5 percent of the global search engine market, Drake said it was “embarrassing” that Neocities sites can’t be discovered using the default Windows search engine. He also noted that many other search engines license Bing data, further compounding the issue.

Ultimately, it’s affecting a lot of people, Drake said, but he suspects that his support tickets are being buried in probably trillions of requests each day from people wanting to improve their Bing indexing.

“There’s probably an actual human being at Bing that actually could fix this,” Drake told Ars, but “when you go to the webmaster tools,” you’re stuck talking to an AI chatbot, and “it’s all kind of automated.”

Ars reached out to Microsoft for comment, and the company took action to remove some inappropriate blocks.

Within 24 hours, the Neocities front page appeared in search results, but tests Drake ran over the next few days showed that most subdomains were still being blocked, including popular Neocities sites that should garner high rankings.

Pressed to investigate further, Microsoft confirmed that some Neocities sites were delisted for violating policies designed to keep low-quality sites out of search results.

However, Microsoft would not identify which sites were problematic or directly connect with Neocities to resolve what appears to be a significant number of ongoing blocks with no apparent link to violations. Instead, Microsoft recommended that Neocities find a way to work directly with the company, even though Ars confirmed that an open Neocities support ticket is currently being ignored.

For Drake, “the current state of things is unknown.” It’s hard to tell whether popular Neocities sites are still being blocked or whether Bing’s reindexing process is simply slow. Microsoft declined to clarify.

He’s still hoping that Microsoft will eventually resolve all the improper blocks, making it possible for Bing users to use the search engine not just to find businesses or information but also to discover creative people making websites just for fun. With so much AI slop invading social networks and search engines, Drake sees Neocities as “one of the last bastions of human content.”

“I hope we can resolve this amicably for both of us and that this doesn’t happen again in the future,” Drake said. “It’s really important for the future of the small web, and for quality content for web surfers in an increasingly generative AI world, that creative sites made by real humans are able to get a fair shot in search engine results.”

Bing deranked suspected phishing site

After Drake failed to quietly resolve the issue with Bing, he felt that he had no choice but to alert users to the potential risks from Bing’s delisting.

In a blog post in late January, Drake warned that Bing had “completely blocked” all Neocities subdomains from its search index. Even worse, “Bing was also placing what appeared to be a phishing attack against Neocities on the first page of search results,” Drake said.

“This is not only bad for search results, it’s very possible that it is actively dangerous,” Drake said.

After “several” complaints, Bing eventually deranked the suspected phishing site, Drake confirmed. But Bing “declined to reverse the block or provide a clear, actionable explanation for it,” which leaves Neocities users vulnerable, he said.

Since “it’s easy to get higher pagerank than a blocked site,” Drake warned that “it is possibly only a matter of time before another concerning site appears on Bing searches for Neocities.”

The blog emphasized that Google, the platform’s biggest traffic driver, was not blocking Neocities, nor was any search engine unlinked to Bing data. Urging a boycott that might force a resolution, Drake wrote, “we are recommending that Neocities users, and the broader Internet in general, not use Bing or search engines that source their results from Bing until this issue is resolved.”

“If you use Bing or Bing-powered search engines, Neocities sites will not appear in your search results, regardless of content quality, originality, or compliance with webmaster guidelines,” Drake said. “If any Neocities-like sites appear on these results, they may be active phishing attacks against Neocities and should be treated with caution.”

Bing still blocking popular Neocities sites

Drake doesn’t want to boycott Bing, but in his blog, he said that Microsoft left him no choice but public disclosure:

“We did not want to write this post. We try very hard to have a good relationship with search engine providers. We would much rather quietly resolve this issue with Bing staff and move on. But after months of attempting to engage constructively through multiple channels, it became clear that silence only harms our users. Especially those who don’t realize their sites are invisible on some search engines.”

Drake told Ars that he thinks most people don’t realize how big Neocities has gotten since its early days reviving GeoCities’ spunk. The platform hosts 1,459,700 websites that have drawn in 13 billion visitors. Over the years, it has been profiled in Wired and The New York Times, and more recently, it has become a popular hub for gaming communities, Polygon reported.

As Neocities grew, Drake told Ars, much of his focus went to improving content moderation. He works closely with a full-time dedicated content moderation staffer to take down any problematic sites within 24 hours, he said. That effort includes reviewing reports and proactively screening new sites, with Drake noting that “our domain name provider requires us to take them down within 48 hours.”

Microsoft prohibits things like scraping content that could be considered copyright infringement or automatically generating content using “garbage text” to game the rankings. It also monitors for malicious behavior like phishing, as well as for prompt injection attacks on Bing’s large language model.

It’s unclear what kind of violations Microsoft found ahead of instituting the complete block; however, Drake told Ars that he has yet to identify any content that may have triggered it. He said he would promptly remove any websites flagged by Microsoft, if he could only talk to someone who could share that information.

“Naturally, we still don’t catch 100 percent of the sites with proactive moderation, and occasionally some problematic sites do get missed,” Drake said.

Although Drake is curious to learn more about what triggered the blocks, he told Ars that it’s clear that non-violative sites are still invisible on Bing.

One of the longest-running and most popular Neocities sites, Wired Sound for Wired People, is a perfect example. The bizarre, somewhat creepy anime fanpage is “very popular” and “has a lot of links to it all over the web,” Drake said. Yet if you search for its subdomain, “fauux,” the site no longer appears in Bing search results, as of this writing, while Google reliably spits it out as the top result.

Drake said that he still believes that Bing is blocking content by mistake, but Bing’s automated support tools aren’t making it easy to defend creators who are randomly blocked by one of the world’s biggest search engines.

“We have one of the lowest ratios of crap to legitimate content, human-made content, on the Internet,” Drake said. “And it’s really frustrating to see that all these human beings making really cool sites that people want to go to are just not available on the default Windows search engine.”


Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.



Microsoft releases urgent Office patch. Russian-state hackers pounce.

Russian-state hackers wasted no time exploiting a critical Microsoft Office vulnerability that allowed them to compromise the devices inside diplomatic, maritime, and transport organizations in more than half a dozen countries, researchers said Wednesday.

The threat group, tracked under names including APT28, Fancy Bear, Sednit, Forest Blizzard, and Sofacy, pounced on the vulnerability, tracked as CVE-2026-21509, less than 48 hours after Microsoft released an urgent, unscheduled security update late last month, the researchers said. After reverse-engineering the patch, group members wrote an advanced exploit that installed one of two never-before-seen backdoor implants.

Stealth, speed, and precision

The entire campaign was designed to make the compromise undetectable to endpoint protection. Besides being novel, the exploits and payloads were encrypted and ran in memory, making their malice hard to spot. The initial infection emails came from previously compromised government accounts in multiple countries and were likely familiar to the targeted recipients. Command-and-control channels were hosted on legitimate cloud services that are typically allow-listed inside sensitive networks.

“The use of CVE-2026-21509 demonstrates how quickly state-aligned actors can weaponize new vulnerabilities, shrinking the window for defenders to patch critical systems,” the researchers, with security firm Trellix, wrote. “The campaign’s modular infection chain—from initial phish to in-memory backdoor to secondary implants—was carefully designed to leverage trusted channels (HTTPS to cloud services, legitimate email flows) and fileless techniques to hide in plain sight.”

The 72-hour spear phishing campaign began January 28 and delivered at least 29 distinct email lures to organizations in nine countries, primarily in Eastern Europe. Trellix named eight of them: Poland, Slovenia, Turkey, Greece, the UAE, Ukraine, Romania, and Bolivia. Organizations targeted were defense ministries (40 percent), transportation/logistics operators (35 percent), and diplomatic entities (25 percent).



Developers say AI coding tools work—and that’s precisely what worries them


Ars spoke to several software devs about AI and found enthusiasm tempered by unease.

Credit: Aurich Lawson | Getty Images

Software developers have spent the past two years watching AI coding tools evolve from advanced autocomplete into something that can, in some cases, build entire applications from a text prompt. Tools like Anthropic’s Claude Code and OpenAI’s Codex can now work on software projects for hours at a time, writing code, running tests, and, with human supervision, fixing bugs. OpenAI says it now uses Codex to build Codex itself, and the company recently published technical details about how the tool works under the hood. All of this has caused many to wonder: Is this just more AI industry hype, or are things actually different this time?

To find out, Ars reached out to several professional developers on Bluesky to ask how they feel about these tools in practice. The responses revealed a workforce that largely agrees the technology works but remains divided on whether that’s entirely good news. It’s a small, self-selected sample, but the respondents are working professionals in the space, and their views are still instructive.

David Hagerty, a developer who works on point-of-sale systems, told Ars Technica up front that he is skeptical of the marketing. “All of the AI companies are hyping up the capabilities so much,” he said. “Don’t get me wrong—LLMs are revolutionary and will have an immense impact, but don’t expect them to ever write the next great American novel or anything. It’s not how they work.”

Roland Dreier, a software engineer who has contributed extensively to the Linux kernel in the past, told Ars Technica that he acknowledges the presence of hype but has watched the progression of the AI space closely. “It sounds like implausible hype, but state-of-the-art agents are just staggeringly good right now,” he said. Dreier described a “step-change” in the past six months, particularly after Anthropic released Claude Opus 4.5. Where he once used AI for autocomplete and asking the occasional question, he now expects to tell an agent “this test is failing, debug it and fix it for me” and have it work. He estimated a 10x speed improvement for complex tasks like building a Rust backend service with Terraform deployment configuration and a Svelte frontend.

A huge question on developers’ minds right now is whether what you might call “syntax programming,” that is, the act of manually writing code in the syntax of an established programming language (as opposed to conversing with an AI agent in English), will become extinct in the near future due to AI coding agents handling the syntax for them. Dreier believes syntax programming is largely finished for many tasks. “I still need to be able to read and review code,” he said, “but very little of my typing is actual Rust or whatever language I’m working in.”

When asked if developers will ever return to manual syntax coding, Tim Kellogg, a developer who actively posts about AI on social media and builds autonomous agents, was blunt: “It’s over. AI coding tools easily take care of the surface level of detail.” Admittedly, Kellogg represents developers who have fully embraced agentic AI and now spend their days directing AI models rather than typing code. He said he can now “build, then rebuild 3 times in less time than it would have taken to build manually,” and ends up with cleaner architecture as a result.

One software architect at a pricing management SaaS company, who asked to remain anonymous due to company communications policies, told Ars that AI tools have transformed his work after 30 years of traditional coding. “I was able to deliver a feature at work in about 2 weeks that probably would have taken us a year if we did it the traditional way,” he said. And for side projects, he said he can now “spin up a prototype in like an hour and figure out if it’s worth taking further or abandoning.”

Dreier said the lowered effort has unlocked projects he’d put off for years: “I’ve had ‘rewrite that janky shell script for copying photos off a camera SD card’ on my to-do list for literal years.” Coding agents finally lowered the barrier to entry enough that he spent a few hours building a complete, released package with a text UI, written in Rust with unit tests. “Nothing profound there, but I never would have had the energy to type all that code out by hand,” he told Ars.

Of vibe coding and technical debt

Not everyone shares Dreier’s enthusiasm. Concerns about AI coding agents building up technical debt (poor early design choices that snowball into worse problems over time) arose soon after the first debates around “vibe coding” emerged in early 2025. Former OpenAI researcher Andrej Karpathy coined the term to describe programming by conversing with AI without fully understanding the resulting code, which many see as a clear hazard of AI coding agents.

Darren Mart, a senior software development engineer at Microsoft who has worked there since 2006, shared similar concerns with Ars. Mart, who emphasizes he is speaking in a personal capacity and not on behalf of Microsoft, recently used Claude in a terminal to build a Next.js application integrating with Azure Functions. The AI model “successfully built roughly 95% of it according to my spec,” he said. Yet he remains cautious. “I’m only comfortable using them for completing tasks that I already fully understand,” Mart said, “otherwise there’s no way to know if I’m being led down a perilous path and setting myself (and/or my team) up for a mountain of future debt.”

A data scientist working in real estate analytics, who asked to remain anonymous due to the sensitive nature of his work, described keeping AI on a very short leash for similar reasons. He uses GitHub Copilot for line-by-line completions, which he finds useful about 75 percent of the time, but restricts agentic features to narrow use cases: language conversion for legacy code, debugging with explicit read-only instructions, and standardization tasks where he forbids direct edits. “Since I am data-first, I’m extremely risk averse to bad manipulation of the data,” he said, “and the next and current line completions are way too often too wrong for me to let the LLMs have freer rein.”

Speaking of free rein, Nike backend engineer Brian Westby, who uses Cursor daily, told Ars that he sees the tools as “50/50 good/bad.” They cut down time on well-defined problems, he said, but “hallucinations are still too prevalent if I give it too much room to work.”

The legacy code lifeline and the enterprise AI gap

For developers working with older systems, AI tools have become something like a translator and an archaeologist rolled into one. Nate Hashem, a staff engineer at First American Financial, told Ars Technica that he spends his days updating older codebases where “the original developers are gone and documentation is often unclear on why the code was written the way it was.” That’s important because previously “there used to be no bandwidth to improve any of this,” Hashem said. “The business was not going to give you 2-4 weeks to figure out how everything actually works.”

In that high-pressure, relatively low-resource environment, AI has made the job “a lot more pleasant,” in his words, by speeding up the process of identifying where and how obsolete code can be deleted, diagnosing errors, and ultimately modernizing the codebase.

Hashem also offered a theory about why AI adoption looks so different inside large corporations than it does on social media. Executives demand their companies become “AI oriented,” he said, but the logistics of deploying AI tools with proprietary data can take months of legal review. Meanwhile, the AI features that Microsoft and Google bolt onto products like Gmail and Excel, the tools that actually reach most workers, tend to run on more limited AI models. “That modal white-collar employee is being told by management to use AI,” Hashem said, “but is given crappy AI tools because the good tools require a lot of overhead in cost and legal agreements.”

Speaking of management, the question of what these new AI coding tools mean for software development jobs drew a range of responses. Does it threaten anyone’s job? Kellogg, who has embraced agentic coding enthusiastically, was blunt: “Yes, massively so. Today it’s the act of writing code, then it’ll be architecture, then it’ll be tiers of product management. Those who can’t adapt to operate at a higher level won’t keep their jobs.”

Dreier, while feeling secure in his own position, worried about the path for newcomers. “There are going to have to be changes to education and training to get junior developers the experience and judgment they need,” he said, “when it’s just a waste to make them implement small pieces of a system like I came up doing.”

Hagerty put it in economic terms: “It’s going to get harder for junior-level positions to get filled when I can get junior-quality code for less than minimum wage using a model like Sonnet 4.5.”

Mart, the Microsoft engineer, put it more personally. The software development role is “abruptly pivoting from creation/construction to supervision,” he said, “and while some may welcome that pivot, others certainly do not. I’m firmly in the latter category.”

Even with this ongoing uncertainty on a macro level, some people are really enjoying the tools for personal reasons, regardless of larger implications. “I absolutely love using AI coding tools,” the anonymous software architect at a pricing management SaaS company told Ars. “I did traditional coding for my entire adult life (about 30 years) and I have way more fun now than I ever did doing traditional coding.”


Benj Edwards is Ars Technica’s Senior AI Reporter and founder of the site’s dedicated AI beat in 2022. He’s also a tech historian with almost two decades of experience. In his free time, he writes and records music, collects vintage computers, and enjoys nature. He lives in Raleigh, NC.



People complaining about Windows 11 hasn’t stopped it from hitting 1 billion users

Complaining about Windows 11 is a popular sport among tech enthusiasts on the Internet, whether you’re publicly switching to Linux, publishing guides about the dozens of things you need to do to make the OS less annoying, or getting upset because you were asked to sign in to an app after clicking a sign-in button.

Despite the negativity surrounding the current version of Windows, Windows remains the most widely used operating system on the world’s desktop and laptop computers, and people usually prefer to stick with what they’re used to. As a result, Windows 11 has just cleared a big milestone—Microsoft CEO Satya Nadella said on the company’s most recent earnings call (via The Verge) that Windows 11 now has over 1 billion users worldwide.

Windows 11 also reached that milestone just a few months quicker than Windows 10 did—1,576 days after its initial public launch on October 5, 2021. Windows 10 took 1,692 days to reach the same milestone, based on its July 29, 2015, general availability date and Microsoft’s announcement on March 16, 2020.
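As a quick sanity check (ours, not from Microsoft's announcements), the day counts above are easy to verify with a bit of date arithmetic. A minimal Python sketch, using only the dates already cited, confirms the 1,692-day figure for Windows 10 and shows that 1,576 days after October 5, 2021 lands on January 28, 2026, with Windows 11 crossing the line 116 days (just under four months) sooner:

```python
from datetime import date, timedelta

# Dates cited above
win10_ga = date(2015, 7, 29)        # Windows 10 general availability
win10_billion = date(2020, 3, 16)   # Microsoft's 1-billion-user announcement
win11_ga = date(2021, 10, 5)        # Windows 11 public launch

# Windows 10 took 1,692 days to reach 1 billion users
win10_days = (win10_billion - win10_ga).days
print(win10_days)                    # 1692

# Windows 11's 1,576-day figure works out to late January 2026
print(win11_ga + timedelta(days=1576))  # 2026-01-28

# The gap between the two timelines
print(win10_days - 1576)             # 116 days
```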

That’s especially notable because Windows 10 was initially offered as a free upgrade to all users of Windows 7 and Windows 8, with no change in system requirements relative to those older versions. Windows 11 was (and still is) a free upgrade to Windows 10, but its relatively high system requirements mean there are plenty of Windows 10 PCs that aren’t eligible to run Windows 11.

Windows 10’s long goodbye

It’s hard to gauge how many PCs are still running Windows 10 because public data on the matter is unreliable. But we can still make educated guesses—and it’s clear that the software is still running on hundreds of millions of PCs, despite hitting its official end-of-support date last October.

Statcounter, a widely referenced source that collects OS and browser usage stats from web analytics data, reports that between 50 and 55 percent of Windows PCs worldwide are running Windows 11, and between 40 and 45 percent of them run Windows 10. Statcounter also reports that Windows 10 and Windows 7 usage have risen slightly over the last few months, which highlights the noisiness of the data. But as of late 2025, Dell COO Jeffrey Clarke said that there were still roughly 1 billion active Windows 10 PCs in use, around 500 million of which weren’t eligible for an upgrade because of hardware requirements. If Windows 11 just cleared the 1 billion-user mark, that suggests Statcounter’s reporting of a nearly evenly split user base isn’t too far from the truth.
