Microsoft’s Windows updates over the last couple of years have mostly focused on adding generative AI features to the operating system, including multiple versions of the Copilot assistant. Copilot has made it into Windows 11 (and even, to a more limited extent, the aging Windows 10) first as a native app, then as a wrapper around a web app, and soon as a native app again.
But this month’s Windows updates are removing the Copilot app from some Windows 11 PCs and unpinning it from the taskbar, according to this Microsoft support document. This bug obviously won’t affect systems where Copilot had already been uninstalled, but it has already led to confusion among some Windows users.
Microsoft says it is “working on a resolution to address the issue” but that users who want to get Copilot back can reinstall the app from the Microsoft Store and repin it to the taskbar, the same process you use to install Copilot on PCs where it has been removed.
Though some version of Copilot has been included in fresh Windows 11 installs since mid-2023, and Microsoft even added a Copilot key into the standard Windows keyboard in early 2024, Copilot’s appearance and capabilities have shifted multiple times since then.
Screenshot showing Copilot continues to serve tools Microsoft took action to have removed from GitHub. Credit: Lasso
Lasso ultimately determined that Microsoft’s fix involved cutting off public access to a special Bing user interface, once available at cc.bingj.com. The fix, however, didn’t appear to clear the private pages from the cache itself. As a result, the private information was still accessible to Copilot, which in turn would make it available to any Copilot user who asked.
The Lasso researchers explained:
Although Bing’s cached link feature was disabled, cached pages continued to appear in search results. This indicated that the fix was a temporary patch and while public access was blocked, the underlying data had not been fully removed.
When we revisited our investigation of Microsoft Copilot, our suspicions were confirmed: Copilot still had access to the cached data that was no longer available to human users. In short, the fix was only partial, human users were prevented from retrieving the cached data, but Copilot could still access it.
The post laid out simple steps anyone can take to find and view the same massive trove of private repositories Lasso identified.
There’s no putting toothpaste back in the tube
Developers frequently embed security tokens, private encryption keys, and other sensitive information directly into their code, despite best practices that have long called for such data to be supplied through more secure means. The potential damage worsens when that code is made available in public repositories, another common security failing. The phenomenon has recurred over and over for more than a decade.
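The long-standing best practice alluded to above is to keep secrets out of source entirely, for example by reading them from the environment at runtime. A minimal sketch (the environment-variable name and the commented-out key are hypothetical):

```python
import os

def load_api_key(env_var="MY_SERVICE_API_KEY"):
    """Fetch a secret from the environment instead of hardcoding it."""
    key = os.environ.get(env_var)
    if key is None:
        raise RuntimeError(f"Set {env_var} before running")
    return key

# Anti-pattern, for contrast: a literal committed to the repository
# lives on in git history, forks, and caches even after the repo
# goes private.
# API_KEY = "sk-live-abc123"  # never do this
```

Secret-scanning tools can catch the anti-pattern before a push, but as Lasso’s findings show, once a secret has been public even briefly, rotating it is the only safe response.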
When these sorts of mistakes happen, developers often make the repositories private quickly, hoping to contain the fallout. Lasso’s findings show that simply making the code private isn’t enough. Once exposed, credentials are irreparably compromised. The only recourse is to rotate all credentials.
This advice still doesn’t address the problems that result when other sensitive data is included in repositories that are switched from public to private. Microsoft incurred legal expenses to have tools removed from GitHub after alleging they violated a raft of laws, including the Computer Fraud and Abuse Act, the Digital Millennium Copyright Act, the Lanham Act, and the Racketeer Influenced and Corrupt Organizations Act. Company lawyers prevailed in getting the tools removed. To date, Copilot continues to undermine this work by making the tools available anyway.
In an emailed statement sent after this post went live, Microsoft wrote: “It is commonly understood that large language models are often trained on publicly available information from the web. If users prefer to avoid making their content publicly available for training these models, they are encouraged to keep their repositories private at all times.”
Microsoft hasn’t said how long this “limited time” offer will last, but presumably it will stick around for only a year or two to help ease the transition between the old pricing and the new. New subscribers won’t be offered the option to pay for the Classic plans.
Subscribers on the Personal and Family plans can’t use Copilot indiscriminately; they get 60 AI credits per month to use across all the Office apps, credits that can also be used to generate images or text in Windows apps like Designer, Paint, and Notepad. It’s not clear how these will stack with the 15 credits that Microsoft offers for free for apps like Designer, or the 50 credits per month Microsoft is handing out for Image Cocreator in Paint.
As Microsoft notes, this is the first price increase it has ever implemented for the personal Microsoft 365 subscriptions in the US, which have stayed at the same levels since being introduced as Office 365 over a decade ago. Pricing for the business plans and pricing in other countries have increased before. Pricing for Office Home 2024 ($150) and Office Home & Business 2024 ($250), which can’t access Copilot or other Microsoft 365 features, is also the same as it was before.
Microsoft CEO Satya Nadella has announced a dramatic restructuring of the company’s engineering organization, which is pivoting the company’s focus to developing the tools that will underpin agentic AI.
Dubbed “CoreAI – Platform and Tools,” the new division rolls the existing AI platform team, the previous developer division (responsible for everything from .NET to Visual Studio), and some other teams into one big group.
As for what this group will be doing specifically, it’s basically everything that’s mission-critical to Microsoft in 2025, as Nadella tells it:
This new division will bring together Dev Div, AI Platform, and some key teams from the Office of the CTO (AI Supercomputer, AI Agentic Runtimes, and Engineering Thrive), with the mission to build the end-to-end Copilot & AI stack for both our first-party and third-party customers to build and run AI apps and agents. This group will also build out GitHub Copilot, thus having a tight feedback loop between the leading AI-first product and the AI platform to motivate the stack and its roadmap.
To accomplish all that, “Jay Parikh will lead this group as EVP.” Parikh was hired by Microsoft in October; he previously worked as the VP and global head of engineering at Meta.
The fact that the blog post doesn’t say anything about .NET or Visual Studio, instead emphasizing GitHub Copilot and anything and everything related to agentic AI, says a lot about how Nadella sees Microsoft’s future priorities.
So-called AI agents are applications that are given specified boundaries (action spaces) and a large memory capacity to independently do subsets of the kinds of work that human office workers do today. Some company leaders and AI commentators believe these agents will outright replace jobs, while others are more conservative, suggesting they’ll simply be powerful tools to streamline the jobs people already have.
Judge: AI will likely play “larger role” in Google search remedies as market shifts.
Google got some disappointing news at a status conference Tuesday, where US District Judge Amit Mehta suggested that Google’s AI products may be restricted as an appropriate remedy following the government’s win in the search monopoly trial.
According to Law360, Mehta said that “the recent emergence of AI products that are intended to mimic the functionality of search engines” is rapidly shifting the search market. Because Mehta is now weighing preventive measures to combat Google’s anticompetitive behavior, he wants to hear much more about how each side views AI’s role in Google’s search empire during the remedies stage of litigation than he did during the search trial.
“AI and the integration of AI is only going to play a much larger role, it seems to me, in the remedy phase than it did in the liability phase,” Mehta said. “Is that because of the remedies being requested? Perhaps. But is it also potentially because the market that we have all been discussing has shifted?”
To fight the DOJ’s proposed remedies, Google is seemingly dragging its major AI rivals into the trial. In an effort to prove that the remedies would harm its ability to compete, the company is currently trying to pry into Microsoft’s AI deals, including its $13 billion investment in OpenAI, Law360 reported. At least preliminarily, Mehta has agreed that the information Google is seeking from rivals has “core relevance” to the remedies litigation, Law360 reported.
The DOJ has asked for a wide range of remedies to stop Google from potentially using AI to entrench its market dominance in search and search text advertising. They include a ban on exclusive agreements with publishers to train on content, which the DOJ fears might allow Google to block AI rivals from licensing data, potentially posing a barrier to entry in both markets. Under the proposed remedies, Google would also face restrictions on investments in or acquisitions of AI products, as well as mergers with AI companies.
Additionally, the DOJ wants Mehta to stop Google from any potential self-preferencing, such as making an AI product mandatory on Android devices Google controls or preventing a rival from distribution on Android devices.
The government seems very concerned that Google may use its ownership of Android to play games in the emerging AI sector. It has further recommended an order preventing Google from discouraging partners from working with rivals, degrading the quality of rivals’ AI products on Android devices, or otherwise “coercing” manufacturers or other Android partners into giving Google’s AI products “better treatment.”
Importantly, if the court orders AI remedies linked to Google’s control of Android, Google could risk a forced sale of Android if Mehta grants the DOJ’s request for “contingent structural relief” requiring divestiture of Android if behavioral remedies don’t destroy the current monopolies.
Finally, the government wants Google to be required to allow publishers to opt out of AI training without impacting their search rankings. (Currently, opting out of AI scraping automatically opts sites out of Google search indexing.)
All of this, the DOJ alleged, is necessary to clear the way for a thriving search market as AI stands to shake up the competitive landscape.
“The promise of new technologies, including advances in artificial intelligence (AI), may present an opportunity for fresh competition,” the DOJ said in a court filing. “But only a comprehensive set of remedies can thaw the ecosystem and finally reverse years of anticompetitive effects.”
At the status conference Tuesday, DOJ attorney David Dahlquist reiterated to Mehta that these remedies are needed so that Google’s illegal conduct in search doesn’t extend to this “new frontier” of search, Law360 reported. Dahlquist also clarified that the DOJ views these kinds of AI products “as new access points for search, rather than a whole new market.”
“We’re very concerned about Google’s conduct being a barrier to entry,” Dahlquist said.
Google could not immediately be reached for comment. But the search giant has maintained that AI is beyond the scope of the search trial.
During the status conference, Google attorney John E. Schmidtlein disputed that AI remedies are relevant. While he agreed that “AI is key to the future of search,” he warned that “extraordinary” proposed remedies would “hobble” Google’s AI innovation, Law360 reported.
Microsoft shields confidential AI deals
Microsoft is predictably protective of its AI deals, arguing in a court filing that its “highly confidential agreements with OpenAI, Perplexity AI, Inflection, and G42 are not relevant to the issues being litigated” in the Google trial.
According to Microsoft, Google is arguing that it needs this information to “shed light” on things like “the extent to which the OpenAI partnership has driven new traffic to Bing and otherwise affected Microsoft’s competitive standing” or what’s required by “terms upon which Bing powers functionality incorporated into Perplexity’s search service.”
These insights, Google seemingly hopes, will convince Mehta that Google’s AI deals and investments are the norm in the AI search sector. But Microsoft is currently blocking access, arguing that “Google has done nothing to explain why” it “needs access to the terms of Microsoft’s highly confidential agreements with other third parties” when Microsoft has already offered to share documents “regarding the distribution and competitive position” of its AI products.
Microsoft also opposes Google’s attempts to review how search click-and-query data is used to train OpenAI’s models. Those requests would be better directed at OpenAI, Microsoft said.
If Microsoft gets its way, Google’s discovery requests will be limited to just Microsoft’s content licensing agreements for Copilot. Microsoft alleged those are the only deals “related to the general search or the general search text advertising markets” at issue in the trial.
On Tuesday, Microsoft attorney Julia Chapman told Mehta that Microsoft had “agreed to provide documents about the data used to train its own AI model and also raised concerns about the competitive sensitivity of Microsoft’s agreements with AI companies,” Law360 reported.
It remains unclear whether OpenAI will be forced to give Google the click-and-query data it seeks. At the status hearing, Mehta ordered OpenAI to share “financial statements, information about the training data for ChatGPT, and assessments of the company’s competitive position,” Law360 reported.
But the DOJ may also be interested in seeing that data. In its proposed final judgment, the government forecast that “query-based AI solutions” will “provide the most likely long-term path for a new generation of search competitors.”
Because of that prediction, any remedy “must prevent Google from frustrating or circumventing” court-ordered changes “by manipulating the development and deployment of new technologies like query-based AI solutions.” Emerging rivals “will depend on the absence of anticompetitive constraints to evolve into full-fledged competitors and competitive threats,” the DOJ alleged.
Mehta seemingly wants to see the evidence supporting the DOJ’s predictions, which could end up exposing carefully guarded secrets of both Google’s and its biggest rivals’ AI deals.
On Tuesday, the judge noted that the integration of AI into search engines had already changed what search results pages look like. And from his “very layperson’s perspective,” it seems like AI’s integration into search engines will continue moving “very quickly,” as both parties seem to agree.
Whether he buys into the DOJ’s theory that Google could use its existing advantage as the world’s greatest gatherer of search query data to block rivals from keeping pace is still up in the air, but the judge seems moved by the DOJ’s claim that “AI has the ability to affect market dynamics in these industries today as well as tomorrow.”
Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.
Users will be asked to reauthenticate with Windows Hello every time they access their Recall database. Credit: Microsoft
Microsoft has now delayed the feature multiple times to address those concerns, and it outlined multiple security-focused additions to Recall in a blog post in September. Among other changes, the feature is now opt-in rather than enabled by default, and it’s protected by additional encryption. Users must also re-authenticate with Windows Hello each time they access the database. Turning on the feature requires Secure Boot, BitLocker disk encryption, and Windows Hello to be enabled. In addition to the manual exclusion lists for sites and apps, the new Recall also attempts to mask sensitive data like passwords and credit card numbers so they aren’t stored in the Recall database.
The new version of Recall can also be completely uninstalled for users who have no interest in it, or by IT administrators who don’t want to risk it exposing sensitive data.
Testers will need to kick the tires on all of these changes to make sure that they meaningfully address all the risks and issues that the original version of Recall had, and this Windows Insider preview is their chance to do it.
“Do security”
Part of the original Recall controversy was that Microsoft wasn’t going to run it through the usual Windows Insider process—it was intended to be launched directly to users of the new Copilot+ PCs via a day-one software update. This in itself was a big red flag; usually, even features as small as spellcheck for the Notepad app go through multiple weeks of Windows Insider testing before Microsoft releases them to the public. This gives the company a chance to fix bugs, collect and address user feedback, and even scrub new features altogether.
Microsoft is supposedly re-orienting itself to put security over all other initiatives and features. CEO Satya Nadella recently urged employees to “do security” when presented with the option to either launch something quickly or launch something securely. In Recall’s case, the company’s rush to embrace generative AI features almost won out over that “do security” mandate. If future AI features go through the typical Windows Insider testing process first, that will be a sign that Microsoft is taking its commitment to security seriously.
It’s been a big year for Windows running on Arm chips, something that Microsoft and Arm chipmakers have been trying to get off the ground for well over a decade. Qualcomm’s Snapdragon X Elite and X Plus are at the heart of dozens of Copilot+ Windows PCs, which promise unique AI features and good battery life without as many of the app and hardware compatibility problems that have plagued Windows-on-Arm in the past.
But Qualcomm has unceremoniously canceled the dev kit and is sending out refunds to those who ordered it. That’s according to a note received by developer and YouTuber Jeff Geerling, who had already received his Snapdragon Dev Kit and given it a middling review a couple of weeks ago.
“The launch of 30+ Snapdragon X-series powered PCs is a testament to our ability to deliver leading technology and the PC industry’s desire to move to our next-generation technology,” reads Qualcomm’s statement. “However, the Developer Kit product comprehensively has not met our usual standards of excellence and so we are reaching out to let you know that unfortunately we have made the decision to pause this product and the support of it, indefinitely.”
Qualcomm’s statement also says that “any material, if received” will not have to be returned—those lucky enough to have gotten one of the Dev Kits up until now may be able to keep it and get their money back, though the PC is no longer officially being supported by Qualcomm.
Judge calls for a swift end to experts secretly using AI to sway cases.
A New York judge recently called out an expert witness for using Microsoft’s Copilot chatbot to inaccurately estimate damages in a real estate dispute that partly depended on an accurate assessment of damages to win.
In an order Thursday, Judge Jonathan Schopf warned that, “due to the nature of the rapid evolution of artificial intelligence and its inherent reliability issues,” any use of AI should be disclosed before testimony or evidence is admitted in court. Admitting that the court “has no objective understanding as to how Copilot works,” Schopf suggested that the legal system could be disrupted if experts started relying heavily on chatbots en masse.
His warning came after an expert witness, Charles Ranson, dubiously used Copilot to cross-check calculations in a dispute over a $485,000 rental property in the Bahamas that had been included in a trust for a deceased man’s son. The court was being asked to assess if the executrix and trustee—the deceased man’s sister—breached her fiduciary duties by delaying the sale of the property while admittedly using it for personal vacations.
To win, the surviving son had to prove that his aunt breached her duties by retaining the property, that her vacations there were a form of self-dealing, and that he suffered damages from her alleged misuse of the property.
It was up to Ranson to figure out how much would be owed to the son had the aunt sold the property in 2008 compared to the actual sale price in 2022. But Ranson, an expert in trust and estate litigation, “had no relevant real estate expertise,” Schopf said, finding that Ranson’s testimony was “entirely speculative” and failed to consider obvious facts, such as the pandemic’s impact on rental prices or trust expenses like real estate taxes.
Seemingly because Ranson didn’t have the relevant experience in real estate, he turned to Copilot to fill in the blanks and crunch the numbers. The move surprised Internet law expert Eric Goldman, who told Ars that “lawyers retain expert witnesses for their specialized expertise, and it doesn’t make any sense for an expert witness to essentially outsource that expertise to generative AI.”
“If the expert witness is simply asking a chatbot for a computation, then the lawyers could make that same request directly without relying on the expert witness (and paying the expert’s substantial fees),” Goldman suggested.
Perhaps the son’s legal team wasn’t aware of how big a role Copilot played. Schopf noted that Ranson couldn’t recall what prompts he used to arrive at his damages estimate. The expert witness also couldn’t recall any sources for the information he took from the chatbot and admitted that he lacked a basic understanding of how Copilot “works or how it arrives at a given output.”
Ars could not immediately reach Ranson for comment. But in Schopf’s order, the judge wrote that Ranson defended using Copilot as a common practice for expert witnesses like him today.
“Ranson was adamant in his testimony that the use of Copilot or other artificial intelligence tools, for drafting expert reports is generally accepted in the field of fiduciary services and represents the future of analysis of fiduciary decisions; however, he could not name any publications regarding its use or any other sources to confirm that it is a generally accepted methodology,” Schopf wrote.
Goldman noted that Ranson relying on Copilot for “what was essentially a numerical computation was especially puzzling because of generative AI’s known hallucinatory tendencies, which makes numerical computations untrustworthy.”
Because Ranson was unable to explain how Copilot works, Schopf took the extra step of actually trying to use Copilot to generate the estimates that Ranson got—and he could not.
Each time, the court entered the same query into Copilot—”Can you calculate the value of $250,000 invested in the Vanguard Balanced Index Fund from December 31, 2004 through January 31, 2021?”—and each time Copilot generated a slightly different answer.
This “calls into question the reliability and accuracy of Copilot to generate evidence to be relied upon in a court proceeding,” Schopf wrote.
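The nondeterminism matters because the underlying task is plain arithmetic: given the fund’s actual period returns, compounding an initial investment is deterministic and yields the same answer on every run. A minimal sketch, using hypothetical annual returns rather than the fund’s real figures:

```python
def compound(principal, period_returns):
    # Grow an initial investment through a sequence of period returns.
    value = principal
    for r in period_returns:
        value *= 1 + r
    return value

# Hypothetical annual returns for illustration only -- not the
# Vanguard Balanced Index Fund's actual performance.
returns = [0.07, -0.22, 0.20, 0.13, 0.04]
print(round(compound(250_000, returns), 2))
```

Any two runs of a calculation like this agree to the cent, which is exactly the property Schopf found Copilot’s answers lacked.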
Chatbot not to blame, judge says
While the court was experimenting with Copilot, it also probed the chatbot for an answer to a bigger-picture legal question: Are Copilot’s responses accurate enough to be cited in court?
The court found that Copilot had less faith in its outputs than Ranson seemingly did. When asked “are you accurate” or “reliable,” Copilot responded that “my accuracy is only as good as my sources, so for critical matters, it’s always wise to verify.” When more specifically asked, “Are your calculations reliable enough for use in court,” Copilot similarly recommended that outputs “should always be verified by experts and accompanied by professional evaluations before being used in court.”
Although it seemed clear that Ranson did not verify outputs before using them in court, Schopf noted that at least “developers of the Copilot program recognize the need for its supervision by a trained human operator to verify the accuracy of the submitted information as well as the output.”
Microsoft declined Ars’ request to comment.
Until a bright-line rule exists telling courts when to accept AI-generated testimony, Schopf suggested that courts should require disclosures from lawyers to stop chatbot-spouted inadmissible testimony from disrupting the legal system.
“The use of artificial intelligence is a rapidly growing reality across many industries,” Schopf wrote. “The mere fact that artificial intelligence has played a role, which continues to expand in our everyday lives, does not make the results generated by artificial intelligence admissible in Court.”
Ultimately, Schopf found that there was no breach of fiduciary duty, negating the need for Ranson’s Copilot-cribbed testimony on damages in the Bahamas property case. Schopf denied all of the son’s objections in their entirety (as well as any future claims) after calling out Ranson’s misuse of the chatbot at length.
But in his order, the judge suggested that Ranson seemed to get it all wrong before involving the chatbot.
“Whether or not he was retained and/or qualified as a damages expert in areas other than fiduciary duties, his testimony shows that he admittedly did not perform a full analysis of the problem, utilized an incorrect time period for damages, and failed to consider obvious elements into his calculations, all of which go against the weight and credibility of his opinion,” Schopf wrote.
Schopf noted that the evidence showed that rather than the son losing money from his aunt’s management of the trust—which Ranson’s cited chatbot’s outputs supposedly supported—the sale of the property in 2022 led to “no attributable loss of capital” and “in fact, it generated an overall profit to the Trust.”
Goldman suggested that Ranson didn’t even save himself much effort by employing Copilot in a way that seemed to damage his credibility in court.
“It would not have been difficult for the expert to pull the necessary data directly from primary sources, so the process didn’t even save much time—but that shortcut came at the cost of the expert’s credibility,” Goldman told Ars.
Whether you care about Microsoft’s Copilot AI assistant or not, many new PCs introduced this year have included a dedicated Copilot key on the keyboard; this is true whether the PC meets the requirements for Microsoft’s Copilot+ PC program or not. Microsoft’s commitment to putting AI features in all its products runs so deep that the company changed the Windows keyboard for the first time in three decades.
But what happens if you don’t use Copilot regularly, or you’ve disabled or uninstalled it entirely, or you simply don’t need it available at the press of a button? Microsoft is making allowances for you in a new Windows Insider Preview build in the Dev channel, which lets the Copilot key be reprogrammed so that it can launch more than just Copilot.
The area in Settings where you can reprogram the Copilot key in the latest Windows Insider Preview build in the Dev channel. Credit: Microsoft
There are restrictions. To appear in the menu of options in the Settings app, Microsoft says an app must be “MSIX packaged and signed, thus indicating the app meets security and privacy requirements to keep customers safe.” Generally, apps installed via the Microsoft Store or built into Windows will meet those requirements, though apps installed from other sources may not. And you can’t make the Copilot key launch any old executable or batch file, nor customize it to do anything other than launch apps (at least, not without using third-party tools for reconfiguring your keyboard).
On Monday, Microsoft unveiled updates to its consumer AI assistant Copilot, introducing two new experimental features for a limited group of $20/month Copilot Pro subscribers: Copilot Labs and Copilot Vision. Labs integrates OpenAI’s latest o1 “reasoning” model, and Vision allows Copilot to see what you’re browsing in Edge.
Microsoft says Copilot Labs will serve as a testing ground for Microsoft’s latest AI tools before they see wider release. The company describes it as offering “a glimpse into ‘work-in-progress’ projects.” The first feature available in Labs is called “Think Deeper,” and it uses step-by-step processing to solve more complex problems than the regular Copilot. Think Deeper is Microsoft’s version of OpenAI’s new o1-preview and o1-mini AI models, and it has so far rolled out to some Copilot Pro users in Australia, Canada, New Zealand, the UK, and the US.
Copilot Vision is an entirely different beast. The new feature aims to give the AI assistant a visual window into what you’re doing within the Microsoft Edge browser. When enabled, Copilot can “understand the page you’re viewing and answer questions about its content,” according to Microsoft.
Microsoft’s Copilot Vision promo video.
The company positions Copilot Vision as a way to provide more natural interactions and task assistance beyond text-based prompts, but it will likely raise privacy concerns. As a result, Microsoft says that Copilot Vision is entirely opt-in and that no audio, images, text, or conversations from Vision will be stored or used for training. The company is also initially limiting Vision’s use to a pre-approved list of websites, blocking it on paywalled and sensitive content.
The rollout of these features appears gradual, with Microsoft noting that it wants to balance “pioneering features and a deep sense of responsibility.” The company said it will be “listening carefully” to user feedback as it expands access to the new capabilities. Microsoft has not provided a timeline for wider availability of either feature.
Mustafa Suleyman, chief executive of Microsoft AI, told Reuters that he sees Copilot as an “ever-present confidant” that could potentially learn from users’ various Microsoft-connected devices and documents, with permission. He also mentioned that Microsoft co-founder Bill Gates has shown particular interest in Copilot’s potential to read and parse emails.
But judging by the visceral reaction to Microsoft’s Recall feature, which keeps a record of everything you do on your PC so an AI model can recall it later, privacy-sensitive users may not appreciate having an AI assistant monitor their activities—especially if those features send user data to the cloud for processing.
Microsoft’s Arm-powered Surface Laptop 7. We’re still waiting for Arm chips to make their way into cheaper PCs. Credit: Andrew Cunningham
For the first time in the decade-plus that Microsoft has been trying to make Arm-powered Windows PCs happen, we’ve finally got some pretty good ones. The latest Surface Pro and Surface Laptop (and the other Copilot+ PCs) benefit from extensive work done to Windows 11’s x86 translation layer, a wider selection of native apps, and most importantly, Snapdragon X Plus and X Elite chips from Qualcomm that are as good as or better than Intel’s or AMD’s current offerings.
The main problem with these computers is that they’re all on the expensive side. The cheapest Snapdragon X PC right now is probably this $899 developer kit mini-desktop; the cheapest laptops start around the same $1,000 price as the entry-level MacBook Air.
That’s a problem Qualcomm hopes to correct next year. Qualcomm CEO Cristiano Amon said on the company’s Q3 earnings call (as recorded by The Verge) that the company was hoping to bring Arm PC prices down to $700 at some point in 2025, noting that these cheaper PCs wouldn’t compromise the performance of the Snapdragon X series’ built-in neural processing unit (NPU).
That Amon singled out the NPU is interesting because it leaves the door open to further reductions in CPU and GPU performance to make cheaper products that can hit those lower prices. The Snapdragon X Plus series, for example, keeps the same NPU as the X Elite but pairs it with fewer CPU and GPU cores running at lower clock speeds.
Qualcomm may want to keep NPU performance the same because Microsoft has a minimum NPU performance requirement of 40 trillion operations per second (TOPS) to qualify for its Copilot+ PC label and associated features in Windows 11. Other requirements include 16GB of memory and 256GB of storage, but Microsoft hasn’t made specific CPU or GPU performance recommendations for the Copilot+ program beyond the basic ones necessary for running Windows 11 in the first place. Copilot+ PCs come with additional AI-powered features that take advantage of local processing power rather than sending requests to the cloud, though as of this writing, there aren’t many of these features, and one of the biggest (Recall) has been delayed indefinitely because of privacy and security concerns.
Lofty goals for Arm PCs
Both Arm and Qualcomm have made lofty claims about their goals in the PC market. Arm CEO Rene Haas says Arm chips could account for more than half of all Windows PC shipments in the next five years, and Amon has said that PC OEMs expect as much as 60 percent of their systems to ship with Arm chips in the next three years.
These claims seem overly optimistic; Intel and AMD aren’t going anywhere and aren’t standing still, and despite improvements to Windows-on-Arm, the PC ecosystem still has decades invested in x86 chips. But if either company is ever going to get anywhere close to those numbers, fielding decent systems at more mass-market prices will be key to achieving that kind of volume.
Hopefully, the cheaper Snapdragon systems will be available both as regular laptops and as mini desktops, like Qualcomm’s dev kit desktop. To succeed, the Arm Windows ecosystem will need to mirror what is available in both the x86 PC ecosystem and Apple’s Mac lineup to capture as many buyers as possible.
And the more Arm PCs there are out there, the more incentive developers will have to continue fixing Windows-on-Arm’s last lingering compatibility problems. Third-party drivers for things like printers, mice, audio preamps and mixers, and other accessories are the biggest issue right now since there’s no way to translate the x86 versions. The only way to support this hardware will be with more Arm-native software, and the only way to get more Arm-native software is to make it worth developers’ time to write it.
Microsoft is all-in on its Copilot+ PC push right now, but the fact is that these machines will be an extremely small minority among the PC install base for the foreseeable future. The program’s stringent hardware requirements—16GB of RAM, at least 256GB of storage, and a fast neural processing unit (NPU)—disqualify all but the newest PCs, keeping features like Recall from running on most existing Windows 11 PCs.
But the Copilot chatbot remains supported on all Windows 11 PCs (and most Windows 10 PCs), and a change Microsoft has made to recent Windows 11 Insider Preview builds is actually making the feature less useful and less accessible than it is in the current publicly available versions of Windows. Copilot is being changed from a persistent sidebar into an app window that can be resized, minimized, and pinned and unpinned from the taskbar, just like any other app. But at least as of this writing, this version of Copilot can no longer adjust Windows’ settings, and it’s no longer possible to call it up with the Windows+C keyboard shortcut. Only newer keyboards with the dedicated Copilot key will have an easy built-in keyboard shortcut for summoning Copilot.
If Microsoft keeps these changes intact, they’ll hit Windows 11 PCs when the 24H2 update is released to the general public later this year; the changes are already present on Copilot+ PCs, which are running a version of Windows 11 24H2 out of the box.
Changing how Copilot works is all well and good—despite how quickly Microsoft has pushed it out to every Windows PC in existence, it has been labeled a “preview” up until the 24H2 update, and some amount of change is to be expected. But discontinuing the just-introduced Win+C keyboard shortcut to launch Copilot feels pointless, especially since the Win+C shortcut isn’t being reassigned.
The Copilot assistant exists on the taskbar, so it’s not as though it’s difficult to access, but the feature is apparently important enough to merit the first major change to Windows keyboards in three decades. Surely it also justifies retaining a keyboard shortcut for the vast majority of PC keyboards without a dedicated Copilot key.
People who want to continue to use Win+C as a launch key for Copilot can do so with custom keyboard remappers like Microsoft’s own Keyboard Manager PowerToy. Simply remap Win+C to the obscure Win+Shift+F23 combination that the hardware Copilot key is already mapped to and you’ll be back in business.
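For the curious, Keyboard Manager persists remaps created in its GUI to a JSON settings file (typically under `%LOCALAPPDATA%\Microsoft\PowerToys\Keyboard Manager\`). A sketch of what the Win+C remap described above might look like in that file is below; the field names and virtual-key codes (91 for the Windows key, 160 for left Shift, 67 for C, 134 for F23) reflect how recent PowerToys versions encode shortcuts, but treat this as illustrative, since the GUI is the supported way to create remaps and the format can change between releases.

```json
{
  "remapKeys": { "inProcess": [] },
  "remapShortcuts": {
    "global": [
      {
        "originalKeys": "91;67",
        "newRemapKeys": "91;160;134"
      }
    ]
  }
}
```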
Win+C has a complicated history
Win+C always seems to get associated with transient, unsuccessful Windows features like Charms and Cortana.
Andrew Cunningham
The Win+C keyboard shortcut actually has a bit of a checkered history, having been reassigned over the years to multiple less-than-successful Windows initiatives. In Windows 8, it made its debut as a shortcut for the “Charms” menu, part of the operating system’s tablet-oriented user interface that was designed to partially replace the old Start menu. But Windows 10 retreated from this new tablet UI, and the Charms bar was discontinued.
In Windows 10, Win+C was assigned to the Cortana voice assistant instead, Microsoft’s contribution to the early-2010s voice assistant boom kicked off by Apple’s Siri and refined by competitors like Amazon’s Alexa. But Cortana, like the Charms bar, never really took off, and Microsoft switched the voice assistant off in 2023 after a few years of steadily deprioritizing it in Windows 10 (and mostly hiding it in Windows 11).
Most older versions of Windows didn’t do anything with the Win+C shortcut, but if you go all the way back to the Windows 95 era, users of Microsoft Natural Keyboards who installed Microsoft’s IntelliType software could use Win+C to open the Control Panel. This shortcut apparently never made it into Windows itself, even as the Windows key became standard equipment on PCs in the late ’90s and early 2000s.
So pour one out for Win+C, the keyboard shortcut that is always trying to do something new and not quite catching on. We can’t wait to see what it does next.