GitHub

Microsoft makes Zork I, II, and III open source under MIT License

Zork, the classic text-based adventure game of incalculable influence, has been made available under the MIT License, along with the sequels Zork II and Zork III.

The move to take these Zork games open source is the result of shared work by the Xbox and Activision teams along with Microsoft’s Open Source Programs Office (OSPO). Parent company Microsoft owns the intellectual property for the franchise.

Only the code itself has been made open source. Ancillary items like commercial packaging and marketing assets and materials remain proprietary, as do related trademarks and brands.

“Rather than creating new repositories, we’re contributing directly to history. In collaboration with Jason Scott, the well-known digital archivist of Internet Archive fame, we have officially submitted upstream pull requests to the historical source repositories of Zork I, Zork II, and Zork III. Those pull requests add a clear MIT LICENSE and formally document the open-source grant,” says the announcement co-written by Stacy Haffner (director of the OSPO at Microsoft) and Scott Hanselman (VP of Developer Community at the company).

Microsoft gained control of the Zork IP when its acquisition of Activision closed in 2023; Activision had come to own it when it acquired original publisher Infocom in 1986. There was an attempt to sell Zork publishing rights directly to Microsoft even earlier in the ’80s, as founder Bill Gates was a big Zork fan, but it fell through, so it’s funny that the IP eventually ended up in the same place.

To be clear, this is not the first time the original Zork source code has been available to the general public. Scott uploaded it to GitHub in 2019, but the license situation was unresolved, and Activision or Microsoft could have issued a takedown request had they wished to.

Now that’s obviously not at risk of happening anymore.

Claude Code gets a web version—but it’s the new sandboxing that really matters

Now, Claude Code can instead be given permissions for specific file system folders and network servers. That means fewer approval steps, but it’s also more secure overall against prompt injection and other risks.

Anthropic’s demo video for Claude Code on the web.

According to Anthropic’s engineering blog, the new network isolation approach only allows Internet access “through a unix domain socket connected to a proxy server running outside the sandbox. … This proxy server enforces restrictions on the domains that a process can connect to, and handles user confirmation for newly requested domains.” Additionally, users can customize the proxy to set their own rules for outgoing traffic.

This way, the coding agent can do things like fetch npm packages from approved sources, but without carte blanche for communicating with the outside world, and without badgering the user with constant approvals.
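As a rough illustration of that allowlist-plus-confirmation idea (a minimal sketch, not Anthropic’s actual proxy code; the starting domains are assumptions), the core check might look like this:

```python
# Illustrative sketch only: a filtering proxy checks every outbound request against
# an allowlist and asks the user before permitting a newly requested domain.
ALLOWED_DOMAINS = {"registry.npmjs.org", "pypi.org"}  # hypothetical starting allowlist

def request_allowed(host: str) -> bool:
    """Decide whether the sandboxed process may reach `host`."""
    if host in ALLOWED_DOMAINS:
        return True
    # Unknown domain: fall back to user confirmation, mirroring the step Anthropic
    # describes for newly requested domains.
    answer = input(f"Agent wants to reach {host!r}. Allow? [y/N] ").strip().lower()
    if answer == "y":
        ALLOWED_DOMAINS.add(host)  # remember the decision for this session
        return True
    return False

# The real proxy runs outside the sandbox; it would apply a check like this to each
# connection arriving over the unix domain socket.
for host in ("registry.npmjs.org", "evil.example.com"):
    print(host, "->", "allowed" if request_allowed(host) else "blocked")
```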

For many developers, these additions are more significant than the availability of web or mobile interfaces. They allow Claude Code agents to operate more independently without as many detailed, line-by-line approvals.

That’s more convenient, but it’s a double-edged sword, as it will also make code review even more important. One of the strengths of the too-many-approvals approach was that it made sure developers were still looking closely at every little change. Now it might be a little bit easier to miss Claude Code making a bad call.

The new features are in beta now as a research preview and are available to Claude users with Pro or Max subscriptions.

Anthropic’s Claude Haiku 4.5 matches May’s frontier model at fraction of cost

And speaking of cost, Haiku 4.5 is included for subscribers of the Claude web and app plans. Through the API (for developers), the small model is priced at $1 per million input tokens and $5 per million output tokens. That compares to Sonnet 4.5 at $3 per million input and $15 per million output tokens, and Opus 4.1 at $15 per million input and $75 per million output tokens.
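For a rough sense of what those prices mean in practice, here is a back-of-the-envelope comparison on a made-up workload (the 2 million input / 500,000 output token figures are invented for illustration):

```python
# Cost comparison using the per-million-token prices quoted above.
PRICES = {  # model: (input $/Mtok, output $/Mtok)
    "Haiku 4.5": (1, 5),
    "Sonnet 4.5": (3, 15),
    "Opus 4.1": (15, 75),
}
input_mtok, output_mtok = 2.0, 0.5  # hypothetical usage, in millions of tokens

for model, (p_in, p_out) in PRICES.items():
    cost = input_mtok * p_in + output_mtok * p_out
    print(f"{model}: ${cost:.2f}")
# Haiku 4.5: $4.50, Sonnet 4.5: $13.50, Opus 4.1: $67.50
```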

The model serves as a cheaper drop-in replacement for two older models, Haiku 3.5 and Sonnet 4. “Users who rely on AI for real-time, low-latency tasks like chat assistants, customer service agents, or pair programming will appreciate Haiku 4.5’s combination of high intelligence and remarkable speed,” Anthropic writes.

Claude Haiku 4.5 answers the classic Ars Technica AI question, “Would the color be called ‘magenta’ if the town of Magenta didn’t exist?”

On SWE-bench Verified, a test that measures performance on coding tasks, Haiku 4.5 scored 73.3 percent, slightly ahead of Sonnet 4’s 72.7 percent. The model also reportedly surpasses Sonnet 4 at certain tasks, such as computer use, according to Anthropic’s benchmarks. Claude Sonnet 4.5, released in late September, remains Anthropic’s frontier model and what the company calls “the best coding model available.”

Haiku 4.5 also surprisingly edges close to what OpenAI’s GPT-5 can achieve on this particular set of benchmarks (as seen in the chart above). Since the results are self-reported and potentially cherry-picked to match a model’s strengths, though, one should take them with a grain of salt.

Still, making a small, capable coding model may have unexpected advantages for agentic coding setups like Claude Code. Anthropic designed Haiku 4.5 to work alongside Sonnet 4.5 in multi-model workflows. In such a configuration, Anthropic says, Sonnet 4.5 could break down complex problems into multi-step plans, then coordinate multiple Haiku 4.5 instances to complete subtasks in parallel, like spinning off workers to get things done faster.
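A sketch of that planner/worker pattern using the Anthropic Python SDK might look like the following; the model ID strings and prompts here are assumptions for illustration, not identifiers confirmed by Anthropic’s announcement:

```python
# Hedged sketch of the planner/worker pattern: a larger model decomposes a task,
# then cheaper Haiku workers handle the subtasks in parallel.
from concurrent.futures import ThreadPoolExecutor
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
PLANNER = "claude-sonnet-4-5"   # assumed model ID for Sonnet 4.5
WORKER = "claude-haiku-4-5"     # assumed model ID for Haiku 4.5

def ask(model: str, prompt: str) -> str:
    msg = client.messages.create(model=model, max_tokens=1024,
                                 messages=[{"role": "user", "content": prompt}])
    return msg.content[0].text

# 1) The frontier model breaks a job into independent subtasks (one per line).
plan = ask(PLANNER, "Split this into three independent subtasks, one per line: "
                    "add type hints to utils.py, models.py, and views.py.")
subtasks = [line for line in plan.splitlines() if line.strip()]

# 2) Haiku workers complete the subtasks in parallel.
with ThreadPoolExecutor(max_workers=max(len(subtasks), 1)) as pool:
    results = list(pool.map(lambda task: ask(WORKER, task), subtasks))
```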

For more details on the new model, Anthropic released a system card and documentation for developers.

Microsoft open-sources Bill Gates’ 6502 BASIC from 1978

On Wednesday, Microsoft released the complete source code for Microsoft BASIC for 6502 Version 1.1, the 1978 interpreter that powered the Commodore PET, VIC-20, Commodore 64, and Apple II through custom adaptations. The company posted 6,955 lines of assembly language code to GitHub under an MIT license, allowing anyone to freely use, modify, and distribute the code that helped launch the personal computer revolution.

“Rick Weiland and I (Bill Gates) wrote the 6502 BASIC,” Gates commented on the Page Table blog in 2010. “I put the WAIT command in.”

For millions of people in the late 1970s and early 1980s, variations of Microsoft’s BASIC interpreter provided their first experience with programming. Users could type simple commands like “10 PRINT ‘HELLO’” and “20 GOTO 10” to create an endless loop of text on their screens, for example—often their first taste of controlling a computer directly. The interpreter translated these human-readable commands into instructions that the processor could execute, one line at a time.
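To see how a line-numbered interpreter steps through that two-line program, here is a toy sketch in Python; it illustrates only the control flow and says nothing about how the real hand-written 6502 assembly is organized:

```python
# Toy line-numbered interpreter: look up the current line, execute its statement,
# then either fall through to the next line number or jump on GOTO.
program = {10: ("PRINT", "HELLO"), 20: ("GOTO", 10)}

line, steps = 10, 0
while line in program and steps < 6:        # cap iterations so the demo terminates
    op, arg = program[line]
    if op == "PRINT":
        print(arg)
        later = [n for n in sorted(program) if n > line]
        line = later[0] if later else -1    # fall through to the next line number
    elif op == "GOTO":
        line = arg                          # jump back, producing the endless loop
    steps += 1
```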

The Commodore PET (Personal Electronic Transactor), released in January 1977, used the MOS 6502 and ran a variation of Microsoft BASIC. Credit: SSPL/Getty Images

At just 6,955 lines of assembly language, Microsoft’s low-level 6502 code talked almost directly to the processor, squeezing remarkable functionality into minimal memory. That was a key achievement when RAM cost hundreds of dollars per kilobyte.

In the early personal computer space, cost was king. The MOS 6502 processor that ran this BASIC cost about $25, while competitors charged $200 for similar chips. Designer Chuck Peddle created the 6502 specifically to bring computing to the masses, and manufacturers built variations of the chip into the Atari 2600, Nintendo Entertainment System, and millions of Commodore computers.

The deal that got away

In 1977, Commodore licensed Microsoft’s 6502 BASIC for a flat fee of $25,000. Jack Tramiel’s company got perpetual rights to ship the software in unlimited machines—no royalties, no per-unit fees. While $25,000 seemed substantial then, Commodore went on to sell millions of computers with Microsoft BASIC inside. Had Microsoft negotiated a per-unit licensing fee, as it did with later products, the deal could have generated tens of millions of dollars in revenue.

The version Microsoft released—labeled 1.1—contains bug fixes that Commodore engineer John Feagans and Bill Gates jointly implemented in 1978 when Feagans traveled to Microsoft’s Bellevue offices. The code includes memory management improvements (called “garbage collection” in programming terms) and shipped as “BASIC V2” on the Commodore PET.

College student’s “time travel” AI experiment accidentally outputs real 1834 history

A hobbyist developer building AI language models that speak Victorian-era English “just for fun” got an unexpected history lesson this week when his latest creation mentioned real protests from 1834 London—events the developer didn’t know had actually happened until he Googled them.

“I was interested to see if a protest had actually occurred in 1834 London and it really did happen,” wrote Reddit user Hayk Grigorian, who is a computer science student at Muhlenberg College in Pennsylvania.

For the past month, Grigorian has been developing what he calls TimeCapsuleLLM, a small AI language model (like a pint-sized distant cousin to ChatGPT) that has been trained entirely on texts from 1800–1875 London. Grigorian wants to capture an authentic Victorian voice in the AI model’s outputs. As a result, the AI model ends up spitting out text that’s heavy with biblical references and period-appropriate rhetorical excess.

Grigorian’s project joins a growing field of researchers exploring what some call “Historical Large Language Models” (HLLMs), a label usually reserved for models built on larger bases than the small one Grigorian is using. Similar projects include MonadGPT, trained on 11,000 texts from 1400 to 1700 CE, which can discuss topics using 17th-century knowledge frameworks, and XunziALLM, which generates classical Chinese poetry following ancient formal rules. These models offer researchers a chance to interact with the linguistic patterns of past eras.

According to Grigorian, TimeCapsuleLLM’s most intriguing recent output emerged from a simple test. When he prompted it with “It was the year of our Lord 1834,” the AI model—which is trained to continue text from wherever a user leaves off—generated the following:

It was the year of our Lord 1834 and the streets of London were filled with protest and petition. The cause, as many re counted, was not bound in the way of private, but having taken up the same day in the day of Lord Palmerston, the public will receive a short statement of the difficulties under which the day of law has reached us. It is a matter of deep regret, that the present events in the history of the world are clear, and consequently will be’known. It is not true that the very men who first settled in the Gospel at Jerusalem should have so extensive and so interesting a record of the prosperity and prosperity

Curious about the accuracy, Grigorian did some fact-checking. “The output also brought up Lord Palmerston,” he wrote, “and after a google search I learned that his actions resulted in the 1834 protests.”
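Continuation of this kind is standard causal-language-model generation; a minimal sketch with Hugging Face transformers shows the mechanics, with “gpt2” standing in as a placeholder checkpoint rather than TimeCapsuleLLM’s actual weights:

```python
# Prompt continuation with a causal language model: the model simply keeps
# predicting the next token from wherever the prompt leaves off.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")   # stand-in checkpoint
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "It was the year of our Lord 1834"
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=80, do_sample=True,
                        temperature=0.9, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```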

GitHub will be folded into Microsoft proper as CEO steps down

Putting GitHub more directly under its AI umbrella makes some degree of sense for Microsoft, given how hard it has pushed tools like GitHub Copilot, an AI-assisted coding tool. Microsoft has continually iterated on GitHub Copilot since introducing it in late 2021, adding support for multiple language models and “agents” that attempt to accomplish plain-language requests in the background as you work on other things.

However, there have been problems, too. Copilot inadvertently exposed the private code repositories of a few major companies earlier this year. And a recent Stack Overflow survey showed that trust in the accuracy of AI-assisted coding tools may be declining even as usage increases, with respondents citing the extra troubleshooting and debugging work caused by “solutions that are almost right, but not quite.”

It’s unclear whether Dohmke’s departure and the elimination of the CEO position will change much in terms of the way GitHub operates or the products it creates and maintains. As GitHub’s CEO, Dohmke was already reporting to Julia Liuson, president of the company’s developer division, and Liuson reported to CoreAI group leader Jay Parikh. The CoreAI group itself is only a few months old—it was announced by Microsoft CEO Satya Nadella in January, and “build[ing] out GitHub Copilot” was already one of the group’s responsibilities.

“Ultimately, we must remember that our internal organizational boundaries are meaningless to both our customers and to our competitors,” wrote Nadella when he announced the formation of the CoreAI group.

GitHub abused to distribute payloads on behalf of malware-as-a-service

Researchers from Cisco’s Talos security team have uncovered a malware-as-a-service operator that used public GitHub accounts as a channel for distributing an assortment of malicious software to targets.

The use of GitHub gave the malware-as-a-service (MaaS) operation a reliable and easy-to-use platform that’s greenlit in many enterprise networks that rely on the code repository for the software they develop. GitHub removed the three accounts that hosted the malicious payloads shortly after being notified by Talos.

“In addition to being an easy means of file hosting, downloading files from a GitHub repository may bypass Web filtering that is not configured to block the GitHub domain,” Talos researchers Chris Neal and Craig Jackson wrote Thursday. “While some organizations can block GitHub in their environment to curb the use of open-source offensive tooling and other malware, many organizations with software development teams require GitHub access in some capacity. In these environments, a malicious GitHub download may be difficult to differentiate from regular web traffic.”

Emmenhtal, meet Amadey

The campaign, which Talos said had been ongoing since February, used a previously known malware loader tracked under names including Emmenhtal and PeakLight. Researchers from security firm Palo Alto Networks and Ukraine’s major state cyber agency SSSCIP had already documented the use of Emmenhtal in a separate campaign that embedded the loader into malicious emails to distribute malware to Ukrainian entities. Talos found the same Emmenhtal variant in the MaaS operation, only this time the loader was distributed through GitHub.

The campaign using GitHub differed from the one targeting Ukrainian entities in another key way. Whereas the final payload in the Ukrainian campaign was a malicious backdoor known as SmokeLoader, the GitHub one installed Amadey, a separate, known malware platform. Amadey was first seen in 2018 and was initially used to assemble botnets. Talos said Amadey’s primary function is to collect system information from infected devices and download secondary payloads that are customized to each device’s characteristics and the specific goals of a given campaign.

Copilot exposes private GitHub pages, some removed by Microsoft

Screenshot showing Copilot continues to serve tools Microsoft took action to have removed from GitHub. Credit: Lasso

Lasso ultimately determined that Microsoft’s fix involved cutting off public access to a special Bing user interface that was once available at cc.bingj.com. The fix, however, didn’t appear to clear the private pages from the cache itself. As a result, the private information was still accessible to Copilot, which in turn would make it available to the Copilot user who asked.

The Lasso researchers explained:

Although Bing’s cached link feature was disabled, cached pages continued to appear in search results. This indicated that the fix was a temporary patch and while public access was blocked, the underlying data had not been fully removed.

When we revisited our investigation of Microsoft Copilot, our suspicions were confirmed: Copilot still had access to the cached data that was no longer available to human users. In short, the fix was only partial, human users were prevented from retrieving the cached data, but Copilot could still access it.

The post laid out simple steps anyone can take to find and view the same massive trove of private repositories Lasso identified.

There’s no putting toothpaste back in the tube

Developers frequently embed security tokens, private encryption keys, and other sensitive information directly into their code, despite best practices that have long called for such data to be provided through more secure means. The potential damage worsens when this code is made available in public repositories, another common security failing. The phenomenon has occurred over and over for more than a decade.
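For reference, the safer pattern those best practices point to is pulling secrets from the environment (or a secrets manager) at runtime instead of committing them; a minimal sketch, with GITHUB_TOKEN as an example variable name:

```python
# Read a secret from the environment at runtime rather than hardcoding it in source.
import os

def get_github_token() -> str:
    token = os.environ.get("GITHUB_TOKEN")  # example variable name
    if not token:
        raise RuntimeError("GITHUB_TOKEN is not set; refusing to fall back to a "
                           "hardcoded value.")
    return token

# A literal like TOKEN = "ghp_..." committed to a repository defeats this pattern
# entirely, since the value then lives in the repo's history.
```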

When these sorts of mistakes happen, developers often make the repositories private quickly, hoping to contain the fallout. Lasso’s findings show that simply making the code private isn’t enough. Once exposed, credentials are irreparably compromised. The only recourse is to rotate all credentials.

This advice still doesn’t address the problems resulting when other sensitive data is included in repositories that are switched from public to private. Microsoft incurred legal expenses to have tools removed from GitHub after alleging they violated a raft of laws, including the Computer Fraud and Abuse Act, the Digital Millennium Copyright Act, the Lanham Act, and the Racketeer Influenced and Corrupt Organizations Act. Company lawyers prevailed in getting the tools removed. To date, Copilot continues undermining this work by making the tools available anyway.

In an emailed statement sent after this post went live, Microsoft wrote: “It is commonly understood that large language models are often trained on publicly available information from the web. If users prefer to avoid making their content publicly available for training these models, they are encouraged to keep their repositories private at all times.”

Amid a flurry of hype, Microsoft reorganizes entire dev team around AI

Microsoft CEO Satya Nadella has announced a dramatic restructuring of the company’s engineering organization, which is pivoting the company’s focus to developing the tools that will underpin agentic AI.

Dubbed “CoreAI – Platform and Tools,” the new division rolls the existing AI platform team and the previous developer division (responsible for everything from .NET to Visual Studio) along with some other teams into one big group.

As for what this group will be doing specifically, it’s basically everything that’s mission-critical to Microsoft in 2025, as Nadella tells it:

This new division will bring together Dev Div, AI Platform, and some key teams from the Office of the CTO (AI Supercomputer, AI Agentic Runtimes, and Engineering Thrive), with the mission to build the end-to-end Copilot & AI stack for both our first-party and third-party customers to build and run AI apps and agents. This group will also build out GitHub Copilot, thus having a tight feedback loop between the leading AI-first product and the AI platform to motivate the stack and its roadmap.

To accomplish all that, “Jay Parikh will lead this group as EVP.” Parikh was hired by Microsoft in October; he previously worked as the VP and global head of engineering at Meta.

The fact that the blog post doesn’t say anything about .NET or Visual Studio, instead emphasizing GitHub Copilot and anything and everything related to agentic AI, says a lot about how Nadella sees Microsoft’s future priorities.

So-called AI agents are applications that are given specified boundaries (action spaces) and a large memory capacity to independently do subsets of the kinds of work that human office workers do today. Some company leaders and AI commentators believe these agents will outright replace jobs, while others are more conservative, suggesting they’ll simply be powerful tools to streamline the jobs people already have.

Yearlong supply-chain attack targeting security pros steals 390K credentials

Screenshot showing a graph tracking mining activity. Credit: Checkmarx

But wait, there’s more

On Friday, Datadog revealed that MUT-1244 employed additional means for installing its second-stage malware. One was through a collection of at least 49 malicious entries posted to GitHub that contained Trojanized proof-of-concept exploits for security vulnerabilities. Legitimate exploits of this kind help both malicious and benevolent security personnel better understand the extent of vulnerabilities, including how they can be exploited or patched in real-life environments.

A second major vector for spreading @0xengine/xmlrpc was through phishing emails. Datadog discovered MUT-1244 had left a phishing template, accompanied by 2,758 email addresses scraped from arXiv, a site frequented by professional and academic researchers.

A phishing email used in the campaign. Credit: Datadog

The email, directed to people who develop or research software for high-performance computing, encouraged them to install a CPU microcode update that would significantly improve performance. Datadog later determined that the emails had been sent from October 5 through October 21.

Additional vectors discovered by Datadog. Credit: Datadog

Further adding to the impression of legitimacy, several of the malicious packages were automatically included in legitimate sources, such as Feedly Threat Intelligence and Vulnmon. These sites included the malicious packages in proof-of-concept repositories for the vulnerabilities the packages claimed to exploit.

“This increases their look of legitimacy and the likelihood that someone will run them,” Datadog said.

The attackers’ use of @0xengine/xmlrpc allowed them to steal some 390,000 credentials from infected machines. Datadog has determined the credentials were for use in logging into administrative accounts for websites that run the WordPress content management system.

Taken together, the many facets of the campaign—its longevity, its precision, the professional quality of the backdoor, and its multiple infection vectors—indicate that MUT-1244 was a skilled and determined threat actor. The group did, however, err by leaving the phishing email template and addresses in a publicly available account.

The ultimate motives of the attackers remain unclear. If the goal were to mine cryptocurrency, there would likely be better populations than security personnel to target. And if the objective was targeting researchers—as other recently discovered campaigns have done—it’s unclear why MUT-1244 would also employ cryptocurrency mining, an activity that’s often easy to detect.

Reports from both Checkmarx and Datadog include indicators people can use to check if they’ve been targeted.

GitHub Copilot moves beyond OpenAI models to support Claude 3.5, Gemini

The large language model-based coding assistant GitHub Copilot will switch from using exclusively OpenAI’s GPT models to a multi-model approach over the coming weeks, GitHub CEO Thomas Dohmke announced in a post on GitHub’s blog.

First, Anthropic’s Claude 3.5 Sonnet will roll out to Copilot Chat’s web and VS Code interfaces over the next few weeks. Google’s Gemini 1.5 Pro will come a bit later.

Additionally, GitHub will soon add support for a wider range of OpenAI models, including o1-preview and o1-mini, which are intended to be stronger at advanced reasoning than GPT-4, which Copilot has used until now. Developers will be able to switch between the models (even mid-conversation) to tailor the model to fit their needs—and organizations will be able to choose which models will be usable by team members.

The new approach makes sense for users, as certain models are better at certain languages or types of tasks.

“There is no one model to rule every scenario,” wrote Dohmke. “It is clear the next phase of AI code generation will not only be defined by multi-model functionality, but by multi-model choice.”

It starts with the web-based and VS Code Copilot Chat interfaces, but it won’t stop there. “From Copilot Workspace to multi-file editing to code review, security autofix, and the CLI, we will bring multi-model choice across many of GitHub Copilot’s surface areas and functions soon,” Dohmke wrote.

There are a handful of additional changes coming to GitHub Copilot, too, including extensions, the ability to manipulate multiple files at once from a chat with VS Code, and a preview of Xcode support.

GitHub Spark promises natural language app development

In addition to the Copilot changes, GitHub announced Spark, a natural language tool for developing apps. Non-coders will be able to use a series of natural language prompts to create simple apps, while coders will be able to tweak more precisely as they go. In either use case, you’ll be able to take a conversational approach, requesting changes and iterating as you go, and comparing different iterations.

Winamp really whips open source coders into frenzy with its source release

As people in the many, many busy GitHub issue threads are suggesting, coding has come a long way since the heyday of the Windows-98-era Winamp player, and Winamp seems to have rushed its code onto a platform it does not really understand.

Winamp flourished around the same time as illegal MP3 networks such as Napster, Limewire, and Kazaa, providing a more capable means of organizing and playing deeply compressed music with incorrect metadata. After a web shutdown in 2013 that seemed inevitable in hindsight, Winamp’s assets were purchased by a company named Radionomy in 2014, and a new version was due out in 2019, one that aimed to combine local music libraries with web streaming of podcasts and radio.

Winamp did get that big update in 2022, though the app was “still in many ways an ancient app,” Ars’ Andrew Cunningham wrote then. There was support for music NFTs added at the end of 2022.

In its press release for the code availability, the Brussels-based Llama Group SA, with roughly 100 employees, says that “Tens of millions of users still use Winamp for Windows every month.” It plans to release “two major official versions per year with new features,” as well as offering Winamp for Creators, intended for artists or labels to manage their music, licensing, distribution, and monetization on various platforms.
