By making small adjustments to the frequencies at which the qubits operate, it’s possible to avoid these problems. This can be done when the Heron chip is calibrated, before it’s opened for general use.
Separately, the company has done a rewrite of the software that controls the system during operations. “After learning from the community, seeing how to run larger circuits, [we were able to] almost better define what it should be and rewrite the whole stack towards that,” Gambetta said. The result is a dramatic speed-up. “Something that took 122 hours now is down to a couple of hours,” he told Ars.
Since people are paying for time on this hardware, that’s good for customers now. However, it could also pay off in the longer run, as some errors can occur randomly, so less time spent on a calculation can mean fewer errors.
Deeper computations
Despite all those improvements, errors are still likely during any significant calculations. While it continues to work toward developing error-corrected qubits, IBM is focusing on what it calls error mitigation, which it first detailed last year. As we described it then:
“The researchers turned to a method where they intentionally amplified and then measured the processor’s noise at different levels. These measurements are used to estimate a function that produces similar output to the actual measurements. That function can then have its noise set to zero to produce an estimate of what the processor would do without any noise at all.”
The problem here is that using the function is computationally difficult, and the difficulty increases with the qubit count. So, while it’s still easier to do error mitigation calculations than simulate the quantum computer’s behavior on the same hardware, there’s still the risk of it becoming computationally intractable. But IBM has also taken the time to optimize that, too. “They’ve got algorithmic improvements, and the method that uses tensor methods [now] uses the GPU,” Gambetta told Ars. “So I think it’s a combination of both.”
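The core of that error-mitigation approach is extrapolation: measure an observable at deliberately amplified noise levels, fit a model, and evaluate the model at zero noise. Here's a minimal sketch of that idea in Python; the noise levels and measurement values are invented for illustration, and real implementations use more sophisticated fitting than a straight line.

```python
# Toy sketch of zero-noise extrapolation (ZNE): measure an observable
# at several amplified noise levels, fit a line by least squares, and
# read off the intercept as the "noiseless" estimate.

def zne_estimate(noise_levels, measured_values):
    """Fit a straight line to (noise, value) pairs and return its
    value at noise = 0 (the intercept)."""
    n = len(noise_levels)
    mean_x = sum(noise_levels) / n
    mean_y = sum(measured_values) / n
    cov = sum((x - mean_x) * (y - mean_y)
              for x, y in zip(noise_levels, measured_values))
    var = sum((x - mean_x) ** 2 for x in noise_levels)
    slope = cov / var
    return mean_y - slope * mean_x  # intercept = zero-noise estimate

# Noise amplified to 1x, 2x, 3x the native level; the measured
# expectation value decays as noise grows (made-up numbers).
levels = [1.0, 2.0, 3.0]
values = [0.80, 0.65, 0.50]
print(zne_estimate(levels, values))  # extrapolates back to ~0.95
```

The computational cost IBM is optimizing comes from doing this kind of estimation over many observables on many qubits, where the fitting involves far larger tensor computations than this two-parameter example.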
As we described earlier this year, operating a quantum computer will require a significant investment in classical computing resources, given the number of measurements and control operations that need to be executed and interpreted. That means operating a quantum computer will also require a software stack to control and interpret the flow of information from the quantum side.
But software also gets involved well before anything gets executed. While it’s possible to execute algorithms on quantum hardware by defining the full set of commands sent to the hardware, most users are going to want to focus on algorithm development, rather than the details of controlling any single piece of quantum hardware. “If everyone’s got to get down and know what the noise is, [use] performance management tools, they’ve got to know how to compile a quantum circuit through hardware, you’ve got to become an expert in too much to be able to do the algorithm discovery,” said IBM’s Jay Gambetta. So, part of the software stack that companies are developing to control their quantum hardware includes software that converts abstract representations of quantum algorithms into the series of commands needed to execute them.
IBM’s version of this software is called Qiskit (although it was made open source and has since been adopted by other companies). Recently, IBM made a couple of announcements regarding Qiskit, both benchmarking it in comparison to other software stacks and opening it up to third-party modules. We’ll take a look at what software stacks do before getting into the details of what’s new.
What does the software stack do?
It’s tempting to view IBM’s Qiskit as the equivalent of a compiler. And at the most basic level, that’s a reasonable analogy, in that it takes algorithms defined by humans and converts them to things that can be executed by hardware. But there are significant differences in the details. A compiler for a classical computer produces code that the computer’s processor converts to internal instructions that are used to configure the processor hardware and execute operations.
Even when using what’s termed “machine language,” programmers don’t directly control the hardware; they have no control over where on the hardware things are executed (i.e., which processor or execution unit within that processor), or even the order in which instructions are executed.
Things are very different for quantum computers, at least at present. For starters, everything that happens on the processor is controlled by external hardware, which typically acts by generating a series of laser or microwave pulses. So, software like IBM’s Qiskit or Microsoft’s Q# acts by converting the code it’s given into commands that are sent to hardware that’s external to the processor.
These “compilers” must also keep track of exactly which part of the processor things are happening on. Quantum computers act by performing specific operations (called gates) on individual or pairs of qubits; to do that, you have to know exactly which qubit you’re addressing. And, for things like superconducting qubits, where there can be device-to-device variations, which hardware qubits you end up using can have a significant effect on the outcome of the calculations.
As a result, most software stacks, Qiskit included, provide the option of directly addressing the hardware. If a programmer chooses not to, however, the software can transform generic instructions into a precise series of actions that will execute whatever algorithm has been encoded. That involves the software stack making choices about which physical qubits to use, what gates and measurements to execute, and what order to execute them in.
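One of those choices, picking physical qubits so that every two-qubit gate lands on a connected pair of the device, can be sketched in a few lines. This is a deliberately toy version (real compilers like Qiskit's transpiler use heuristics and insert SWAP gates rather than brute-force search, and the five-qubit coupling map below is an invented example):

```python
# Toy sketch of qubit layout selection: assign each abstract (logical)
# qubit to a physical qubit so that every two-qubit gate in the circuit
# acts on a physically connected pair. The coupling map is made up.

from itertools import permutations

COUPLING = {(0, 1), (1, 2), (1, 3), (3, 4)}  # undirected physical edges

def connected(a, b):
    return (a, b) in COUPLING or (b, a) in COUPLING

def find_layout(two_qubit_gates, n_logical, n_physical=5):
    """Brute-force search for an assignment logical -> physical under
    which every two-qubit gate touches adjacent physical qubits."""
    for layout in permutations(range(n_physical), n_logical):
        if all(connected(layout[q1], layout[q2])
               for q1, q2 in two_qubit_gates):
            return dict(enumerate(layout))
    return None  # a real compiler would insert SWAP gates instead

# A 3-qubit circuit with two-qubit gates between (0,1) and (1,2):
print(find_layout([(0, 1), (1, 2)], n_logical=3))
```

Because hardware qubits vary in quality, production compilers also weight this search by calibration data, preferring the qubits and couplings with the lowest error rates.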
The role of the software stack, however, is likely to expand considerably over the next few years. A number of companies are experimenting with hardware qubit designs that can flag when one type of common error occurs, and there has been progress with developing logical qubits that enable error correction. Ultimately, any company providing access to quantum computers will want to modify its software stack so that these features are enabled without requiring effort on the part of the people designing the algorithms.
Zynga must pay IBM nearly $45 million in damages after a jury ruled that popular games in its FarmVille series, as well as individual hits like Harry Potter: Puzzles and Spells, infringed on two early IBM patents.
In an SEC filing, Zynga reassured investors that “the patents at issue have expired and Zynga will not have to modify or stop operating any of the games at issue” as a result of the loss. But the substantial damages owed will likely have financial implications for Zynga parent company Take-Two Interactive Software, analysts said, unless Zynga is successful in its plans to overturn the verdict.
A Take-Two spokesperson told Ars: “We are disappointed in the verdict; however, believe we will prevail on appeal.”
For IBM, the win comes after a decade of failed attempts to stop what it claimed was Zynga’s willful infringement of its patents.
In court filings, IBM told the court that it first alerted Zynga to alleged infringement in 2014, detailing how its games leveraged patented technology from the 1980s that came about when IBM launched Prodigy.
But rather than negotiate with IBM, like tech giants Amazon, Apple, Google, and Facebook have, Zynga allegedly dodged accountability, delaying negotiations and making excuses to postpone meetings for years. In that time, IBM alleged that rather than end its infringement or license IBM’s technologies, Zynga “expanded its infringing activity” after “openly” admitting to IBM that “litigation would be the only remaining path” to end it.
This left IBM “no choice but to seek judicial assistance,” IBM told the court.
IBM argued that its patent, initially used to launch Prodigy, remains “fundamental to the efficient communication of Internet content.” Known as patent ‘849, that patent introduced “novel methods for presenting applications and advertisements in an interactive service that would take advantage of the computing power of each user’s personal computer (PC) and thereby reduce demand on host servers, such as those used by Prodigy,” which made it “more efficient than conventional systems.”
According to IBM’s complaint, “By harnessing the processing and storage capabilities of the user’s PC, applications could then be composed on the fly from objects stored locally on the PC, reducing reliance on Prodigy’s server and network resources.”
The jury found that Zynga infringed that patent, as well as a ‘719 patent designed to “improve the performance” of Internet apps by “reducing network communication delays.” That patent describes technology that improves an app’s performance by “reducing the number of required interactions between client and server,” IBM’s complaint said, and also makes it easier to develop and update apps.
The company told the court that licensing these early technologies helps sustain the company’s innovations today.
As of 2022, IBM confirmed that it has spent “billions of dollars on research and development” and that the company vigilantly protects those investments when it discovers newcomers like Zynga seemingly seeking to avoid those steep R&D costs by leveraging IBM innovations to fuel billions of dollars in revenue without paying IBM licensing fees.
“IBM’s technology is a key driver of Zynga’s success,” IBM argued back in 2022, and on Friday, the jury agreed.
“IBM is pleased with the jury verdict that recognizes Zynga’s infringement of IBM’s patents,” IBM’s spokesperson told Ars.
Cost of pre-Internet IBM licenses
In its defense, Zynga tried and failed to argue that the patents were invalid, including contesting the validity of the 1980s patent—which Zynga claimed never should have been issued, alleging it was due to “intent to deceive” the patent office by withholding information.
It’s currently unclear what licensing deal IBM offered to Zynga initially or how much Zynga could have paid to avoid damages awarded this week. IBM did not respond to Ars’ request to further detail terms of the failed deal.
But the 1980s patent in particular has been at the center of several lawsuits that IBM has raised to protect its early intellectual property from alleged exploitation by Internet companies. Back in 2006, when IBM sued Amazon, IBM executive John Kelly vowed to protect the company’s patents “through every means available.” IBM followed through on that promise throughout the 2010s, securing notable settlements from various companies, like Priceline and Twitter, where terms of the subsequent licensing deals were not disclosed.
However, IBM’s aggressive defense of its pre-Internet patents hasn’t dinged every Internet company. When Chewy pushed back on IBM’s patent infringement claims in 2021, the pet supplier managed to beat IBM’s claims by proving in 2022 that its platform was non-infringing, Reuters reported.
Through that lawsuit, the public got a rare look into how IBM values its patents, attempting to get Chewy to agree to pay $36 million to license its technologies before suing to demand at least $83 million in damages for alleged infringement. In the end, Chewy was right to refuse to license the tech just to avoid a court battle.
Now that some of IBM’s early patents have expired, IBM’s patent-licensing machine may start slowing down.
For Zynga, the cost of fighting IBM so far has not restricted access to its games or forced Zynga to redesign its platforms to be non-infringing, which were remedies sought in IBM’s initial prayer for relief in the lawsuit. But overturning the jury’s verdict to avoid paying millions in damages may be a harder hurdle to clear, as a jury has rejected what may be Zynga’s best defense, and the jury’s notes and unredacted verdict remain sealed.
According to Take-Two’s SEC filing, the jury got it wrong, and Take-Two plans to prove it: “Zynga believes this result is not supported by the facts and the law and intends to seek to overturn the verdict and reduce or eliminate the damages award through post-trial motions and appeal.”
Howdy, Arsians! Last year, we partnered with IBM to host an in-person event in the Houston area where we all gathered together, had some cocktails, and talked about resiliency and the future of IT. Location always matters for things like this, and so we hosted it at Space Center Houston and had our cocktails amidst cool space artifacts. In addition to learning a bunch of neat stuff, it was awesome to hang out with all the amazing folks who turned up at the event. Much fun was had!
This year, we’re back partnering with IBM again and we’re looking to repeat that success with not one, but two in-person gatherings—each featuring a series of panel discussions with experts and capping off with a happy hour for hanging out and mingling. Where last time we went central, this time we’re going to the coasts—both east and west. Read on for details!
September: San Jose, California
Our first event will be in San Jose on September 18, and it’s titled “Beyond the Buzz: An Infrastructure Future with GenAI and What Comes Next.” The idea will be to explore what generative AI means for the future of data management. The topics we’ll be discussing include:
Playing the infrastructure long game to address any kind of workload
Identifying infrastructure vulnerabilities with today’s AI tools
Infrastructure’s environmental footprint: Navigating impacts and responsibilities
We’re getting our panelists locked down right now, and while I don’t have any names to share, many will be familiar to Ars readers from past events—or from the front page.
As a neat added bonus, we’re going to host the event at the Computer History Museum, which any Bay Area Ars reader can attest is an incredibly cool venue. (Just nobody spill anything. I think they’ll kick us out if we break any exhibits!)
October: Washington, DC
Switching coasts, on October 29 we’ll set up shop in our nation’s capital for a similar show. This time, our event title will be “AI in DC: Privacy, Compliance, and Making Infrastructure Smarter.” Given that we’ll be in DC, the tone shifts a bit to some more policy-centric discussions, and the talk track looks like this:
The key to compliance with emerging technologies
Data security in the age of AI-assisted cyber-espionage
The best infrastructure solution for your AI/ML strategy
Same deal here with the speakers as with the September event—I can’t name names yet, but the list will be familiar to Ars readers, and I’m excited. We’re still considering venues, but hoping to find something that matches our previous events in terms of style and coolness.
Interested in attending?
While it’d be awesome if everyone could come, the old song and dance applies: space, as they say, will be limited at both venues. We’d like to make sure local folks in both locations get priority in being able to attend, so we’re asking anyone who wants a ticket to register for the events at the sign-up pages below. You should get an email immediately confirming we’ve received your info, and we’ll send another note in a couple of weeks with further details on timing and attendance.
On the Ars side, at minimum both our EIC Ken Fisher and I will be in attendance at both events, and we’ll likely have some other Ars staff showing up where we can—free drinks are a strong lure for the weary tech journalist, so there ought to be at least a few appearing at both. Hoping to see you all there!
8BitDo is releasing an IBM-inspired look for its $100 wireless mechanical keyboard. Keyboard enthusiasts love regaling normies with tales of IBM’s buckling spring keyboards and the precedent they set for today’s mechanical keyboards. But 8BitDo’s Retro Mechanical Keyboard M Edition doesn’t adopt very much from IBM’s iconic designs.
8BitDo’s Retro mechanical keyboards come in different looks that each pay tribute to classic tech. The tributes are subtle enough to avoid copyright issues. Similar to 8BitDo’s ‘80s Nintendo Entertainment System (NES) and Commodore 64 designs, the M Edition doesn’t have any official IBM logos. However, the M Edition’s color scheme, chunkier build, and typeface selection, including on the Tab key with arrows and elsewhere, are nods to IBM’s Model M, which first succeeded the Model F in 1985.
Of course, the keyboard’s naming, and the IBM behemoth and floppy disks strategically placed in marketing images, are nods to that, too:
Despite its IBM-blue striped B and A buttons, the M Edition won’t be sufficient for retro keyboard fans seeking the distinct, buckling-spring experience of a true Model M.
As mentioned, the M Edition is basically a new color scheme for 8BitDo’s wireless mechanical keyboard offering. The peripheral connects to Windows 10 and Android 9.0 and newer devices via a USB-A cable, its wireless USB-A 2.4 GHz dongle, or Bluetooth 5.0. 8BitDo claims it can endure up to 200 hours of use before needing a recharge. The M Edition also comes with the detachable A and B “Super Buttons” that connect to the keyboard via a 3.5 mm jack and are programmable without software.
Differing from the Model M’s buckling spring switches, the M Edition has Kailh Box White V2 mechanical switches, which typically feel clicky and light to press. With crisp clicks and noticeable but not fatiguing feedback, they’re a good modern mechanical switch for frequent typing.
But IBM’s ’80s keyboard didn’t use modern mechanical switches. It used buckling springs over a membrane sheet that made keys feel heavier to push than the keys on the preceding Model F keyboard (which used buckling springs over a capacitive PCB). 8BitDo’s switches are hot-swappable, though, making them easily replaceable.
The M Edition’s keycaps have an MDA-profile-like height, according to 8BitDo’s website. True Model M keycaps all had the same profile. The M Edition’s keycaps are doubleshot like the true Model M’s were, but the new keyboard uses cheaper ABS plastic instead of PBT.
While dimensions of 14.8×6.7×1.8 inches make the M Edition fairly large for a tenkeyless keyboard, I would have loved to see 8BitDo commit to the vintage look with a thicker border north of the keys and more height toward the top.
But smaller keyboards that let owners reclaim desk space are the more common prebuilt mechanical keyboard releases these days, especially for gaming-peripherals brands like 8BitDo. A gaming focus also helps explain why there’s no numpad on the M Edition. 8BitDo is releasing a detachable numpad to go with the keyboard. It can connect via Bluetooth, dongle, or cable, but it will cost $45 extra.
8BitDo’s new keyboard colorway may appeal to people craving a hint of IBM nostalgia that doesn’t make their workspace look like it’s completely stuck in the past. But considering the fandom and legacy of old-school IBM keyboards’ switches and looks, shades of gray and blue won’t feel retro enough for many IBM keyboard fans.
If you want a real Model M, there’s a market of found and restored models available online and in thrift stores and electronics stores. For a modern spin, like USB ports and Mac support, Unicomp also makes new Model M keyboards that are truer to the original IBM design, particularly in their use of buckling spring switches.
There’s a strong consensus that tackling most useful problems with a quantum computer will require that the computer be capable of error correction. There is absolutely no consensus, however, about what technology will allow us to get there. A large number of companies, including major players like Microsoft, Intel, Amazon, and IBM, have all committed to different technologies to get there, while a collection of startups are exploring an even wider range of potential solutions.
We probably won’t have a clearer picture of what’s likely to work for a few years. But there’s going to be lots of interesting research and development work between now and then, some of which may ultimately represent key milestones in the development of quantum computing. To give you a sense of that work, we’re going to look at three papers that were published within the last couple of weeks, each of which tackles a different aspect of quantum computing technology.
Hot stuff
Error correction will require connecting multiple hardware qubits to act as a single unit termed a logical qubit. This spreads a single bit of quantum information across multiple hardware qubits, making it more robust. Additional qubits are used to monitor the behavior of the ones holding the data and perform corrections as needed. Some error correction schemes require over a hundred hardware qubits for each logical qubit, meaning we’d need tens of thousands of hardware qubits before we could do anything practical.
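The simplest illustration of why spreading one bit of information across several hardware qubits helps is the classical repetition code, shown below as a toy sketch. Real quantum codes are far more involved, since they must also catch phase errors and cannot simply copy quantum states, so this captures only the classical intuition behind the redundancy.

```python
# Toy sketch of the repetition-code idea behind logical qubits:
# one logical bit is stored as three copies, and a majority vote
# corrects any single flipped copy.

def encode(bit):
    """Store one logical bit redundantly across three 'hardware' bits."""
    return [bit, bit, bit]

def correct(copies):
    """Majority vote recovers the logical bit despite one error."""
    return 1 if sum(copies) >= 2 else 0

codeword = encode(1)
codeword[0] ^= 1          # a single bit-flip error strikes
print(correct(codeword))  # the majority vote still decodes to 1
```

The overhead scales the same way the article describes: protecting against more simultaneous errors requires more redundant copies, which is why some quantum codes need over a hundred hardware qubits per logical qubit.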
A number of companies have looked at that problem and decided we already know how to create hardware on that scale—just look at any silicon chip. So, if we could etch useful qubits through the same processes we use to make current processors, then scaling wouldn’t be an issue. Typically, this has meant fabricating quantum dots on the surface of silicon chips and using these to store single electrons that can hold a qubit in their spin. The rest of the chip holds more traditional circuitry that performs the initiation, control, and readout of the qubit.
This creates a notable problem. Like many other qubit technologies, quantum dots need to be kept below one Kelvin in order to keep the environment from interfering with the qubit. And, as anyone who’s ever owned an x86-based laptop knows, all the other circuitry on the silicon generates heat. So, there’s the very real prospect that trying to control the qubits will raise the temperature to the point that the qubits can’t hold onto their state.
That might not be the problem we thought, according to some work published in Wednesday’s Nature. A large international team that includes people from the startup Diraq has shown that a silicon quantum dot processor can work well at the relatively toasty temperature of 1 Kelvin, up from the usual millikelvin temperatures at which these processors normally operate.
The work was done on a two-qubit prototype made with materials that were specifically chosen to improve noise tolerance; the experimental procedure was also optimized to limit errors. The team then performed normal operations starting at 0.1 K, and gradually ramped up the temperatures to 1.5 K, checking performance as they did so. They found that a major source of errors, state preparation and measurement (SPAM), didn’t change dramatically in this temperature range: “SPAM around 1 K is comparable to that at millikelvin temperatures and remains workable at least until 1.4 K.”
The error rates they did see depended on the state they were preparing. One particular state (both spin-up) had a fidelity of over 99 percent, while the rest were less constrained, at somewhere above 95 percent. States had a lifetime of over a millisecond, which qualifies as long-lived in the quantum world.
All of which is pretty good, and suggests that the chips can tolerate reasonable operating temperatures, meaning on-chip control circuitry can be used without causing problems. The error rates of the hardware qubits are still well above those that would be needed for error correction to work. However, the researchers suggest that they’ve identified error processes that can potentially be compensated for. They expect that the ability to do industrial-scale manufacturing will ultimately lead to working hardware.
In the annals of PC history, IBM’s OS/2 represents a road not taken. Developed in the waning days of IBM’s partnership with Microsoft—the same partnership that had given us a decade or so of MS-DOS and PC-DOS—OS/2 was meant to improve on areas where DOS was falling short on modern systems. Better memory management, multitasking capabilities, and a usable GUI were all among the features introduced in version 1.x.
But Microsoft was frustrated with some of IBM’s goals and demands, and the company continued to develop an operating system called Windows on its own. Where IBM wanted OS/2 to be used mainly to boost IBM-made PCs and designed it around the limitations of Intel’s 80286 CPU, Windows was being created with the booming market for PC-compatible clones in mind. Windows 1.x and 2.x failed to make much of a dent, but 1990’s Windows 3.0 was a hit, and it came preinstalled on many consumer PCs; Microsoft and IBM broke off their partnership shortly afterward, making OS/2 version 1.2 the last one publicly released and sold with Microsoft’s involvement.
But Microsoft had done a lot of work on version 2.0 of OS/2 at the same time as it was developing Windows. It was far enough along that preview screenshots appeared in PC Magazine, and early builds were shipped to developers who could pay for them, but it was never formally released to the public.
Recently, however, software archaeologist Neozeed published a stable internal preview of Microsoft’s OS/2 2.0 to the Internet Archive, along with working virtual machine disk images for VMware and 86Box. The preview, bought by Brian Ledbetter on eBay for $650 plus $15.26 in shipping, dates to July 1990 and would have cost developers who wanted it a whopping $2,600. A lot to pay for a version of an operating system that would never see the light of day!
The Microsoft-developed build of OS/2 2.0 bears only a passing resemblance to the 32-bit version of OS/2 2.0 that IBM finally shipped on its own in April 1992. Neozeed has published a more thorough exploration of Microsoft’s version, digging around in its guts and getting some early Windows software running (the ability to run DOS and Windows apps was simultaneously a selling point of OS/2 and a reason for developers not to create OS/2-specific apps, one of the things that helped to doom OS/2 in the end). It’s a fascinating detail from a turning point in the history of the PC as we know it today, but as a usable desktop operating system, it leaves something to be desired.
This unreleased Microsoft-developed OS/2 build isn’t the first piece of Microsoft-related software history that has been excavated in the last few months. In January, an Internet Archive user discovered and uploaded an early build of 86-DOS, the software that Microsoft bought and turned into MS-DOS/PC-DOS for the original IBM PC 5150. Funnily enough, these unreleased previews serve as bookends for IBM and Microsoft’s often-contentious partnership.
As part of the “divorce settlement” between Microsoft and IBM, IBM would take over the development and maintenance of OS/2 1.x and 2.x while Microsoft continued to work on a more advanced far-future version 3.0 of OS/2. This operating system was never released as OS/2, but it would eventually become Windows NT, Microsoft’s more stable business-centric version of Windows. Windows NT merged with the consumer versions of Windows in the early 2000s with Windows 2000 and Windows XP, and those versions gradually evolved into Windows as we know it today.
It has been 18 years since IBM formally discontinued its last release of OS/2, but as so often happens in computing, the software has found a way to live on. ArcaOS is a semi-modernized, intermittently updated branch of OS/2 updated to run on modern hardware while still supporting the ability to run MS-DOS and 16-bit Windows apps.
In a move that marks the end of an era, New Mexico State University (NMSU) recently announced the impending closure of its Hobbes OS/2 Archive on April 15, 2024. For over three decades, the archive has been a key resource for users of the IBM OS/2 operating system and its successors, which once competed fiercely with Microsoft Windows.
In a statement made to The Register, a representative of NMSU wrote, “We have made the difficult decision to no longer host these files on hobbes.nmsu.edu. Although I am unable to go into specifics, we had to evaluate our priorities and had to make the difficult decision to discontinue the service.”
Hobbes is hosted by the Department of Information & Communication Technologies at New Mexico State University in Las Cruces, New Mexico. In the official announcement, the site reads, “After many years of service, hobbes.nmsu.edu will be decommissioned and will no longer be available. As of April 15th, 2024, this site will no longer exist.”
We reached out to New Mexico State University to inquire about the history of the Hobbes archive but did not receive a response. The earliest record we’ve found of the Hobbes archive online is this 1992 Walnut Creek CD-ROM collection that gathered up the contents of the archive for offline distribution. At around 32 years old, minimum, that makes Hobbes one of the oldest software archives on the Internet, akin to the University of Michigan’s archives and ibiblio at UNC.
Archivists such as Jason Scott of the Internet Archive have stepped up to say that the files hosted on Hobbes are safe and already mirrored elsewhere. “Nobody should worry about Hobbes, I’ve got Hobbes handled,” wrote Scott on Mastodon in early January. OS/2 World.com also published a statement about making a mirror. But it’s still notable whenever such an old and important piece of Internet history bites the dust.
Like many archives, Hobbes started as an FTP site. “The primary distribution of files on the Internet were via FTP servers,” Scott tells Ars Technica. “And as FTP servers went down, they would also be mirrored as subdirectories in other FTP servers. Companies like CDROM.COM / Walnut Creek became ways to just get a CD-ROM of the items, but they would often make the data available at http://ftp.cdrom.com to download.”
The Hobbes site is a priceless digital time capsule. You can still find the Top 50 Downloads page, which includes sound and image editors, and OS/2 builds of the Thunderbird email client. The archive contains thousands of OS/2 games, applications, utilities, software development tools, documentation, and server software dating back to the launch of OS/2 in 1987. There’s a certain charm in running across OS/2 wallpapers from 1990, and even the archive’s Update Policy is a historical gem—last updated on March 12, 1999.
The legacy of OS/2
OS/2 began as a joint venture between IBM and Microsoft, undertaken as a planned replacement for IBM PC DOS (also called “MS-DOS” in the form sold by Microsoft for PC clones). Despite advanced capabilities like 32-bit processing and multitasking, OS/2 struggled to gain traction against Windows. The partnership between IBM and Microsoft dissolved after the success of Windows 3.0, leading to divergent paths in OS strategies for the two companies.
Through iterations like the Warp series, OS/2 established a key presence in niche markets that required high stability, such as ATMs and the New York subway system. Today, its legacy continues in specialized applications and in newer versions (like eComStation) maintained by third-party vendors—despite being overshadowed in the broader market by Linux and Windows.
A footprint like that is worth preserving, and the loss of one of OS/2’s primary archives, even if mirrored elsewhere, is a cultural blow. Hobbes has reportedly almost disappeared before but received a stay of execution. In the comments section for an article on The Register, someone named “TrevorH” wrote, “This is not the first time that Hobbes has announced it’s going away. Last time it was rescued after a lot of complaints and a number of students or faculty came forward to continue to maintain it.”
As the final shutdown approaches in April, the legacy of Hobbes is a reminder of the importance of preserving the digital heritage of software for future generations—so that decades from now, historians can look back and see how things got to where they are today.