Author name: Kris Guyer


RFK Jr. adds more anti-vaccine members to CDC vaccine advisory panel

Kirk Milhoan, a pediatric cardiologist who is a senior fellow at the Independent Medical Alliance (formerly Front Line COVID-19 Critical Care Alliance), which promotes misinformation about COVID-19 vaccines and touts unproven and dubious COVID-19 treatments. Those include the malaria drug hydroxychloroquine, the de-worming drug ivermectin, and various concoctions of vitamins and other drugs. Milhoan has stated that mRNA COVID-19 vaccines should be removed from the market, telling KFF in March: “We should stop it and test it more before we move forward.”

Evelyn Griffin, an obstetrician and gynecologist in Louisiana who reportedly lost her job for refusing to get a COVID-19 vaccine. In a speech at a Louisiana Health Freedom Day in May 2024, Griffin claimed that doctors “blindly believed” that mRNA COVID-19 vaccines were safe. She has also claimed that the vaccines cause “bizarre and rare conditions,” according to the Post.

Hillary Blackburn, a pharmacist in St. Louis. Reuters reports that she is the daughter-in-law of Sen. Marsha Blackburn (R-Tenn.), who has opposed vaccine mandates.

Raymond Pollak, a semi-retired transplant surgeon who filed a whistleblower lawsuit against the University of Illinois Hospital in 1999, alleging that the hospital manipulated patient data to increase patients’ chances of receiving donor livers. The hospital settled the suit, paying $2.5 million while denying wrongdoing.

ACIP is scheduled to meet at the end of this week, on September 18 and 19. According to an agenda recently posted online, the committee will vote on recommendations for a measles, mumps, rubella, and varicella (MMRV) combination vaccine, the hepatitis B vaccine, and this year’s updated COVID-19 vaccines. Vaccine experts widely fear that the committee will rescind recommendations and restrict access to those vaccines. Such moves would likely create new, potentially insurmountable barriers for people, including children, who need vaccines.

ACIP-recommended vaccines are required to be covered by private health insurance plans and by the Vaccines for Children program, which provides free vaccines to Medicaid-eligible, underinsured, and uninsured kids (about half of American children). Without an ACIP recommendation for a vaccine, private insurance coverage would become an open question, and the children who rely on the program could lose access entirely.



NASA closing its original repository for Columbia artifacts to tours

NASA is changing the way that its employees come in contact with, and remember, one of its worst tragedies.

In the wake of the 2003 loss of the space shuttle Columbia and its STS-107 crew, NASA created a program to use the orbiter’s debris for research and education at Kennedy Space Center in Florida. Agency employees were invited to see what remained of the space shuttle as a powerful reminder as to why they had to be diligent in their work. Access to the Columbia Research and Preservation Office, though, was limited as a result of its location and related logistics.

To address that and open up the experience to more of the workforce at Kennedy, the agency has quietly begun work to establish a new facility.

“The room, titled Columbia Learning Center (CLC), is a whole new concept,” a NASA spokesperson wrote in an email. “There are no access requirements; anyone at NASA Kennedy can go in any day of the week and stay as long as they like. The CLC will be available whenever employees need the inspiration and message for generations to come.”

Debris depository

On February 1, 2003, Columbia was making its way back from a 16-day science mission in Earth orbit when the damage that it suffered during its launch resulted in the orbiter breaking apart over East Texas. Instead of landing at Kennedy as planned, Columbia fell to the ground in more than 85,000 pieces.

The tragedy claimed the lives of commander Rick Husband, pilot Willie McCool, mission specialists David Brown, Kalpana Chawla, Michael Anderson, and Laurel Clark, and payload specialist Ilan Ramon of Israel.



60 years after Gemini, newly processed images reveal incredible details


“It’s that level of risk that they were taking. I think that’s what really hit home.”

Before / after showing the image transformation. Buzz Aldrin is revealed as he takes the first selfie in space on Gemini 12, November 12, 1966. Credit: NASA / ASU / Andy Saunders


Six decades have now passed since some of the most iconic Project Gemini spaceflights. The 60th anniversary of Gemini 4, when Ed White conducted the first US spacewalk, came in June. The 60th anniversary of the end of the next mission, Gemini 5, passed just two weeks ago. These missions are now largely forgotten, as most of the Americans who were alive to witness them are now deceased.

However, during these early years of spaceflight, NASA engineers and astronauts cut their teeth on a variety of spaceflight firsts, flying a series of harrowing missions during which it seems a miracle that no one died.

Because the Gemini missions, as well as NASA’s first human spaceflight program, Mercury, yielded such amazing stories, I was thrilled to learn that a new book has recently been published—Gemini & Mercury Remastered—that brings them back to life in vivid color.

The book is a collection of 300 photographs from NASA’s Mercury and Gemini programs of the 1960s, which Andy Saunders has meticulously restored, then deeply researched to more fully tell the stories behind them. The end result is a beautiful and powerful reminder of just how brave America’s first pioneers in space were. What follows is a lightly edited conversation with Saunders about how he developed the book and some of his favorite stories from it.

Ars: Why put out a book on Mercury and Gemini now?

Andy Saunders: Well, it’s the 60th anniversaries of the Gemini missions, but the book is really the prequel to my first book, Apollo Remastered. This is about the missions that came before. So it takes us right back to the very dawn of human space exploration, back to the very beginning, and this was always a project I was going to work on next. Because, as well as being obviously very important in spaceflight history, they’re very important in terms of human history, even human evolution, you know, the first time we were able to escape Earth.

For tens of thousands of years, civilizations have looked up and dreamt of leaving Earth and voyaging to the stars. And this golden era in the early 1960s is when that ancient dream finally became a reality. Also, of course, it was the first opportunity to look back at Earth and gain that unique perspective. But I think it’s really the photographs specifically that will just forever symbolize and document the beginning of our expansion out into the cosmos. You know, of course, we went to the Moon with Apollo. We’ll go back with Artemis. We spent long periods on the International Space Station. We’ll walk on Mars. We’ll eventually become a multi-planetary species. But this is where it all began and how it all began.

Ars: They used modified Hasselblad cameras during Apollo to capture these amazing images. What types of cameras were used during Mercury and Gemini?

Saunders: Mercury used more basic cameras. So on the very first missions, NASA didn’t want the astronaut to take a camera on board. The capsules were tiny. They were very busy. They were very short missions, obviously very groundbreaking missions. So, on the first couple of missions, there was a camera out of the porthole window, just taking photographs automatically. But it was John Glenn on his mission (Mercury-Atlas 6) who said, “No, I want to take a camera. People want to know what it’s going to be like to be an astronaut. They’re going to want to look at Earth through the window. I’m seeing things no human has ever seen before.” So he literally saw a $40 camera in a drugstore at Cocoa Beach on his way back from a haircut. He thought, “That’s perfect.” And he bought it himself, and then NASA adapted it. They put a pistol grip on to help him use it. And with it, he took the first still photographs of Earth from space.

So it was the early astronauts that kind of drove the desire to take cameras themselves, but they were quite basic. Wally Schirra (Mercury-Atlas 8) then took the first Hasselblad. He wanted medium format, better quality, but really, the photographs from Mercury aren’t as stunning as Gemini. It’s partly the windows and the way they took the photos, and they’d had little experience. Also, preservation clearly wasn’t high up on the agenda in Mercury, because the original film is evidently in a pretty bad state. The first American in space is an incredibly important moment in history. But every single frame of the original film of Alan Shepard’s flight was scribbled over with felt pen, it’s torn, and it’s fixed with like a piece of sticky tape. But it’s a reminder that these weren’t taken for their aesthetic quality. They weren’t taken for posterity. You know, they were technical information. The US was trying to catch up with the Soviets. Preservation wasn’t high up on the agenda.

This is not some distant planet seen in a sci-fi movie; it’s our Earth, in real life, as we explored space in the 1960s. The Sahara desert, photographed from Gemini 11, September 14, 1966. As we stand at the threshold of a new space age, heading back to the Moon, onward to Mars and beyond, the photographs taken during Mercury and Gemini will forever symbolize and document the beginning of humankind’s expansion out into the cosmos. Credit: NASA / ASU / Andy Saunders

Ars: I want to understand your process. How many photos did you consider for this book?

Saunders: With Apollo, they took about 35,000 photographs. With Mercury and Gemini, there were about 5,000. Which I was quite relieved about.  So yeah, I went through all 5,000 they took. I’m not sure how much 16 millimeter film in terms of time, because it was at various frame rates, but a lot of 16 millimeter film. So I went through every frame of film that was captured from launch to splashdown on every mission.

Ars: Out of that material, how much did you end up processing?

Saunders: What I would first do is have a quick look, particularly if there’s apparently nothing in them, because a lot of them are very underexposed. But with digital processing, like I did with the cover of the Apollo book, we can pull out stuff that you actually can’t see in the raw file. So it’s always worth taking a look. So do a very quick edit, and then if it’s not of interest, it’s discarded. Or it might be that clearly an important moment was happening, even if it’s not a particularly stunning photograph, I would save that one. So I was probably down from 5,000 to maybe 800, and then do a better edit on it.

And then the final 300 that are in the book are those that are either aesthetically stunning, or they’re a big transformation, or they show something important that happened on the mission, or a historically significant moment. But also, what I want to do with the book, as well as showing the photographs, is tell the stories, these incredible human stories that emerged because of the risks they were taking. So to do that, I effectively reconstructed every mission from launch to splashdown by using lots of different pieces of information in order to map the photography onto a timeline so that it can then tell the story through the captions. So a photograph might be in there simply to help tell part of the story.

Ars: What was your favorite story to tell?

Saunders: Well, perhaps in terms of a chapter and a mission, I’d say Gemini 4 is kind of the heart of the book. You know, first US space walk, quite a lot of drama occurred when they couldn’t close the hatch. There are some quite poignant shots, particularly of Ed White, of course, who later lost his life in the Apollo 1 fire. But in terms of the story, I mean, Gemini 9A was just, there needs to be a movie about just Gemini 9A. Right from the start, from losing the prime crew, and then just what happened out on Gene Cernan’s EVA, how he got back into the capsule alive is quite incredible, and all this detail I’ve tried to cover because he took his camera. So he called it the spacewalk from hell. Everything that could go wrong went wrong. He was incredibly exhausted, overheated. His visor steamed over. He went effectively blind, and he was at the back of the adapter section. This is at a point when NASA just hadn’t mastered EVA. So, simply how you maneuver in space, they just hadn’t mastered, so he was exhausted. He was almost blind. Then he lost communication with Tom Stafford, his command pilot. He tore his suit, because, of course, back then, there were all kinds of jagged parts on the spacecraft.

And then when he’s finally back in the hatch, he was quite a big chap, and they couldn’t close the hatch, so he was bent double trying to close the hatch. He started to see stars. He said, “Tom, if we don’t close this hatch now and re-pressurize, I am going to die.” They got it closed, got his helmet off, and Tom Stafford said he just looked like someone that had spent far too long in a sauna. Stafford sprayed him with a water hose to kind of cool him down. So what happened on that mission is just quite incredible. But there was something on every mission, you know, from the sinking of Gus Grissom’s Liberty Bell and him almost drowning, to the heat shield coming loose (or at least an indicator suggesting it was) on Glenn’s mission. There’s an image of that in the book. Like I said, I mapped everything to the timeline, and worked out the frame rates, and we’ve got the clock we can see over his shoulder. So I could work out exactly when he was at the point of maximum heating through reentry, when part of the strapping that kept the retro pack on (retained to try and hold the heat shield in place) hit the window, and he’s talking, but no one was listening, because it was during radio blackout.

After being informed his heat shield may have come loose, John Glenn is holding steadfast in the face of real uncertainty, as he observes the retro pack burn up outside his window, illuminating the cabin in an orange glow, during re-entry on February 20, 1962. “This is Friendship Seven. I think the pack just let go … A real fireball outside! … Great chunks of that retro pack breaking off all the way through!”

Credit: NASA / Andy Saunders


The process I used for this, on the low-quality 16 mm film, was to stack hundreds and hundreds of frames to bring out incredible detail. You can almost see the pores in his skin. To see this level of detail, to me, it’s just like a portrait of courage. There he is, holding steadfast, not knowing if he’s about to burn up in the atmosphere. So that was quite a haunting image, if you like, to be able to help you step on board, you know, these tiny Mercury spacecraft, to see them, to see what they saw, to look out the windows and see how they saw it.
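Stacking works because random film grain varies from frame to frame while real detail stays put, so averaging many aligned frames suppresses the noise and lets the underlying image emerge. Below is a minimal sketch of the idea in Python, assuming pre-aligned, same-size grayscale frames; Saunders’ actual workflow is far more involved, with sub-pixel alignment and careful sharpening.

```python
# A minimal sketch of frame stacking to reduce film grain, assuming the
# frames are already aligned grayscale images of identical size.
import numpy as np
from PIL import Image
from pathlib import Path

def stack_frames(frame_dir: str, pattern: str = "*.png") -> Image.Image:
    paths = sorted(Path(frame_dir).glob(pattern))
    if not paths:
        raise FileNotFoundError(f"no frames matching {pattern} in {frame_dir}")
    # Accumulate in float to avoid 8-bit clipping while summing.
    acc = None
    for p in paths:
        frame = np.asarray(Image.open(p).convert("L"), dtype=np.float64)
        acc = frame if acc is None else acc + frame
    mean = acc / len(paths)
    # Random grain averages out; real detail reinforces itself.
    return Image.fromarray(np.clip(mean, 0, 255).astype(np.uint8))
```

Averaging N frames improves the signal-to-noise ratio roughly as the square root of N, which is how hundreds of soft 16 mm frames can add up to a portrait where you can almost see the pores in Glenn’s skin.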

Ars: What was new or surprising to you as you spent so much time with these photos and looking at the details?

Saunders: The human side to them. Now that we can see them this clearly, they seem to have an emotional depth to them. And it’s that level of risk that they were taking. I think that’s what really hit home. The Earth shots are stunning. You know, you can almost feel the scale, particularly with a super wide lens, and the altitudes they flew to. And you can just imagine what it must have been like out on an EVA, for example. I think Gene Cernan said it was like sitting on God’s front porch, the view he had on his EVA. So those Earth shots are stunning, but it’s really the human side that hits home for me. I read every word of every transcript of every mission. All the conversations were recorded on tape, between the air and the ground, and between the astronauts when they were out of ground contact, and reading those, it really hits home what they were doing. I found myself holding my breath, and, you know, my shoulders were stiff.

Ars: So what’s next? I mean, there’s only about 100 million photos from the Space Shuttle era.

Saunders: Thankfully, they weren’t all taken on film. So if I wanted to complete space on film, then what I haven’t yet done is Apollo-Soyuz, Skylab, and the first, whatever it is, 20 percent of the shuttle. So maybe that’s next. But I would just like a rest, because I’ve been doing this now since the middle of 2019, literally nonstop. It’s all I’ve done with Apollo and now Mercury and Gemini. The books make a really nice set in that they’re exactly the same size. So it covers the first view of the curvature of Earth and space right through to our last steps on the Moon.


Eric Berger is the senior space editor at Ars Technica, covering everything from astronomy to private space to NASA policy, and author of two books: Liftoff, about the rise of SpaceX; and Reentry, on the development of the Falcon 9 rocket and Dragon. A certified meteorologist, Eric lives in Houston.



Education report calling for ethical AI use contains over 15 fake sources

AI language models like those that power ChatGPT, Gemini, and Claude excel at producing exactly this kind of believable fiction when they lack actual information on a topic, because they first and foremost produce plausible outputs, not accurate ones. If no patterns in the training data match what the user is seeking, they will create the best approximation based on the statistical patterns learned during training. Even AI models that can search the web for real sources can potentially fabricate citations, choose the wrong ones, or mischaracterize them.

“Errors happen. Made-up citations are a totally different thing where you essentially demolish the trustworthiness of the material,” Josh Lepawsky, the former president of the Memorial University Faculty Association who resigned from the report’s advisory board in January, told CBC, citing a “deeply flawed process.”

The irony runs deep

The presence of potentially AI-generated fake citations becomes especially awkward given that one of the report’s 110 recommendations specifically states the provincial government should “provide learners and educators with essential AI knowledge, including ethics, data privacy, and responsible technology use.”

Sarah Martin, a Memorial political science professor who spent days reviewing the document, discovered multiple fabricated citations. “Around the references I cannot find, I can’t imagine another explanation,” she told CBC. “You’re like, ‘This has to be right, this can’t not be.’ This is a citation in a very important document for educational policy.”

When contacted by CBC, co-chair Karen Goodnough declined an interview request, writing in an email: “We are investigating and checking references, so I cannot respond to this at the moment.”

The Department of Education and Early Childhood Development acknowledged awareness of “a small number of potential errors in citations” in a statement to CBC from spokesperson Lynn Robinson. “We understand that these issues are being addressed, and that the online report will be updated in the coming days to rectify any errors.”



Jef Raskin’s cul-de-sac and the quest for the humane computer


“He wanted to make [computers] more usable and friendly to people who weren’t geeks.”

Consider the cul-de-sac. It leads off the main street past buildings of might-have-been to a dead end disconnected from the beaten path. Computing history, of course, is filled with such terminal diversions, most never to be fully realized, and many for good reason. Particularly when it comes to user interfaces and how humans interact with computers, a lot of wild ideas deserved the obscure burials they got.

But some deserved better. Nearly every aspiring interface designer believed the way we were forced to interact with computers was limiting and frustrating, but one man in particular felt the emphasis on design itself missed the forest for the trees. Rather than drowning in visual metaphors or arcane iconographies doomed to be as complex as the systems they represented, the way we interact with computers should, he believed, stress functionality first, simultaneously considering both what users need to do and the cognitive limits they have. It was no longer enough that an interface be usable by a human—it must be humane as well.

What might a computer interface based on those principles look like? As it turns out, we already know.

The man was Jef Raskin, and this is his cul-de-sac.

The Apple core of the Macintosh

It’s sometimes forgotten that Raskin was the originator of the Macintosh project in 1979. Raskin had come to Apple with a master’s in computer science from Penn State University, six years as an assistant professor of visual arts at the University of California, San Diego (UCSD), and his own consulting company. Apple co-founder Steve Jobs subsequently hired Raskin’s company to write the Apple II’s BASIC programming manual, and Raskin joined Apple as manager of publications in 1978.

Raskin’s work on documentation and testing, combined with his technical acumen, gave him outsized influence within the young company. As the 40-column uppercase-only Apple II was ill-suited for Raskin’s writing, Apple developed a text editor and an 80-column display card, and Raskin leveraged his UCSD contacts to port UCSD Pascal and the p-System virtual machine to the Apple II when Steve Wozniak developed the Apple II’s floppy disk drives. (Apple sold this as Apple Pascal, and many landmark software programs like the Apple Presents Apple tutorial were written in it.)

But Raskin nevertheless concluded that a complex computer (by the standards of the day) could never exist in quantity, nor be usable by enough people to matter. In his 1979 essay “Computers by the Millions,” he argued against systems like the Apple II and the in-development Apple III that relied on expansion slots and cards for many advanced features. “What was not said was that you then had the rather terrible task of writing software to support these new ‘boards,’” he wrote. “Even the more sophisticated operating systems still required detailed understanding of the add-ons… This creates a software nightmare.”

Instead, he felt that “personal computers will be self-contained, complete, and essentially un-expandable. As we’ll see, this strategy not only makes it possible to write complete software but also makes the hardware much cheaper and producible.” Ultimately, Raskin believed, only a low-priced, low-complexity design could be manufactured in large enough numbers for a future world and be functional there.

The original Macintosh was designed as an embodiment of some of these concepts. Apple chairman Mike Markkula had a $500 (around $2,200 in 2025) game machine concept in mind called “Annie,” named after the Playboy comic character and intended as a low-end system paired with the Apple II—starting at around double that price at the time—and the higher-end Apple III and Lisa, which were then in development. Raskin wasn’t interested in developing a game console, but he did suggest to Markkula that a $500 computer could have more appeal, and he spent several months writing specifications and design documents for the proposed system before it was approved.

“My message,” wrote Raskin in The Book of Macintosh, “is that computers are easy to use, and useful in everyday life, and I want to see them out there, in people’s hands, and being used.” Finding female codenames sexist, he changed Annie to Macintosh after his favorite variety of apple, though using a variant spelling to avoid a lawsuit with the previously existing McIntosh Laboratory. (His attempt was ultimately for naught, as Apple later ended up having to license the trademark from the hi-fi audio manufacturer and then purchase it outright anyway.)

Raskin’s small team developed the hardware at Apple’s repurposed original Cupertino offices separate from the main campus. Initially, he put together a rough all-in-one concept, originally based on an Apple II (reportedly serial number 2) with a “jury-rigged” monitor. This evolved into a prototype chiefly engineered by Burrell Smith, selecting for its CPU the 8-bit Motorola 6809 as an upgrade from the Apple II’s MOS 6502 but still keeping costs low.

Similarly, a color display and a larger amount of RAM would have also added expense, so the prototype had a small 256×256 monochrome CRT driven by the ubiquitous Motorola 6845 CRTC, plus 64K of RAM. A battery and built-in printer were considered early on but ultimately rejected. The interface emphasized text and keyboard: There was no mouse, and the display was character-based instead of graphical.

Raskin was aware of early graphical user interfaces in development, particularly Xerox PARC’s, and he had even contributed to early design work on the Lisa, but he believed the mouse was inferior to trackballs and tablets and felt such pointing devices were more appropriate for graphics than text. Instead, function keys allowed the user to select built-in applications, and the machine could transparently shift between simple text entry or numeric evaluation in a “calculator-based language” depending on what the user was typing.

During the project’s development, Apple management had recurring concerns about its progress, and it was nearly canceled several times. This changed in late 1980 when Jobs was removed from the Lisa project by Apple President Mike Scott, after which Jobs moved to unilaterally take over the Macintosh, which at that time was otherwise considered a largely speculative affair.

Raskin initially believed the change would be positive, as Jobs stated he was only interested in developing the hardware, and his presence and interest quickly won the team new digs and resources. New team member Bud Tribble suggested that the Macintosh should be able to take advantage of the Lisa’s powerful graphics routines by migrating to its Motorola 68000, and by February 1981, Smith had duly redesigned the prototype for the more powerful CPU while maintaining its lower-cost 8-bit data bus.

This new prototype expanded graphics to 384×256, allowed the use of more RAM, and ran at 8 MHz, making the prototype noticeably faster than the 5 MHz Lisa yet substantially cheaper. However, by sharing so much of Lisa’s code, the interface practically demanded a pointing device, and the mouse was selected, even though Raskin had so carefully tried to avoid it. (Raskin later said he did prevail with Jobs on the mouse only having one button, which he believed would be easier for novices, though other Apple employees like Larry Tesler have contested his influence on this decision.)

As Jobs started to take over more and more portions of the project, the two men came into more frequent conflict, and Raskin eventually quit Apple for good in March 1982. The extent of Raskin’s residual impact on the Macintosh’s final form is often debated, but the resulting 1984 Macintosh 128K is clearly a different machine from what Raskin originally envisioned. Apple acknowledged Raskin’s contributions in 1987 by presenting him with one of the six “millionth” Macintoshes, which he auctioned off in 1999 along with the Apple II used in the original concept.

A Swyftly tilting project

After Raskin’s departure from Apple, he established Information Appliance, Inc. in Palo Alto to develop his original concept on his own terms. By this time, it was almost a foregone conclusion that microcomputers would sooner or later make their way to everyone; indeed, home computer pioneers like Jack Tramiel’s Commodore were already selling inexpensive “computers by the millions”—literally. With the technology now evolving at a rapid pace, Raskin wanted to concentrate more on the user interface and the concept’s built-in functionality, reviving the ideas he believed had become lost in the Macintosh’s transition. He christened it with a new name: Swyft.

In terms of industrial design, the Swyft owed a fair bit to Raskin’s prior prototype, as it was also an all-in-one machine using a built-in 9” monochrome CRT display. Unlike the Macintosh, however, the screen was set back at an angle and the keyboard was built in; it also had a small handle at the base of its sloped keyboard, making it at least notionally portable.

Disk technology had advanced, so it sported a 3.5-inch floppy drive (also like the Macintosh, albeit hidden behind a door), though initially the prototype used a less powerful 8-bit MOS 6502 CPU running at 2 MHz. The 6502’s 64K addressing limit and the additional memory banking logic it required eventually proved inadequate, and the CPU was changed during development to the Motorola 68008, a cheaper version of the 68000 with an 8-bit data bus and a maximum address space of 1MB. Raskin intended the Swyft to act like an always-on appliance, always ready and always instant, so it had a low-power mode and absolutely no power switch.

Instead of Pascal or assembly language, Swyft’s ROM operating system was primarily written in Forth. To reduce the size of the compiled code, developer Terry Holmes created a “tokenized” version that embedded smaller tokens instead of execution addresses into Forth word definitions, trading the overhead of an additional lookup step (which was written in hand-coded assembly and made very quick) for a smaller binary size. This modified dialect was called tForth (for “token,” or “Terry”). The operating system supported the hardware and the demands of the on-screen bitmapped display, which could handle true proportional text.
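As a toy illustration of what token threading buys (sketched here in Python for readability; the names are illustrative, not IAI’s actual code), a compiled word’s body holds one-byte token indices into a primitives table instead of full execution addresses, shrinking the image at the cost of one extra lookup per word executed:

```python
# Toy sketch of token threading, the idea behind tForth. A compiled word
# stores one-byte tokens rather than full execution addresses; executing
# a word costs one extra table lookup per token (the step tForth
# hand-coded in assembly to keep it quick).
stack = []

def _add():   b, a = stack.pop(), stack.pop(); stack.append(a + b)
def _dup():   stack.append(stack[-1])
def _print(): print(stack.pop())

primitives = [_add, _dup, _print]   # a token is an index into this table
TOK_ADD, TOK_DUP, TOK_PRINT = 0, 1, 2

# ": DOUBLE DUP + ;" compiles to two one-byte tokens instead of two
# multi-byte addresses -- the whole point of the smaller binary.
words = {"DOUBLE": bytes([TOK_DUP, TOK_ADD])}

def execute(body: bytes) -> None:
    for token in body:              # the runtime lookup step
        primitives[token]()

stack.append(21)
execute(words["DOUBLE"])            # stack is now [42]
execute(bytes([TOK_PRINT]))         # prints 42
```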

Swyft’s user interface was also radically different and was based on a “document” metaphor. Most computers of that time and today, mobile devices included, divide functionality among separate applications that access files. Raskin believed this approach was excessive and burdensome, writing in 1986 that “[b]y choosing to focus on computers rather than the tasks we wanted done, we inherited much of the baggage that had accumulated around earlier generations of computers. It is more a matter of style and operating systems that need elaborate user interfaces to support huge application programs.”

He expanded on this point in his 2000 book The Humane Interface: “[Y]ou start in the generating application. Your first step is to get to the desktop. You must also know which icons correspond to the desired documents, and you or someone else had to have gone through the steps of naming those documents. You will also have to know in which folder they are stored.”

Raskin thus conceived of a unified workspace in which everything was stored, accessed through one single interface appearing to the user as a text editor editing one single massive document. The editor was intelligent and could handle different types of text according to its context, and the user could subdivide the large document workspace into multiple subdocuments, all kept together. (This even included Forth code, which the user could write and evaluate in place to expand the system as they wished.) Data received from the serial port was automatically “typed” into the same document, and any or all text could be sent over the serial port or to a printer. Instead of function keys, a USE FRONT key acted like an Option or Command key to access special features.

Because everything was kept in one place, when the user saved the system state to a floppy disk, their entire workspace was frozen and stored in its entirety. Swyft additionally tagged the disk with a unique identifier so it knew when a disk was changed. When that disk was reinserted and resumed, the user picked up exactly where they left off, at exactly the same point, with everything they had been working on. Since everything was kept together and loaded en masse, there was no need for a filesystem.
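A rough sketch of that persistence model (hypothetical names, Python for illustration): the entire workspace is frozen to disk as one image, and the unique tag written at first save is how the machine later recognizes that a different disk has been inserted.

```python
# Sketch of whole-workspace persistence with no filesystem: one disk,
# one frozen workspace image, plus a unique tag to tell disks apart.
# All names are hypothetical; the Swyft's actual on-disk format differed.
import json
import uuid

def save_workspace(disk: dict, text: str, cursor: int) -> None:
    disk.setdefault("id", str(uuid.uuid4()))   # tag the disk on first save
    disk["image"] = json.dumps({"text": text, "cursor": cursor})

def resume_workspace(disk: dict, last_seen_id: str | None):
    state = json.loads(disk["image"])
    changed = disk["id"] != last_seen_id       # a different disk came back?
    # The user resumes exactly where they left off, cursor position and all.
    return state["text"], state["cursor"], disk["id"], changed
```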

Swyft also lacked a mouse—or indeed any conventional means of moving the cursor around. To navigate through the document, Swyft instead had LEAP keys, which when pressed alone would “creep” forward or backward by single characters. But when held down, you could type a string of characters and release the key, and the system would search forward or backward for that string and highlight it, jumping entire pages and subdocuments if necessary.

If you knew what was in a particular subdocument, you could find it or just LEAP forward to the next document marker to scan through what was there. Additionally, by leaping to one place, leaping again to another, and then pressing both LEAP keys together, you could select text as well. The steps to send, delete, change, or copy anything in the document are the same for everything in the document. “So the apparent simplicity [of other systems] is arrived at only after considerable work has been done and the user has shouldered a number of mental burdens,” wrote Raskin, adding, “the conceptual simplicity of the methods outlined here would be preferable. In most cases, the work required is also far less.”
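Mechanically, leaping is an incremental search anchored to the cursor, with selection defined by two leap points. A minimal sketch of the behavior (simplified from the SwyftCard’s actual rules, with illustrative names):

```python
# Minimal sketch of LEAP-style navigation over one flat workspace: hold
# LEAP, type a pattern, and the cursor jumps to the next (or previous)
# occurrence; two leap targets define a selection.

def leap(text: str, cursor: int, pattern: str, forward: bool = True) -> int:
    """Return the new cursor position, or the old one if nothing matches."""
    if forward:
        hit = text.find(pattern, cursor + 1)
    else:
        hit = text.rfind(pattern, 0, cursor)
    return hit if hit != -1 else cursor

def select_between(a: int, b: int, pattern_len: int) -> tuple[int, int]:
    # Pressing both LEAP keys selects the span between the two leap points.
    lo, hi = sorted((a, b))
    return lo, hi + pattern_len

doc = "notes on Gemini\n==== \nshopping list: film, tape"
here = leap(doc, 0, "shopping")        # leap forward to "shopping"
there = leap(doc, here, "tape")        # leap again to "tape"
start, end = select_between(here, there, len("tape"))
print(doc[start:end])                  # "shopping list: film, tape"
```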

Get something on sale faster, said Tom Swyftly

While around 60 Swyft prototypes of varying functionality were eventually made, IAI’s backers balked at the several million dollars additionally required to launch the product under the company’s own name. To increase their chances of a successful return on investment, they instead demanded a licensee for the design, one that would insulate the small company from the costs of manufacturing and sales. They found it in Japanese manufacturer Canon, which had expanded from its core optical and imaging lines into microcomputers but had spent years unsuccessfully trying to crack the market. However, possibly because of its unusual interface, Canon unexpectedly put its electronic typewriter division in charge of the project, and the IAI team began work with Canon’s engineers to refine the hardware for mass production.

SwyftCard advertisement in Byte, October 1985, with Jef Raskin and Steve Wozniak.

In the meantime, IAI investors prevailed upon management to find a way to release some of the Swyft technology early in a less expensive incarnation. This concept eventually turned into an expansion card for the Apple IIe. Raskin’s team was able to adapt some of the code written for the Swyft to the new device, but because the IIe is also a 6502-based system and is itself limited to a 64K address space, it required its own onboard memory banking hardware as well. With the card installed, the IIe booted into a scaled-down Swyft environment using its onboard 16K EPROM, with the option of disabling it temporarily to boot regular Apple software. Unlike the original Swyft, the Apple II SwyftCard does not use the bitmap display and appears strictly in 80-column non-proportional text. The SwyftCard went on sale in 1985 for $89.95, approximately $270 in 2025 dollars.

The initial SwyftCard tutorial page. Credit: Cameron Kaiser

The SwyftCard’s unified workspace can be subdivided into various “subdocuments,” which appear as hard page breaks with equals signs. Although up to 200 pages were supported, in practice, the available workspace limits you to about 15 or 20, “densely typed.” It came with a built-in tutorial which began with orienting you to the LEAP keys (i.e., the two Apple keys) and how to navigate: hold one of them down and type the text to leap to (or equals signs to jump to the next subdocument), or tap them repeatedly to slowly “creep.”

The two-tone cursor. Credit: Cameron Kaiser

Swyft and the SwyftCard implement a two-phased cursor, which the SwyftCard calls either “wide” or “narrow.” By default, the cursor is “narrow,” alternating between a solid and a partially filled block. As you type, the cursor splits into a “wide” form—any text shown in inverse, usually the last character you entered, is what is removed when you press DELETE, with the blinking portion after the inverse text indicating the insertion point. When you creep or leap, the cursor merges back into the “narrow” form. When narrow, DELETE deletes right as a true delete, instead of a backspace. If you selected text by pressing both LEAP keys together, those become highlighted in inverse and can be cut and pasted.

The SwyftCard software defines a USE FRONT key (i.e., the Control key) as well. This was most noticeable as a quick key combination for saving your work to disk: the entire workspace was saved in one go, with no filenames (i.e., one disk equated to one workspace), though USE FRONT had many other such functions within the program. Since it could be tricky to juggle floppies without overwriting them, the software also took pains to ensure each formatted disk was tagged with a unique identifier to avoid accidental erasure. It also implemented serial communications such that you could dial up a remote system and use USE FRONT-SEND to transmit text, or be dialed into and receive text into the workspace automatically.

SwyftCards didn’t sell in massive numbers, but their users loved them, particularly the speed and flexibility the system afforded. David Thornburg (the designer of the KoalaPad tablet), writing for A+ in November 1985, said it “accomplished something that I never knew was possible. It not only outperforms any Apple II word-processing system, but it lets the Apple IIe outperform the Macintosh… Will Rogers was right: it does take genius to make things simple.”

The Swyft and SwyftCard, however, were as much philosophy as interface; they represented Raskin’s clear desire to “abolish the application.” Rather than starting a potentially different interface to do a particular task, the task should be part of the machine’s standard interface and be launched by direct command. Similarly, even within the single user interface, there should be no “modes” and no switching between different minor behaviors: the interface ought to follow the same rules as much of the time as possible.

“Modes are a significant source of errors, confusion, unnecessary restrictions, and complexity in interfaces,” Raskin wrote in The Humane Interface, illustrating it with the example of “at one moment, tapping Return inserts a return character into the text, whereas at another time, tapping Return causes the text typed immediately prior to that tap to be executed as a command.”

Even a device as simple as a push-button flashlight is modal, argued Raskin, because “[i]f you do not know the present state of the flashlight, you cannot predict what a press of the flashlight’s button will do.” Even if an individual application itself is notionally modeless, Raskin presented a real-world example: Command-N is commonly used to open a new document, but AOL’s client used Command-M to open a new email message. The situation “that gives rise to a mode in this example consists of having a particular application active. The problem occurs when users employ the Command-N command habitually,” he wrote.

Ultimately, wrote Raskin, “[a]n interface is humane if it is responsive to human needs and considerate of human frailties.” In this case, the particular frailty Raskin concentrated on is the natural unconscious human tendency to form habitual behaviors. Because such habits are hard to break, command actions and gestures in an interface should be consistent enough that their becoming habitual makes them more effective, allowing a user to “do the task without having to think about it… We must design interfaces that (1) deliberately take advantage of the human trait of habit development and (2) allow users to develop habits that smooth the flow of their work.” If a task is always accomplished the same way, he asserted, then when the user has acquired the habit of doing so, they will have simultaneously mastered that task.

The Canon Cat’s one and only life

Raskin’s next computer preserved many such ideas from the Swyft, but it only did so in spite of the demands of Canon management, who forced multiple changes during development. Although the original Swyft (though not the SwyftCard) had true proportional text and at least the potential for user-created graphics, Canon’s electronic typewriter division was then in charge of the project and insisted on non-proportional fixed-width text and no graphics, because that’s all the official daisywheel printer could generate—even though the system’s bitmapped display remained. (A laser printer option was later added but was nevertheless still limited to text.)

Raskin wanted to use a Mac-like floppy drive that could automatically detect floppy disk insertion, but Canon required the system to use its own floppy drives, which didn’t. Not every change during development was negative, though. Much of the more complicated Swyft logic board was consolidated into smaller custom gate array chips for mass production, along with the use of a regular 68000 instead of the more limited 68008, which was also cheaper in volume despite only being run at 5 MHz.

However, against his repeated demands to the contrary and lengthy explanations of the rationale, Raskin was dismayed to find the device was nevertheless fitted with a power switch; Canon’s engineering staff said they simply thought an error had been made and added it, and by then, it was too late in development to remove it.

Canon management also didn’t understand the new machine’s design philosophy, treating it as an overgrown word processor (dubbed a “WORK Processor [sic]”) instead of the general-purpose computer Raskin intended, and required its programmability in Forth to be removed. This was unpopular with Raskin’s team, so rather than remove it completely, they simply hid it behind an unlikely series of keystrokes and excised it from the manual. On the other hand, because Canon considered it an overgrown word processor, it seemed entirely consistent to keep the Swyft’s primary interface intact otherwise, including its telecommunication features. The new system also got a new name: the Cat.

Canon Cat advertising brochure.

Thus was released the Canon Cat, announced in July 1987, for $1,495 (about $4,150 in 2025 dollars). The released version came with 256K of RAM, with sockets to add an optional 128K more for 384K total, shared between the video circuitry, Forth dictionary, settings, and document text, all of which could be stored to the 3.5-inch floppy. (Another row of solder pads could potentially hold yet another 128K, but no shipping Cat ever populated it.)

Its 256K of system ROM contained the entirety of the editor and tForth runtime, plus built-in help screens, all immediately available as soon as you turned it on. An additional 128K ROM provided a 90,000-word dictionary to which the user could add words that were also automatically saved to the same disk. The system and dictionary ROMs came in versions for US and UK English, French, and German.

The Canon Cat. Credit: Cameron Kaiser

Like the Swyft it was based on, the Cat was an all-in-one system. The 9-inch monochrome CRT was retained, but the floppy drive no longer had a door, and the keyboard was extended with several special keys. In particular, the LEAP keys, as befitting their central importance, were given a row to themselves in an eye-catching shade of pink.

Function key combinations with USE FRONT are printed on the front of the keycaps. The Cat provided both a 1200 baud modem and a 9600 bps RS-232 connector for serial data; it could dial out or be dialed into to upload text. Text transmitted to the Cat via the serial port was inserted into the document as if it had been typed in at the console. A Centronics-style printer port connected Canon’s official printer options, though many printers were compatible.

The Cat can be (imperfectly) emulated with MAME; the Internet Archive has a preconfigured Wasm version with Canon ROMs that you can also run in your browser. Note that the current MAME driver, as of this writing, will freeze if the emulated Cat makes a beep, and the ROM’s default keyboard layout assumes you’re using a real Cat, not a PC or Mac. These minor issues can be worked around in the emulated Cat’s setup menu by setting the problem signal to Flash (without a beep) and the keyboard to ASCII. The screenshots here are taken from MAME and adjusted to resemble the Cat’s display aspect ratio.

The Swyft and SwyftCard’s editing paradigm transferred to the Canon Cat nearly exactly. Preserved is the “wide” and “narrow” cursor, showing both the deletion range and the insertion point, as well as the use of the LEAP keys to creep, search, and select text ranges. (In MAME, the emulated LEAP keys are typically mapped to both Alt or Option keys.) SHIFT-LEAP can also be used to scroll the screen line by line, tapping LEAP repeatedly with SHIFT down to continue motion, and the Cat additionally implements a single level of undo with a dedicated UNDO key. The USE FRONT key also persisted, usually mapped in MAME to the Control key(s). Text could be bolded or underlined.

Similarly, the Cat inherits the same “multiple document interface” as the Swyfts: the workspace can be arbitrarily divided into documents, here using the DOCUMENT/PAGE key (mapped usually to Page Down in MAME), and the next or previous document can be LEAPed to by using the DOCUMENT/PAGE key as the target.

However, the Cat has an expanded interface compared to the SwyftCard, with a ruler (in character positions) at the bottom, text and keyboard modes, and open areas for on-screen indicators when disk access or computations are in progress.

Calculating data with the Canon Cat. Credit: Cameron Kaiser

Although Canon had mandated that the Cat’s programmability be suppressed, the IAI team nevertheless maintained the ability to compute expressions, which Canon permitted as an extension of the editor metaphor. Simple arithmetic such as 355/113 could be calculated in place by selecting the text and pressing USE FRONT-CALC (Control-G), which yields the answer with a dotted underline to indicate the result of a computation. (Here, the answer is computed to the default two decimal digits of precision, which is configurable.) Pressing USE FRONT-CALC within that answer reopens the expression to change it.

Computations weren’t merely limited to simple figures, though; the Cat also allowed users to store the result of a computation to a variable and reference that variable in other computations. If the variables underlying a particular computation were changed, its result would automatically update.

A spreadsheet built with expressions on the Cat. Credit: Cameron Kaiser

This capability, along with the Cat’s non-proportional font, made it possible to construct simple spreadsheets right in the editor using nothing more than expressions and the TAB key to create rows and columns. Cells can be referred to by expressions in other cells using a special function use() with relative coordinates. Constant values in “cells” can simply be entered as plain text; if recalculation is necessary, USE FRONT-CALC will figure it out. The Cat could also maintain and sort simple line lists, which, when combined with the LEARN macro facility, could be used to automate common tasks like mail merges.
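The mechanics amount to a tiny dependency-driven calculator embedded in the editor. Here is a minimal sketch of the idea, with Python’s eval() standing in for the Cat’s expression evaluator and every name purely illustrative (the Cat implemented this in tForth, not Python):

```python
# Minimal sketch of "calculation in text": named results are derived from
# expressions over variables, and changing an input re-derives dependents.
class Workspace:
    def __init__(self) -> None:
        self.values: dict[str, float] = {}   # variables and computed results
        self.formulas: dict[str, str] = {}   # result name -> expression text

    def define(self, name: str, expression: str) -> float:
        self.formulas[name] = expression
        return self.recalc(name)

    def recalc(self, name: str) -> float:
        # eval() stands in for USE FRONT-CALC's expression evaluator.
        value = eval(self.formulas[name], {"__builtins__": {}}, self.values)
        self.values[name] = value
        return value

ws = Workspace()
ws.values["price"], ws.values["qty"] = 1495.0, 3
print(ws.define("subtotal", "price * qty"))       # 4485.0
print(ws.define("total", "subtotal * 1.08"))      # ~4843.8
ws.values["qty"] = 5                              # change an input...
print(ws.recalc("subtotal"), ws.recalc("total"))  # ...dependents re-derive
```

On the Cat, the dependency tracking was automatic; in this sketch, the caller re-derives results in dependency order.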

The Canon Cat’s built-in on-line help facility. Credit: Cameron Kaiser

The Cat also maintained an extensive set of help screens built into ROM that the SwyftCard, for capacity reasons, was forced to load from floppy disk. Almost every built-in function had a documentation screen accessible from USE FRONT-HELP (Control-N): keep USE FRONT down, release the N key, and then press another key to learn about it. When the USE FRONT key is also released, the Cat instantly returns to the editor. Similarly, if the Cat beeped to indicate an error, pressing USE FRONT-HELP could also explain why. Errors didn’t trigger a modal dialogue or lock out system functions; you could always continue.

Internally, the current workspace contained not only the visible text documents but also any custom words the user added to the dictionary and any additional tForth words defined in memory. Ordinarily, there wouldn’t be any, given that Canon didn’t officially permit the user to program their own software, but there were a very small number of software applications Canon itself distributed on floppy disk: CATFORM, which allowed the user to create, fill out, and print form templates, and CATFILE, Canon’s official mailing list application. Dealers were instructed to provide new users with copies, though the Cat here didn’t come with them. Dealers also had special floppies of their own for in-store demos and customization.

The backdoor to Canon Cat tForth. Credit: Cameron Kaiser

Still, IAI’s back door to Forth quietly shipped in every Cat, and the clue was a curious omission in the online help: USE FRONT-ANSWER. This otherwise unexplained and unused key combination was the gateway. If you entered the string Enable Forth Language, highlighted it, and evaluated it with USE FRONT-ANSWER (not CALC; usually Control-Backspace in MAME), you’d get a Forth ok prompt, and the system was now yours. Reset the Cat or type re to return to the editor.

With Forth enabled, you could either enter code at the prompt, or do so within the editor and press USE FRONT-ANSWER to evaluate it, putting any output into the document just like Applesoft BASIC did on the SwyftCard. Through the Forth interface it was possible to define your own words, saved as part of the workspace, or even hack in 68000 machine code and completely take control of the machine. Extensive documentation on the Cat’s internals eventually surfaced, but no third-party software was ever written for the platform during its commercial existence.

As it happened, whatever commercial existence the Cat did have turned out to be brief and unprofitable anyway. It sold badly, blamed in large part on Canon’s poor marketing, which positioned it as an expensive dedicated word processor in an era where general-purpose PCs and, yes, Macintoshes were getting cheaper and could do more.

Various apocryphal stories circulate about why the Cat was killed—one theory cites internal competition between the typewriter and computer divisions; another holds that Jobs demanded the Cat be killed if Canon wanted a piece of his new venture, NeXT (and Owen Linzmayer reports that Canon did indeed buy a 16 percent stake in 1989)—but regardless of the reason, it lasted barely six months on the market before it was canceled. The 1987 stock market crash was a further blow to the small company and an additional strain on its finances.

Despite the Cat’s demise, Raskin’s team at IAI attempted to move forward with a successor machine, a portable laptop that would have reportedly weighed just four pounds. The new laptop, christened the Swyft III, used a ROM-based operating system based on the Cat’s but with a newer, more sophisticated “leaping” technology called Hyperleap. At $999, it was to include a 640×200 supertwist LCD, a 2400 bps modem and 512K of RAM (a smaller $799 Swyft I would have had less memory and no modem), as well as an external floppy drive and an interchange facility for file transfers with PCs and Macs.

As Raskin had originally intended, the device achieved its claimed six-hour battery life (on NiCad cells, or longer with alkaline) primarily by aggressively sleeping when idle but immediately resuming full functionality when a key was pressed. Only two prototypes were ever made before IAI’s investors, considering the company risky after the Cat’s market failure and with little money coming in, finally pulled the plug and caused the company to shut down in 1992. Raskin retained patents on the “leaping” method and the Swyft/Cat’s means of saving and restoring from disk, but their subsequent licensees did little with the technology, and the patents have since lapsed.

If you can’t beat ’em, write software

The Cat is probably the best known of Raskin’s designs (notwithstanding the Macintosh, for reasons discussed earlier), especially as Raskin never led the development of another computer again. Nevertheless, his interface ideas remained influential, and after IAI’s closing, he continued as an author and frequent consultant and reviewer for various consumer products. These observations and others were consolidated into his later book The Humane Interface, from which this article has already liberally quoted. On the page before the table of contents, the book observes that “[w]e are oppressed by our electronic servants. This book is dedicated to our liberation.”

In The Humane Interface, Raskin not only discusses concepts such as leaping and habitual command behaviors but also means of quantitative assessment. One of the more well-known is Fitts’ law, named for psychologist Paul Fitts Jr., which predicts that the time needed to quickly move to a target area is correlated with both the size of the target and its distance from the starting position.

This has been most famously used to justify the greater utility of a global menu bar completely occupying the edge of a screen (such as in macOS) because the mouse pointer stops at the edge, making the menu bar effectively infinitely large and therefore easy to “hit.” Similarly, Hick’s law (or the Hick-Hyman law, named for psychologists William Edmund Hick and Ray Hyman) asserts that increasing the number of choices a user is presented with will increase their decision time logarithmically. Given experimental constants, both laws can predict how long a user will need to hit a target or make a choice.
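Both laws reduce to one-line formulas. A small sketch with illustrative constants (the real a and b terms must be measured experimentally for a given device and user population):

```python
# Fitts' law (Shannon form) and the Hick-Hyman law, with made-up constants.
from math import log2

def fitts_time(distance: float, width: float,
               a: float = 0.1, b: float = 0.15) -> float:
    """T = a + b * log2(D/W + 1): seconds to hit a target of width W at distance D."""
    return a + b * log2(distance / width + 1)

def hick_time(n_choices: int, b: float = 0.2) -> float:
    """T = b * log2(n + 1): decision time (s) among n equally likely choices."""
    return b * log2(n_choices + 1)

# A menu bar at the screen edge cannot be overshot, so its effective size
# is huge and the pointing time drops accordingly.
print(fitts_time(distance=400, width=5))    # small floating target: ~1.05 s
print(fitts_time(distance=400, width=200))  # edge-of-screen target: ~0.34 s
print(hick_time(8))                         # picking among 8 items: ~0.63 s
```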

Notably, none of Raskin’s systems (at least as designed) superficially depended on either law because they had no explicit pointing device and no menus to select from. A more meaningful metric he considers is the Card-Moran-Newell GOMS model (“goals, operators, methods, and selection rules”) and how it applies to user motion. While the time needed to mentally prepare, press a key, point to a particular position on the display, or move from input device to input device (say, mouse to-and-from keyboard) will vary from person to person, most users will have similar times, and general heuristics exist (e.g., nonsense is easier to type than structured data).

However, the length of time the computer takes to respond is within the designer’s control, and its perception can be reduced by giving prompt and accurate feedback, even if the operation’s actual execution time is longer. Similarly, if we reduce keystrokes or reduce having to move from mouse to keyboard for a given task, the total time to perform that task becomes less for any user.
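The keystroke-level variant of GOMS makes this concrete by tallying per-operator time estimates. A sketch using the classic published Card-Moran-Newell values (real analyses calibrate these for a particular user and device; none of this measures any specific interface discussed here):

```python
# Keystroke-level model (KLM) tally with the classic operator estimates,
# in seconds. These are textbook averages, not fresh measurements.
KLM = {
    "K": 0.2,   # keystroke (skilled typist)
    "P": 1.1,   # point with a mouse
    "H": 0.4,   # move hands between keyboard and mouse ("homing")
    "M": 1.35,  # mental preparation
}

def task_time(ops: str) -> float:
    return sum(KLM[op] for op in ops)

# Saving via a pulldown menu (think, home to mouse, point, click, point,
# click) versus a habitual two-key chord on the keyboard:
print(task_time("MHPKPK"))  # menu route: ~4.35 s
print(task_time("MKK"))     # keyboard chord: ~1.75 s
```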

Although these timings can help to determine experimentally which interface is better for a given task, Raskin points out we can use the same principles to also determine the ideal efficiency of such interfaces. An interface that gives the user no choices but still must be interacted with is maximally inefficient because the user must do some non-zero amount of work to communicate absolutely no information.

A classic example might be a modal alert box with only one button—asynchronous or transparent notifications could be better used instead. Likewise, an interface with multiple choices will nevertheless become less efficient if certain choices are harder or more improbable to access, such as buttons or click areas being smaller than others, or a particular choice needing more typing to select than other choices.

Raskin’s book also considers alternative means of navigation, pointing out that “natural” and “intuitive” are not necessarily synonyms for “easy to use.” (A mouse can be easy to use, but it’s not necessarily natural or intuitive. Recall Scotty in Star Trek IV picking up the Macintosh Plus mouse and talking to it instead of trying to move it, and then eventually having to use the keyboard. Raskin cites this very scene, in fact.)

Besides leaping, Raskin also presents the idea of a zooming user interface (ZUI), giving the user an easier way not only to reach a goal but also to see themselves in relation to that goal and to the entire workspace. If you see what you want, zoom in; if you’ve lost your place, zoom out. One could access a filesystem this way, or a collection of applications or associated websites. Raskin was hardly the first to propose the ZUI—Ivan Sutherland built primitive zooming graphics into his 1962 Sketchpad, and MIT’s Spatial Dataland and Xerox PARC’s Smalltalk with its “infinite” desktops explored similar territory—but he recognized its unique ability to keep a user mentally grounded while navigating large structures that would otherwise become unwieldy. This, he asserts, made it more humane.

To crystallize these concepts, rather than create another new computer, Raskin instead started work on a software package with a team that included his son, Aza; it was initially called The Humane Environment (THE). THE’s HumaneEditorProject was first unveiled to the world on Christmas Eve 2002, though initially only as a SourceForge CVS tree, since it was considered very unfinished. The original early builds of the Humane Editor were open source and intended to run on classic Mac OS 9, though it will also run under emulators such as QEMU or SheepShaver, or in the Classic environment under Mac OS X Tiger and earlier.

Default document. Credit: Cameron Kaiser

As before, the Humane Editor uses a large central workspace subdivided into individual documents, here separated by backtick characters. Our familiar two-tone cursor is also maintained. However, although boldface, italic, underlining, and multiple font sizes were supported, colors and font sizes were still selected from traditional Mac pulldown menus.

Leaping with the SHIFT and angle bracket keys. Credit: Cameron Kaiser

Leaping, here with a trademark, is again front and center in THE. However, instead of dedicated keys, leaping is merely a part of THE’s internal command line, termed the Humane Quasimode, where other commands can be sent. Notice that the prompt is displayed as translucent text over the work area.

The Deletion Document. Credit: Cameron Kaiser

When text was deleted, either by backspacing over it or pressing DELETE with a region selected, it went to an automatically created and maintained “DELETION DOCUMENT” from which it could be rescued. Effectively, this made the workspace its own yank buffer: undoing any destructive editing operation became merely another cut and paste. (Deleting from the deletion document just deleted.)

Command listing. Credit: Cameron Kaiser

A full list of commands accepted by the Quasimode was available by typing COMMANDS, which emitted the list into the document. Commands were based on precompiled Python files, which the user could edit or add to, and arbitrary Python expressions and code could also be inserted and run directly from the document workspace.

THE was a functioning editor—incomplete, but capable enough that its own documentation was written with it. Still, the intention was never to make something that was just an editor, and this aspiration became more obvious as development progressed. To make the software available on more platforms, development subsequently moved to wxPython in 2004, and later to Python with Pygame handling the screen display. The main development platform switched at the same time to Windows, and a Windows demo version of this release was made, although Mac OS X and Linux could still theoretically run it if you installed the prerequisites.

With the establishment of the Raskin Center for Humane Interfaces (RCHI), THE’s development continued under a new name, Archy. (This Wayback Machine link is the last version of the site before it was defaced and eventually domain-parked.) The new name was both a pun on “RCHI” and a reference to the Don Marquis characters, Archy and Mehitabel, specifically Archy the typewriting cockroach, whose alleged writings largely lack capital letters or punctuation because he couldn’t hit the SHIFT key at the same time. Archy’s final release shown here was the unfinished build 124, dated December 15, 2005.

The initial Archy window. Credit: Cameron Kaiser

Archy had come a long way from the original Mac THE, finally including the same sort of online help tutorial that the SwyftCard and Cat featured. It continued the use of a dedicated key to enter commands—in this case, CAPS LOCK. Hold it down, type the command, and then release it.

Leaping in Archy. Credit: Cameron Kaiser

Likewise, dedicated LEAP keys returned in Archy, in this case the left and right Alt keys, and as before, selection was done by pressing both LEAP keys. A key advancement here: any text that would be selected, if you chose to select it, was highlighted beforehand in a light shade of yellow, so you no longer had to remember where your ranges were.

A list of commands in Archy. Credit: Cameron Kaiser

As before, the COMMANDS verb gave you a list of commands. While THE’s command suite was almost entirely specific to an editor application, Archy’s aspirations as a more complete all-purpose environment were evident. In particular, alongside many of the same commands we saw on the Mac, there were now special Internet-oriented commands like EMAIL and GOOGLE. Commands were now just small documents containing Python, embedded in the same workspace—no more separate files to corral. You could even change the built-in commands, including LEAP itself.

As you might expect, besides the deletion document (now just “DELETIONS”), things like your email were also now subdocuments, and your email server settings were a subdocument, too. While this was never said explicitly, a logical extension of the metaphor would have been to subsume webpage contents as in-place parts of the workspace as well—your history, bookmarks, and even the pages themselves could be subdocuments of their own, restored immediately and ready for access when entering Archy. Each time you exited, the entire workspace was saved out into a versioned file, so you could even go back in time to a recent backup if you blew it.

Raskin’s legacy

Raskin was found to have pancreatic cancer in December 2004 and, after transitioning the project to Archy the following January, died shortly afterward on February 26, 2005. In Raskin’s New York Times obituary, Apple software designer Bill Atkinson lauded his work, saying, “He wanted to make them [computers] more usable and friendly to people who weren’t geeks.” Technology journalist Steven Levy agreed, adding that “[h]e really spent his life urging a degree of simplicity where computers would be not only easy to use but delightful.” He left behind his wife, Linda Blum, and his three children, Aza, Aviva, and Aenea.

Archy was the last project Raskin was directly involved in, and to date it remains unfinished. Some work continued on the environment after his death—this final release came out in December 2005, nearly 10 months later—but the project was ultimately abandoned, and many planned innovations, such as a ZUI of its own, were never fully developed beyond a separate proof of concept.

Similarly, many of Raskin’s more distinctive innovations have yet to reappear in modern mainstream interfaces. RCHI closed as well and was succeeded in spirit by the Chicago-based Humanized, co-founded by his son Aza. Humanized reworked ideas from Archy into Enso, which expanded the CAPS LOCK-as-command interface with a variety of verbs such as OPEN (to start applications) and DEFINE (to get the dictionary definition of a word), and the ability to perform direct web searches.

By using a system-wide translucent overlay similar to Archy’s and THE’s, the program was intended to minimize switching back and forth between multiple applications to complete a task. In 2008, Enso was made free for download, and Humanized’s staff joined Mozilla, where the concept became a Firefox browser extension called Ubiquity, in which web-specific command verbs could be written in JavaScript and executed in an opaque pop-up window activated by a hotkey combination. However, the project was placed on “indefinite hiatus” in 2009, never revisited, and no longer works with current versions of the browser.

Using Raskin 2 on a MacBook Air to browse images. Credit: Cameron Kaiser

The idea of a single workspace that you “leap through” also never resurfaced. Likewise, although ZUI-like animations have appeared more or less as eye candy in environments such as iOS and GNOME, a pervasive ZUI has yet to appear in (or as) any major modern desktop environment. That said, the idea is visually appealing, and some specific applications have made heavier use of the concept.

Microsoft’s 2007 Deepfish project for Windows Mobile conceived of visually shrunken webpages for mobile devices that users could zoom into, but it was dependent on a central server and had high bandwidth requirements, and Microsoft canceled it in 2008. A Swiss company named Raskin Software LLC (apparently no official relation) offers a macOS ZUI file and media browser called Raskin, which has free and paid tiers; on other platforms, the free open-source Eagle Mode project offers a similar file manager with media previews, but also a chess application, a fractal viewer, and even a Linux kernel configuration tool.

A2 desktop with installer, calendar and clock. Credit: LoganJustice via Wikimedia (CC0)

Perhaps the most complete example of an operating environment built around a ZUI is A2, a branch of the ETH Zürich Oberon System. The Oberon System, based around the Oberon programming language descended from Modula-2 and Pascal, was already notable for its unique paneled text user interface, in which text—including text you type—is clickable; Native Oberon can even be booted directly as an operating system by itself.

In 2002, A2 was spun off, initially as the Active Object System, using an updated dialect called Active Oberon that supports improved scheduling, exception handling, and object-oriented programming, with processes and threads able to run within an object’s context to make that object “active.” While A2 keeps the Oberon System’s clickable-text metaphor, windows and gadgets can also be zoomed in and out of on an infinitely scrolling desktop, which is best appreciated in action. It is still being developed, and older live CDs are still available. However, the Oberon System has never achieved general market awareness beyond its small niche, and its forks even less so, leaving A2 a practical curiosity for most users.

This isn’t to say that Raskin’s quest for a truly humane computer has completely come to naught. Unfortunately, in some respects we’re truly backsliding, with opaque operating systems that can limit your application choices or your ability to alter or customize them. And despite very public changes in skinning and aesthetics, the key ways we interact with our computers have not substantially changed since the wide deployment of the Xerox PARC-derived “WIMP” paradigm (windows, icons, menus, and pointers)—ironically promoted most visibly by the 1984 post-Raskin Macintosh.

A good interface unavoidably requires work and study, two things that take too long in today’s fast-paced product cycle. Raskin’s emphasis on built-in programmability, meanwhile, rings a bit quaint in an era when many home users’ only computer may be a tablet. By his standards, there is little humane about today’s computers, and they may well be less humane than yesterday’s.

Nevertheless, while Raskin’s ideas may have few present-day implementations, that doesn’t mean the spirit in which they were proposed is dead. At the very least, greater consideration is given today to the traditional WIMP paradigm’s deficiencies, particularly with multiple applications and windows, and to how it can poorly serve some classes of users, such as those requiring assistive technology. That said, I hold only guarded optimism about how much change we’ll see in mainstream systems, and Raskin’s editor-centric, application-less interface grows more alien the longer the current app ecosystem reigns dominant.

But as cul-de-sacs go, you can pick far worse places to get lost in than his, and it might even make it out to the main street someday. Until then, at least, you can always still visit—in an upcoming article, we’ll show you how.

Selected bibliography

Folklore.org

CanonCat.net

Linzmayer, Owen W. (2004). Apple Confidential 2.0. No Starch Press, San Francisco, CA.

Raskin, Jef (2000). The Humane Interface: New Directions for Designing Interactive Systems. Addison-Wesley, Boston, MA.

Making the Macintosh: Technology and Culture in Silicon Valley. https://web.stanford.edu/dept/SUL/sites/mac/earlymac.html

Canon’s Cat Computer: The Real Macintosh. https://www.landsnail.com/apple/local/cat/canon.html

Prototype to the Canon Cat: the “Swyft.” https://forum.vcfed.org/index.php?threads/prototype-to-the-canon-cat-the-swyft.12225/

Apple //e and Cat. http://www.regnirps.com/Apple6502stuff/apple_iie_cat.htm

Jef Raskin’s cul-de-sac and the quest for the humane computer Read More »

court-rejects-verizon-claim-that-selling-location-data-without-consent-is-legal

Court rejects Verizon claim that selling location data without consent is legal

Instead of providing notice to customers and obtaining or verifying customer consent itself, Verizon “largely delegated those functions via contract,” the court said. This system and its shortcomings were revealed in 2018 when “the New York Times published an article reporting security breaches involving Verizon’s (and other major carriers’) location-based services program,” the court said.

Securus Technologies, a provider of communications services to correctional facilities, “was misusing the program to enable law enforcement officers to access location data without customers’ knowledge or consent, so long as the officers uploaded a warrant or some other legal authorization,” the ruling said. A Missouri sheriff “was able to access customer data with no legal process at all” because Securus did not review the documents that law enforcement uploaded.

Verizon claimed that Section 222 of the Communications Act covers only call-location data, as opposed to device location data. The court disagreed, pointing to the law’s text stating that customer proprietary network information includes data that is related to the location of a telecommunications service, and which is made available to the carrier “solely by virtue of the carrier-customer relationship.”

“Device-location data comfortably satisfies both conditions,” the court said.

Verizon chose to pay fine, giving up right to jury trial

As for Verizon’s claim that the FCC violated its right to a jury trial, the court said that “Verizon could have gotten such a trial” if it had “declined to pay the forfeiture and preserved its opportunity for a de novo jury trial if the government sought to collect.” Instead, Verizon chose to pay the fine “and seek immediate review in our Court.”

By contrast, the 5th Circuit decision in AT&T’s favor said the FCC “acted as prosecutor, jury, and judge,” violating the right to a jury trial. The 5th Circuit said it was guided by the Supreme Court’s June 2024 ruling in Securities and Exchange Commission v. Jarkesy, which held that “when the SEC seeks civil penalties against a defendant for securities fraud, the Seventh Amendment entitles the defendant to a jury trial.”

The 2nd Circuit ruling said there are key differences between US telecom law and the securities laws considered in Jarkesy. It’s because of those differences that Verizon had the option of declining to pay the penalty and preserving its right to a jury trial, the court said.

In the Jarkesy case, the problem “was that the SEC could ‘siphon’ its securities fraud claims away from Article III courts and compel payment without a jury trial,” the 2nd Circuit panel said. “The FCC’s forfeiture order, however, does not, by itself, compel payment. The government needs to initiate a collection action to do that. Against this backdrop, the agency’s proceedings before a § 504(a) trial create no Seventh Amendment injury.”

Court rejects Verizon claim that selling location data without consent is legal Read More »

pay-per-output?-ai-firms-blindsided-by-beefed-up-robotstxt-instructions.

Pay-per-output? AI firms blindsided by beefed up robots.txt instructions.


“Really Simple Licensing” makes it easier for creators to get paid for AI scraping.

Logo for the “Really Simple Licensing” (RSL) standard. Credit: via RSL Collective

Leading Internet companies and publishers—including Reddit, Yahoo, Quora, Medium, The Daily Beast, Fastly, and more—think there may finally be a solution to end AI crawlers hammering websites to scrape content without permission or compensation.

Announced Wednesday morning, the “Really Simple Licensing” (RSL) standard evolves robots.txt instructions by adding an automated licensing layer that’s designed to block bots that don’t fairly compensate creators for content.

Free for any publisher to use starting today, the RSL standard is an open, decentralized protocol that makes clear to AI crawlers and agents the terms for licensing, usage, and compensation of any content used to train AI, a press release noted.

The standard was created by the RSL Collective, which was founded by Doug Leeds, former CEO of Ask.com, and Eckart Walther, a former Yahoo vice president of products and co-creator of the RSS standard, which made it easy to syndicate content across the web.

Based on the “Really Simple Syndication” (RSS) standard, RSL terms can be applied to protect any digital content, including webpages, books, videos, and datasets. The new standard supports “a range of licensing, usage, and royalty models, including free, attribution, subscription, pay-per-crawl (publishers get compensated every time an AI application crawls their content), and pay-per-inference (publishers get compensated every time an AI application uses their content to generate a response),” the press release said.

Leeds told Ars that the idea to use the RSS “playbook” to roll out the RSL standard arose after he invited Walther to speak to University of California, Berkeley students at the end of last year. That’s when the longtime friends with search backgrounds began pondering how AI had changed the search industry, as publishers today are forced to compete with AI outputs referencing their own content as search traffic nosedives.

Walther had watched the RSS standard quickly become adopted by millions of sites, and he realized that RSS had actually always been a licensing standard, Leeds said. Essentially, by adopting the RSS standard, publishers agreed to let search engines license a “bit” of their content in exchange for search traffic, and Walther realized that it could be just as straightforward to add AI licensing terms in the same way. That way, publishers could strive to recapture lost search revenue by agreeing to license all or some of their content to train AI in return for payment each time AI outputs link to their content.

Leeds told Ars that the RSL standard doesn’t just benefit publishers, though. It also solves a problem for AI companies, which have complained in litigation over AI scraping that there is no effective way to license content across the web.

“We have listened to them, and what we’ve heard them say is… we need a new protocol,” Leeds said. With the RSL standard, AI firms get a “scalable way to get all the content” they want, while setting an incentive that they’ll only have to pay for the best content that their models actually reference.

“If they’re using it, they pay for it, and if they’re not using it, they don’t pay for it,” Leeds said.

No telling yet how AI firms will react to RSL

At this point, it’s hard to say if AI companies will embrace the RSL standard. Ars reached out to Google, Meta, OpenAI, and xAI—some of the big tech companies whose crawlers have drawn scrutiny—to see if it was technically feasible to pay publishers for every output referencing their content. xAI did not respond, and the other companies declined to comment without further detail about the standard, appearing to have not yet considered how a licensing layer beefing up robots.txt could impact their scraping.

Today will likely be the first chance for AI companies to wrap their heads around the idea of paying publishers per output. Leeds confirmed that the RSL Collective did not consult with AI companies when developing the RSL standard.

But AI companies know that they need a constant stream of fresh content to keep their tools relevant and to continually innovate, Leeds suggested. In that way, the RSL standard “supports what supports them,” Leeds said, “and it creates the appropriate incentive system” to create sustainable royalty streams for creators and ensure that human creativity doesn’t wane as AI evolves.

While we’ll have to wait to see how AI firms react to RSL, early adopters of the standard celebrated the launch today. That included Neil Vogel, CEO of People Inc., who said that “RSL moves the industry forward—evolving from simply blocking unauthorized crawlers, to setting our licensing terms, for all AI use cases, at global web scale.”

Simon Wistow, co-founder of Fastly, suggested the solution “is a timely and necessary response to the shifting economics of the web.”

“By making it easy for publishers to define and enforce licensing terms, RSL lays the foundation for a healthy content ecosystem—one where innovation and investment in original work are rewarded, and where collaboration between publishers and AI companies becomes frictionless and mutually beneficial,” Wistow said.

Leeds noted that a key benefit of the RSL standard is that even small creators will now have an opportunity to generate revenue for helping to train AI. Tony Stubblebine, CEO of Medium, did not mince words when explaining the battle that bloggers face as AI crawlers threaten to divert their traffic without compensating them.

“Right now, AI runs on stolen content,” Stubblebine said. “Adopting this RSL Standard is how we force those AI companies to either pay for what they use, stop using it, or shut down.”

How will the RSL standard be enforced?

On the RSL standard site, publishers can find common terms, with templated or customizable text to add to their robots.txt files, so they can adopt the RSL standard today and start protecting their content from unfettered AI scraping. Here’s an example of how machine-readable licensing terms could look, added directly to a robots.txt file:

# NOTICE: all crawlers and bots are strictly prohibited from using this
# content for AI training without complying with the terms of the RSL
# Collective AI royalty license. Any use of this content for AI training
# without a license is a violation of our intellectual property rights.
License: https://rslcollective.org/royalty.xml
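
On the crawler side, honoring that directive could be as simple as checking robots.txt for a License line before scraping. Here is a minimal, hypothetical sketch in Python; the RSL Collective’s actual schema and client tooling are not detailed in the announcement, so this only illustrates the idea of a machine-readable license pointer:

import urllib.request

def find_rsl_license(domain):
    # Fetch a site's robots.txt and return the value of any "License:"
    # directive, or None. A hypothetical check: a genuinely RSL-aware
    # crawler would then fetch and honor the linked licensing terms
    # before using the site's content for training.
    url = f"https://{domain}/robots.txt"
    with urllib.request.urlopen(url, timeout=10) as resp:
        body = resp.read().decode("utf-8", errors="replace")
    for line in body.splitlines():
        if line.strip().lower().startswith("license:"):
            return line.split(":", 1)[1].strip()
    return None

license_url = find_rsl_license("example.com")
print(license_url or "no RSL license declared")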

Through RSL terms, publishers can automate licensing, with the cloud company Fastly partnering with the collective to provide technical enforcement—tech that Leeds described as a bouncer keeping unapproved bots away from valuable content. It seems likely that Cloudflare, which launched a pay-per-crawl program blocking greedy crawlers in July, could also help enforce the RSL standard.

For publishers, the standard “solves a business problem immediately,” Leeds told Ars, so the collective is hopeful that RSL will be rapidly and widely adopted. As further incentive, publishers can also rely on the RSL standard to “easily encrypt and license non-published, proprietary content to AI companies, including paywalled articles, books, videos, images, and data,” the RSL Collective site said, and that potentially could expand AI firms’ data pool.

On top of technical enforcement, Leeds said that publishers and content creators could legally enforce the terms, noting that the recent $1.5 billion Anthropic settlement suggests “there’s real money at stake” if you don’t train AI “legitimately.”

Should the industry adopt the standard, it could “establish fair market prices and strengthen negotiation leverage for all publishers,” the press release said. And Leeds noted that it’s very common for regulations to follow industry solutions (consider the Digital Millennium Copyright Act). Since the RSL Collective is already in talks with lawmakers, Leeds thinks “there’s good reason to believe” that AI companies will soon “be forced to acknowledge” the standard.

“But even better than that,” Leeds said, “it’s in their interest” to adopt the standard.

With RSL, AI firms can license content at scale “in a way that’s fair [and] preserves the content that they need to make their products continue to innovate.”

Additionally, the RSL standard may solve a problem that risks gutting trust and interest in AI at this early stage.

Leeds noted that currently, AI outputs don’t provide “the best answer” to prompts but instead rely on mashing up answers from different sources to avoid taking too much content from one site. That means that not only do AI companies “spend an enormous amount of money on compute costs to do that,” but AI tools may also be more prone to hallucination in the process of “mashing up” source material “to make something that’s not the best answer because they don’t have the rights to the best answer.”

“The best answer could exist somewhere,” Leeds said. But “they’re spending billions of dollars to create hallucinations, and we’re talking about: Let’s just solve that with a licensing scheme that allows you to use the actual content in a way that solves the user’s query best.”

By transforming the “ecosystem” with a standard that’s “actually sustainable and fair,” Leeds said that AI companies could also ensure that humanity never gets to the point where “humans stop producing” and “turn to AI to reproduce what humans can’t.”

Failing to adopt the RSL standard would be bad for AI innovation, Leeds suggested, perhaps paving the way for AI to replace search with a “sort of self-fulfilling swap of bad content that actually one doesn’t have any current information, doesn’t have any current thinking, because it’s all based on old training information.”

To Leeds, the RSL standard is ultimately “about creating the system that allows the open web to continue. And that happens when we get adoption from everybody,” he said, insisting that “literally the small guys are as important as the big guys” in pushing the entire industry to change and fairly compensate creators.

Photo of Ashley Belanger

Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.

Pay-per-output? AI firms blindsided by beefed up robots.txt instructions. Read More »

girlsdoporn-owner-michael-pratt-gets-27-years-for-sex-trafficking-conspiracy

GirlsDoPorn owner Michael Pratt gets 27 years for sex trafficking conspiracy

“For almost a decade, the Defendant led the scheme to systematically coerce and defraud women into engaging in filmed sexual activity for profit,” the sentencing recommendation said. “A sentence of 260 months is warranted, given the longevity of the scheme, the amount of profit, and the extent of the damage to the victims.”

Pratt’s plea agreement limited his rights to appeal the sentencing, but said he “may appeal a custodial sentence above 260 months.” The 27-year (324-month) sentence exceeds that. While the government agreed to recommend no more than 260 months, the plea agreement said the government “may support on appeal the sentence or restitution order actually imposed.”

Pratt fled the US in 2019, shortly before being charged with sex trafficking crimes. “He was named to the FBI’s Ten Most Wanted list and lived as an international fugitive for more than three years until his arrest in Spain in December 2022 and extradition to San Diego in March 2024,” the DOJ said.

Pratt tried to minimize his role

Pratt is the fourth person to be sentenced in the GirlsDoPorn case. Pratt’s business partner, Matthew Wolfe, received 14 years. Ruben Andre Garcia was sentenced to 20 years, and Theodore Gyi was sentenced to four years. Defendant Valorie Moser is scheduled to be sentenced on Friday this week.

Pratt’s sentencing memorandum tried to minimize his role in the conspiracy. “Circa 2014, Mr. Pratt’s childhood friend, Matt Wolfe, took over as the cameraman and Mr. Pratt spent more time in the office doing post-production work and other business related activities,” the filing said.

Pratt argued that Garcia exhibited “erratic and unpredictable” behavior and that “much of this conduct occurred outside of Mr. Pratt’s presence.” Pratt’s filing said he should not receive a sentence as long as Garcia’s.

Garcia “was a rapist,” Pratt’s filing said. “Mr. Pratt had no involvement in Garcia’s sexual activities with the models before or after filming, nor did he condone it. When he received some complaints about Garcia’s behavior, Mr. Pratt took precautions to ensure the safety of the models, including setting up nanny video cameras, securing hotel incidental refrigerators, and ensuring everyone left the hotel as a group.”

The government’s sentencing memorandum described Pratt as “the ringleader in a wide-ranging sex-trafficking conspiracy during which many women were defrauded into engaging in sex acts on camera, destroying many of their lives.” The “scheme would never have occurred” if not for Pratt’s actions, “and hundreds of women would not have been victimized,” the government filing said.

GirlsDoPorn owner Michael Pratt gets 27 years for sex trafficking conspiracy Read More »

chatgpt’s-new-branching-feature-is-a-good-reminder-that-ai-chatbots-aren’t-people

ChatGPT’s new branching feature is a good reminder that AI chatbots aren’t people

On Thursday, OpenAI announced that ChatGPT users can now branch conversations into multiple parallel threads, serving as a useful reminder that AI chatbots aren’t people with fixed viewpoints but rather malleable tools you can rewind and redirect. The company released the feature for all logged-in web users following years of user requests for the capability.

The feature works by letting users hover over any message in a ChatGPT conversation, click “More actions,” and select “Branch in new chat.” This creates a new conversation thread that includes all the conversation history up to that specific point, while preserving the original conversation intact.

Think of it almost like creating a new copy of a “document” to edit while keeping the original version safe—except that “document” is an ongoing AI conversation with all its accumulated context. For example, a marketing team brainstorming ad copy can now create separate branches to test a formal tone, a humorous approach, or an entirely different strategy—all stemming from the same initial setup.
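
Conceptually, the bookkeeping is simple enough to sketch: a branch is just a copy of the message history up to the chosen point. Here is a minimal, hypothetical model in Python (OpenAI has not published ChatGPT’s internal implementation, so the names and structure are illustrative only):

from dataclasses import dataclass, field

@dataclass
class Thread:
    # A chat thread modeled as an append-only list of (role, text) pairs.
    messages: list = field(default_factory=list)

    def say(self, role, text):
        self.messages.append((role, text))

    def branch_at(self, index):
        # Start a new thread containing the history up to and including
        # message `index`; the original thread stays intact.
        return Thread(messages=self.messages[: index + 1])

main = Thread()
main.say("user", "Brainstorm ad copy for a hiking boot.")
main.say("assistant", "Sure, what tone are you after?")

formal = main.branch_at(1)  # both branches share the setup...
formal.say("user", "Formal and technical, please.")
funny = main.branch_at(1)   # ...then diverge independently
funny.say("user", "Make it funny.")

print(len(main.messages), len(formal.messages), len(funny.messages))  # 2 3 3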

A screenshot of conversation branching in ChatGPT. OpenAI

The feature addresses a longstanding limitation of ChatGPT’s interface: users who wanted to try different approaches previously had to either overwrite their existing conversation past a certain point by editing an earlier prompt or start completely fresh. Branching allows exploring what-if scenarios easily—and unlike in a human conversation, you can try multiple different approaches.

A 2024 study conducted by researchers from Tsinghua University and Beijing Institute of Technology suggested that linear dialogue interfaces for LLMs poorly serve scenarios involving “multiple layers, and many subtasks—such as brainstorming, structured knowledge learning, and large project analysis.” The study found that linear interaction forces users to “repeatedly compare, modify, and copy previous content,” increasing cognitive load and reducing efficiency.

Some software developers have already responded positively to the update, with some comparing the feature to Git, the version control system that lets programmers create separate branches of code to test changes without affecting the main codebase. The comparison makes sense: Both allow you to experiment with different approaches while preserving your original work.

ChatGPT’s new branching feature is a good reminder that AI chatbots aren’t people Read More »

honda-combines-type-r-handling-with-hybrid-efficiency-for-2026-prelude

Honda combines Type-R handling with hybrid efficiency for 2026 Prelude

The chassis benefits from parts from a different Civic—the Type-R hot hatch. Ars has sadly yet to sample the current-generation Type-R, but everyone I know who has driven one has come away smiling. Type-R parts include the front suspension’s dual-axis struts and the Brembo brakes, which are there for when regen braking via the hybrid system is no longer sufficient.

Adaptive dampers control the Prelude’s ride, and there are four different drive modes. The powertrain simulates a manual transmission with something called S+ Shift, which “delivers quick simulated gearshift responses through seamless coordination between the engine and high-power motor, including downshift blips, rev matching, and gear holding.”

The shape is dictated by airflow. Honda

If the end result is as good as Hyundai’s N E-shift, it should be fun to play with. And if it isn’t, you can just leave the car in automatic mode.

Beyond that, expect all the latest Honda advanced driver assistance systems (also known as Honda Sensing), and an Android Automotive-based infotainment system with Google built in and wireless Apple CarPlay and Android Auto.

We’ll have to wait until closer to the car’s arrival for pricing, but expect the Prelude to start somewhere between $38,000 and $40,000.

Honda combines Type-R handling with hybrid efficiency for 2026 Prelude Read More »

former-nasa-chief-says-united-states-likely-to-lose-second-lunar-space-race

Former NASA chief says United States likely to lose second lunar space race

The hearing, titled “There’s a Bad Moon on the Rise: Why Congress and NASA Must Thwart China in the Space Race,” had no witnesses who disagreed with this viewpoint. They included Allen Cutler, CEO of the Coalition for Deep Space Exploration, the chief lobbying organization for SLS, Orion, and Gateway; Jim Bridenstine, former NASA Administrator who now leads government operations for United Launch Alliance; Mike Gold of Redwire, a Gateway contractor; and Lt. General John Shaw, former Space Command official.

The hearing, held before the Senate Commerce, Science, and Transportation Committee that Cruz chairs, included the usual mishmash of parochial politics, lobbying for traditional space, back-slapping, and fawning—at one point, Gold, a Star Trek fan, went so far as to assert that Cruz is the “Captain Kirk” of the US Senate.

Beyond this, however, there was a fair amount of teeth-gnashing about the fact that the United States faces a serious threat from China, which appears to be on course to put humans on the Moon before NASA can return there with the Artemis Program. China aims to land humans at the lunar south pole before 2030.

NASA likely to lose “race”

Bridenstine, who oversaw the creation of the Artemis Program half a decade ago, put it most bluntly: “Unless something changes, it is highly unlikely the United States will beat China’s projected timeline to the Moon’s surface,” he said.

Bridenstine and others on the panel criticized the complex nature of SpaceX’s Starship-based lunar lander, which NASA selected in April 2021 as a means to get astronauts down to the lunar surface and back. The proposal relies on Starship being refueled in low-Earth orbit by multiple Starship tanker launches.

Former NASA chief says United States likely to lose second lunar space race Read More »

trump’s-move-of-spacecom-to-alabama-has-little-to-do-with-national-security

Trump’s move of SPACECOM to Alabama has little to do with national security


The Pentagon says the move will save money, but acknowledges risk to military readiness.

President Donald Trump speaks to the media in the Oval Office at the White House on September 2, 2025 in Washington, DC. Credit: Alex Wong/Getty Images

President Donald Trump announced Tuesday that US Space Command will be relocated from Colorado to Alabama, returning to the Pentagon’s plans for the command’s headquarters from the final days of Trump’s first term in the White House.

The headquarters will move to the Army’s Redstone Arsenal in Huntsville, Alabama. Trump made the announcement in the Oval Office, flanked by Republican members of the Alabama congressional delegation.

The move will “help America defend and dominate the high frontier,” Trump said. It also marks another twist on a contentious issue that has pitted Colorado and Alabama against one another in a fight for the right to be home to the permanent headquarters of Space Command (SPACECOM), a unified combatant command responsible for carrying out military operations in space.

Space Command is separate from the Space Force and is made up of personnel from all branches of the armed services. The Space Force, on the other hand, is charged with supplying personnel and technology for use by multiple combatant commands. The newest armed service, established in 2019 during President Trump’s first term, is part of the Department of the Air Force, which also had the authority for recommending where to base Space Command’s permanent headquarters.

“US Space Command stands ready to carry out the direction of the president following today’s announcement of Huntsville, Alabama, as the command’s permanent headquarters location,” SPACECOM wrote on its official X account.

Military officials in the first Trump administration considered potential sites in Colorado, Florida, Nebraska, New Mexico, and Texas before the Air Force recommended basing Space Command in Huntsville, Alabama, on January 13, 2021, a week before Trump left office.

Members of Colorado’s congressional delegation protested the decision, suggesting the recommendation was political. Trump won a larger share of votes in Alabama in 2016, 2020, and 2024 than in any of the other states in contention. On average, a higher percentage of Colorado’s citizens cast their votes against Trump than in the other five states vying for Space Command’s permanent headquarters.

Trump’s reasons

Trump cited three reasons Tuesday for basing Space Command in Alabama. He noted Redstone Arsenal’s proximity to other government and industrial space facilities, the persistence of Alabama officials in luring the headquarters away from Colorado, and Colorado’s use of mail-in voting, a policy that has drawn Trump’s ire but is wholly unrelated to military space matters.

“That played a big factor, also,” Trump said of Colorado’s mail-in voting law.

None of the reasons for the relocation that Trump mentioned in his remarks on Tuesday explained why Alabama is a better place for Space Command’s headquarters than Colorado, although the Air Force has pointed to cost savings as a rationale for the move.

A Government Accountability Office (GAO) investigation concluded in 2022 that the Air Force did not follow “best practices” in formulating its recommendation to place Space Command at Redstone Arsenal, leading to “significant shortfalls in its transparency and credibility.”

A separate report in 2022 from the Pentagon’s own inspector general concluded the Air Force’s basing decision process was “reasonable” and complied with military policy and federal law, but criticized the decision-makers’ record-keeping.

Former President Joe Biden’s secretary of the Air Force, Frank Kendall, stood by the recommendation in 2023 to relocate Space Command to Alabama, citing an estimated $426 million in cost savings due to lower construction and personnel costs in Huntsville relative to Colorado Springs. Since then, however, Space Command has achieved full operational capability at Peterson Space Force Base, Colorado.

Now-retired Army Gen. James Dickinson raised concerns about moving Space Command from Colorado to Alabama. Credit: US Space Force/Tech. Sgt. Luke Kitterman

Army Gen. James Dickinson, head of Space Command from 2020 until 2023, favored keeping the headquarters in Colorado, according to a separate inspector general report released earlier this year.

“Mission success is highly dependent on human capital and infrastructure,” Dickinson wrote in a 2023 memorandum to the secretary of the Air Force. “There is risk that most of the 1,000 civilians, contractors, and reservists will not relocate to another location.”

One division chief within Space Command’s plans and policy directorate told the Pentagon’s inspector general in May 2024 that they feared losing 90 percent of their civilian workforce if the Air Force announced a relocation. A representative of another directorate told the inspector general’s office that they could say “with certainty” only one of 25 civilian employees in their division would move to a new headquarters location.

Officials at Redstone Arsenal and information technology experts at Space Command concluded it would take three to four years to construct temporary facilities in Huntsville with the same capacity, connectivity, and security as those already in use in Colorado Springs, according to the DoD inspector general.

Tension under Biden

Essentially, the inspector general reported, officials at the Pentagon made cost savings their top consideration in where to garrison Space Command. Leaders at Space Command prioritized military readiness.

President Biden decided in July 2023 that Space Command’s headquarters would remain in Colorado Springs. The decision, according to the Pentagon’s press secretary at the time, would “ensure peak readiness in the space domain for our nation during a critical period.” Alabama lawmakers decried Biden’s decision in favor of Colorado, claiming it, too, was politically motivated.

Space Command reached full operational capability at its headquarters at Peterson Space Force Base, Colorado, two years ahead of schedule in December 2023. At the time, Space Command leaders said they could only declare Space Command fully operational upon the selection of a permanent headquarters.

Now, a year-and-a-half later, the Trump administration will uproot the headquarters and move it more than 1,000 miles to Alabama. But it hasn’t been smooth sailing for Space Command in Colorado.

A new report by the GAO published in May said Space Command faced “ongoing personnel, facilities, and communications challenges” at Peterson, despite the command’s declaration of full operational capability. Space Command officials told the GAO the command’s posture at Peterson is “not sustainable long term and new military construction would be needed” in Colorado Springs.

Space Command was originally established in 1985. The George W. Bush administration later transferred responsibility for military space activities to the US Strategic Command, as part of a post-9/11 reorganization of the military’s command structure. President Trump reestablished Space Command in 2019, months before Congress passed legislation to make the Space Force the nation’s newest military branch.

Throughout its existence, Space Command has been headquartered at Peterson Space Force Base in Colorado Springs. But now, Pentagon officials say the growing importance of military space operations and potentially space warfare requires Space Command to occupy a larger headquarters than the existing facility at Peterson.

Peterson Space Force Base is also the headquarters of North American Aerospace Defense Command, or NORAD, US Northern Command, and Space Operations Command, all of which work closely with Space Command. Space Command officials told the GAO there were benefits in being co-located with operational space missions and centers, where engineers and operators control some of the military’s most important spacecraft in orbit.

Several large space companies also have significant operations or headquarters in the Denver metro area, including Lockheed Martin, United Launch Alliance, BAE Systems, and Sierra Space.

In Alabama, ULA and Blue Origin operate rocket and engine factories near Huntsville. NASA’s Marshall Space Flight Center and the Army’s Space and Missile Defense Command are located at Redstone Arsenal itself.

The headquarters building at Peterson Space Force Base, Colorado. Credit: US Space Force/Keefer Patterson

Colorado’s congressional delegation—six Democrats and four Republicans—issued a joint statement Tuesday expressing their disappointment in Trump’s decision.

“Today’s decision to move US Space Command’s headquarters out of Colorado and to Alabama will directly harm our state and the nation,” the delegation said in a statement. “We are united in fighting to reverse this decision. Bottom line—moving Space Command headquarters weakens our national security at the worst possible time.”

The relocation of Space Command headquarters is estimated to bring about 1,600 direct jobs to Huntsville, Alabama. The area surrounding the headquarters will also derive indirect economic benefits, something Colorado lawmakers said they fear will come at the expense of businesses and workers in Colorado Springs.

“Being prepared for any threats should be the nation’s top priority; a crucial part of that is keeping in place what is already fully operational,” the Colorado lawmakers wrote. “Moving Space Command would not result in any additional operational capabilities than what we have up and running in Colorado Springs now. Colorado Springs is the appropriate home for US Space Command, and we will take the necessary action to keep it there.”

Alabama’s senators and representatives celebrated Trump’s announcement Tuesday.

“The Air Force originally selected Huntsville in 2021 based 100 percent on merit as the best choice,” said Rep. Robert Aderholt (R-Alabama). “President Biden reversed that decision based on politics. This wrong has been righted and Space Command will take its place among Huntsville’s world-renowned space, aeronautics, and defense leaders.”

Democratic Colorado Gov. Jared Polis said in a statement that the Trump administration should provide “full transparency” and the “full details of this poor decision.”

“We hope other vital military units and missions are retained and expanded in Colorado Springs. Colorado remains an ideal location for future missions, including Golden Dome,” Polis said, referring to the Pentagon’s proposed homeland missile defense system.

Photo of Stephen Clark

Stephen Clark is a space reporter at Ars Technica, covering private space companies and the world’s space agencies. Stephen writes about the nexus of technology, science, policy, and business on and off the planet.

Trump’s move of SPACECOM to Alabama has little to do with national security Read More »