Author name: Kris Guyer


FCC derided as “Federal Censorship Commission” after pushing Jimmy Kimmel off ABC


Disney does FCC chair’s bidding, suspends Kimmel show over Charlie Kirk comment.

Jimmy Kimmel at The Walt Disney Company’s 77th Emmy Awards Party on September 14, 2025 in Los Angeles Credit: Getty Images | Chad Salvador

ABC pulled Jimmy Kimmel’s show off the air yesterday, shortly after Federal Communications Commission Chairman Brendan Carr urged the Disney-owned company to take action against Kimmel or face consequences at the FCC over Kimmel’s comments about Charlie Kirk’s killer.

Carr appeared on right-wing commentator Benny Johnson’s podcast yesterday and said, “We can do this the easy way or the hard way. These companies can find ways to change conduct, to take action, frankly on Kimmel, or there’s going to be additional work for the FCC ahead.” Carr urged Disney to suspend Kimmel and said broadcast stations that carry ABC content should refuse to carry Kimmel’s show.

After Carr’s comments and a statement by Nexstar that it would preempt Kimmel’s show on its ABC-affiliated stations, ABC confirmed in a statement that “Jimmy Kimmel Live! will be preempted indefinitely.” The decision was made by Disney CEO Robert Iger and TV division head Dana Walden, The New York Times reported. We contacted ABC today and will update this article if we get a response.

Several House Democratic leaders accused Carr of “engag[ing] in the corrupt abuse of power. He has disgraced the office he holds by bullying ABC, the employer of Jimmy Kimmel, and forcing the company to bend the knee to the Trump administration. FCC Chair Brendan Carr should resign immediately.” The top Democrat on the House Oversight and Government Reform Committee plans an investigation.

Anna Gomez, the only Democrat on the Republican-majority FCC, said the Kimmel suspension is “cowardly corporate capitulation by ABC that has put the foundation of the First Amendment in danger.” She said the “FCC does not have the authority, the ability, or the constitutional right to police content or punish broadcasters for speech the government dislikes,” but that “billion-dollar companies with pending business before the agency” are “vulnerable to pressure to bend to the government’s ideological demands.”

Former President Barack Obama criticized the Trump administration’s actions on Kimmel. “After years of complaining about cancel culture, the current administration has taken it to a new and dangerous level by routinely threatening regulatory action against media companies unless they muzzle or fire reporters and commentators it doesn’t like,” Obama wrote today.

Disney has pending business before the Trump administration, as Justice Department antitrust officials are investigating its pending merger with FuboTV.

“Federal Censorship Commission”

Media advocacy group Free Press said that Carr’s “Federal Censorship Commission” reached a “new low” in its push to get Kimmel off the air.

“Donald Trump and Brendan Carr have turned the FCC into the Federal Censorship Commission, ignoring the First Amendment and replacing the rule of law with the whims of right-wing bloggers,” Free Press co-CEO Craig Aaron said. “They’re abusing their power to shake down media companies with their dangerous demands for dishonest coverage and Orwellian compliance with the administration’s political agenda. This is nothing more than censorship and extortion. Worse still, the nation’s largest media companies are playing along.”

The FCC has sway over ABC and other major networks because it licenses the broadcast stations that carry the networks’ content. Although previous FCC chairs from both major parties avoided regulating TV news content, Carr has repeatedly threatened to punish stations accused of bias against Republicans.

“There’s calls for Kimmel to be fired. You could certainly see a path forward for suspension over this. The FCC is going to have remedies that we can look at. We may ultimately be called to be a judge on that,” Carr said.

The Kimmel controversy began Monday when he said during a monologue, “We hit some new lows over the weekend with the MAGA gang desperately trying to characterize this kid who murdered Charlie Kirk as anything other than one of them and doing everything they can to score political points from it.”

Tyler Robinson, the man charged with murdering Kirk, reportedly came from a conservative-leaning family. But Robinson’s mother told police that he “had become more political and had started to lean more to the left,” according to a probable cause statement filed in a Utah court.

Kimmel was planning to explain his comments during last night’s show before it got pulled, according to Deadline. “He was expected to unpack the statement on tonight’s show, highlighting that he was not saying Tyler Robinson, who allegedly shot Kirk, was ‘one of them,’ referring to Republicans, or MAGA supporters, but rather was highlighting how right-wing supporters were trying to distance themselves from the alleged shooter, who was charged with aggravated murder this week,” the article said.

Carr: “Some of the sickest conduct possible”

Carr said that Kimmel’s monologue “appears to be some of the sickest conduct possible,” and that he wants to “reinvigorate the public interest standard” that applies to licensed broadcasters.

Johnson asked Carr if Kimmel’s statement rises to the level of news distortion. As we explained in a feature article in April, Carr has repeatedly threatened to punish companies that violate the FCC news distortion policy that dates to the 1960s. The FCC technically has no rule or regulation against news distortion, which is why it is called a policy and not a rule. The FCC apparently hasn’t made a finding of news distortion since 1993.

In response to Johnson’s news distortion question, Carr said, “the FCC could be called upon to be an ultimate judge in that. But at this point it appears to be clear that you can make a strong argument that this is sort of an intentional effort to mislead the American people about a very core fundamental fact… this is a very, very serious issue right now for Disney.”

Revoking licenses difficult legally

Any FCC attempt to revoke licenses based on news distortion allegations could be challenged in court. Moreover, as we’ve written, revoking a license in the middle of a license term is so difficult legally that it has been described as effectively impossible. The FCC can go after a license when it’s up for renewal, but there are no TV station licenses up for renewal until 2028.

Despite those factors weighing in favor of broadcast stations, Carr can influence the decisions of major media companies with threats alone. “Broadcasters are entirely different than people who use other forms of communication,” Carr said on Johnson’s podcast. “They have a license granted by us at the FCC and that comes with it an obligation to operate in the public interest.”

Trump’s various demands for license revocations have omitted the fact that it is broadcast stations, not networks, that hold FCC licenses. Carr is aware of the distinction and made a point of saying that broadcast stations could lose their licenses if they continue to carry Kimmel’s show:

There’s actions we can take on licensed broadcasters and, frankly, I think it’s past time that a lot of these licensed broadcasters themselves push back on [NBC owner] Comcast and Disney and say, ‘listen, we are going to preempt, we are not going to run Kimmel anymore until you straighten this out because we, the licensed broadcaster, are running the possibility of fines or license revocations from the FCC if we continue to run content that ends up being a pattern of news distortion.’ Disney needs to see some change here, but the individual licensed stations that are taking their content, it’s time for them to step up and say this garbage isn’t something that we think serves the needs of our local communities. This status quo is obviously not acceptable where we are.

Carr also said the FCC is investigating Disney’s DEI (diversity, equity, and inclusion) practices “for potentially violating the FCC’s equal employment opportunity rules. We’ve issued Disney a letter of inquiry on that, we’ve received some documents from them.” Carr has made ending DEI practices a condition for getting mergers approved.

Trump hails “great news,” wants other hosts fired

Trump posted on social media that Kimmel being taken off the air is great for America and urged NBC to cancel late-night hosts Jimmy Fallon and Seth Meyers. “Great News for America: The ratings challenged Jimmy Kimmel Show is CANCELLED,” Trump wrote. “Congratulations to ABC for finally having the courage to do what had to be done. Kimmel has ZERO talent, and worse ratings than even Colbert, if that’s possible. That leaves Jimmy and Seth, two total losers, on Fake News NBC. Their ratings are also horrible. Do it NBC!!! President DJT.”

Last year, Trump obtained a $15 million settlement with ABC over false statements made on air by George Stephanopoulos. More recently, he struck a $16 million settlement with CBS owner Paramount over his claim that 60 Minutes deceptively manipulated a pre-election interview with Kamala Harris.

Trump’s 60 Minutes claim was widely described as frivolous, but Paramount settled with Trump at a time when it was seeking FCC permission to complete an $8 billion merger with Skydance. Carr’s FCC subsequently approved the merger and imposed a condition requiring a CBS ombudsman, which Carr described as a “bias monitor.” After his victory over CBS, Trump called on the FCC to revoke ABC and NBC licenses.

Late-night host Stephen Colbert called the settlement with Trump “a big fat bribe.” CBS subsequently announced it would cancel Colbert’s show when the season ends in May 2026.

Kimmel urged people to stop “angry finger-pointing”

The Nexstar statement on Kimmel said the host’s comments “are offensive and insensitive at a critical time in our national political discourse… Continuing to give Mr. Kimmel a broadcast platform in the communities we serve is simply not in the public interest at the current time, and we have made the difficult decision to preempt his show in an effort to let cooler heads prevail as we move toward the resumption of respectful, constructive dialogue.” Nexstar is trying to complete a $6.2 billion purchase of Tegna, and needs the FCC to relax its ownership-cap rule.

Conservative broadcaster Sinclair issued a statement praising Carr’s remarks and urging the FCC to “take immediate regulatory action to address control held over local broadcasters by the big national networks.” Even if ABC reinstates Kimmel, Sinclair said it will not air the show on its stations “until formal discussions are held with ABC regarding the network’s commitment to professionalism and accountability.”

Sinclair urged Kimmel to apologize to Kirk’s family and “make a meaningful personal donation” to the Kirk Family and the Kirk group Turning Point USA. “Sinclair’s ABC stations will air a special in remembrance of Charlie Kirk this Friday, during Jimmy Kimmel Live’s timeslot,” the statement said. “The special will also air across all Sinclair stations this weekend. In addition, Sinclair is offering the special to all ABC affiliates across the country.”

SAG-AFTRA called Kimmel’s suspension “the type of suppression and retaliation that endangers everyone’s freedoms,” and said that “Democracy thrives when diverse points of view are expressed.” A Writers Guild of America statement said, “If free speech applied only to ideas we like, we needn’t have bothered to write it into the Constitution… Shame on those in government who forget this founding truth. As for our employers, our words have made you rich. Silencing us impoverishes the whole world.”

In a social media post on the day of Kirk’s death, Kimmel expressed sadness about the killing and urged people to avoid angry finger-pointing. “Instead of the angry finger-pointing, can we just for one day agree that it is horrible and monstrous to shoot another human?” Kimmel wrote. “On behalf of my family, we send love to the Kirks and to all the children, parents and innocents who fall victim to senseless gun violence.”


Jon is a Senior IT Reporter for Ars Technica. He covers the telecom industry, Federal Communications Commission rulemakings, broadband consumer affairs, court cases, and government regulation of the tech industry.



Some dogs can classify their toys by function

Certain dogs can not only memorize the names of objects like their favorite toys, but they can also extend those labels to entirely new objects with a similar function, regardless of whether or not they are similar in appearance, according to a new paper published in the journal Current Biology. It’s a cognitively advanced ability known as “label extension,” and for animals to acquire it usually involves years of intensive training in captivity. But the dogs in this new study developed the ability to classify their toys by function with no formal training, merely by playing naturally with their owners.

Co-author Claudia Fugazza of Eötvös Loránd University in Budapest, Hungary, likens this ability to a person calling a hammer and a rock by the same name, or a child understanding that “cup” can describe a mug, a glass, or a tumbler, because they serve the same function. “The rock and the hammer look physically different, but they can be used for the same function,” she said. “So now it turns out that these dogs can do the same.”

Fugazza and her Hungarian colleagues have been studying canine behavior and cognition for several years. For instance, in 2023, we reported on the group’s experiments on how dogs interpret gestures, such as pointing at a specific object. A dog will interpret the gesture as a directional cue, unlike a human toddler, who will more likely focus on the object itself. It’s called spatial bias, and the team concluded that the phenomenon arises from a combination of how dogs see (visual acuity) and how they think, with “smarter” dog breeds prioritizing an object’s appearance as much as its location. This suggests the smarter dogs’ information processing is more similar to that of humans.

Another aspect of the study involved measuring the length of a dog’s head, which prior research has shown is correlated with visual acuity. The shorter a dog’s head, the more similar its visual acuity is to human vision: such dogs have a higher concentration of retinal ganglion cells in the center of their field of vision, which sharpens central vision and gives them binocular depth perception. The testing showed that dogs with better visual acuity, who also scored higher on the series of cognitive tests, exhibited less spatial bias. This suggests that canine spatial bias is not simply a sensory matter but is also influenced by how dogs think: “smarter” dogs have less spatial bias.



Meta’s $799 Ray-Ban Display is the company’s first big step from VR to AR

Zuckerberg also showed how the neural interface can be used to compose messages (on WhatsApp, Messenger, Instagram, or via a connected phone’s messaging apps) by following your mimed “handwriting” across a flat surface. Though this feature reportedly won’t be available at launch, Zuckerberg said he had gotten up to “about 30 words per minute” in this silent input mode.

The most impressive part of Zuckerberg’s on-stage demo that will be available at launch was probably a “live caption” feature that automatically types out the words your partner is saying in real time. The feature reportedly filters out background noise to focus on captioning just the person you’re looking at, too.

A Meta video demos how live captioning works on the Ray-Ban Display (though the field of view on the actual glasses is likely much more limited). Credit: Meta

Beyond those “gee whiz” kinds of features, the Meta Ray-Ban Display can basically mirror a small subset of your smartphone’s apps on its floating display. Being able to get turn-by-turn directions or see recipe steps on the glasses without having to glance down at a phone feels like a genuinely useful new interaction mode. Using the glasses display as a viewfinder to line up a photo or video (using the built-in 12 megapixel, 3x zoom camera) also seems like an improvement over previous display-free smartglasses.

But accessing basic apps like weather, reminders, calendar, and emails on your tiny glasses display strikes us as probably less convenient than just glancing at your phone. And hosting video calls via the glasses by necessity forces your partner to see what you’re seeing via the outward-facing camera, rather than seeing your actual face.

Meta also showed off some pie-in-the-sky video about how future “Agentic AI” integration would be able to automatically make suggestions and note follow-up tasks based on what you see and hear while wearing the glasses. For now, though, the device represents what Zuckerberg called “the next chapter in the exciting story of the future of computing,” which should serve to take focus away from the failed VR-based metaverse that was the company’s last “future of computing.”



Report: Apple inches closer to releasing an OLED touchscreen MacBook Pro

At multiple points over many years, Apple executives have taken great pains to point out that they think touchscreen Macs are a silly idea. But it remains one of those persistent Mac rumors that crops up over and over again every couple of years, from sources that are reliable enough that they shouldn’t be dismissed out of hand.

Today’s contribution comes from supply chain analyst Ming-Chi Kuo, who usually has some insight into what Apple is testing and manufacturing. Kuo says that touchscreen MacBook Pros are “expected to enter mass production by late 2026,” and that the devices will also shift to using OLED display panels instead of the Mini LED panels on current-generation MacBook Pros.

Kuo says that Apple’s interest in touchscreen Macs comes from “long-term observation of iPad user behavior.” Apple’s tablet hardware launches in the last few years have also included keyboard and touchpad accessories, and this year’s iPadOS 26 update in particular has helped to blur the line between the touch-first iPad and the keyboard-and-pointer-first Mac. In other words, Apple has already acknowledged that both kinds of input can be useful when combined in the same device; taking that same jump on the Mac feels like a natural continuation of work Apple is already doing.

Touchscreens became much more common on Windows PCs starting in 2012 when Windows 8 was released, itself a response to Apple’s introduction of the iPad a couple of years before. Microsoft backed off on almost all of Windows 8’s design decisions in the following years after the dramatic UI shift proved unpopular with traditional mouse-and-keyboard users, but touchscreen PCs like Microsoft’s Surface lineup have persisted even as the software has changed.



iOS 26 review: A practical, yet playful, update


More than just Liquid Glass

Spotlighting the most helpful new features of iOS 26.

The new Clear look for icons in iOS 26 can make it hard to identify apps, since they’re all the same color. Credit: Scharon Harding

iOS 26 became publicly available this week, ushering in a new OS naming system and the software’s most overhauled look since 2013. It may take time to get used to the new “Liquid Glass” look, but it’s easier to appreciate the pared-down controls.

Beyond a glassy, bubbly new design, the update’s flashiest features include new Apple Intelligence AI integration that varies in usefulness, from fluffy new Genmoji abilities to a nifty live translation feature for Phone, Messages, and FaceTime.

New tech is often bogged down with AI-based features that prove to be overhyped, unreliable, or just not that useful. iOS 26 brings a little of each, so in this review, we’ll home in on the iOS updates that will benefit both mainstream and power users the most.

Table of Contents

Let’s start with Liquid Glass

If we’re talking about changes that you’re going to use a lot, we should start with the new Liquid Glass software design that Apple is applying across all of its operating systems. iOS hasn’t had this much of a makeover since iOS 7. However, where iOS 7 applied a flatter, minimalist effect to windows and icons and their edges, iOS 26 adds a (sometimes frosted) glassy look and a mildly fluid movement to actions such as pulling down menus or long-pressing controls. All the while, windows look like they’re reflecting the content underneath them. When you pull Safari’s menu atop a webpage, for example, blurred colors from the webpage’s images and text are visible on empty parts of the menu.

Liquid Glass is now part of most of Apple’s consumer devices, including Macs and Apple TVs, but the dynamic visuals and motion are especially pronounced as you use your fingers to poke, slide, and swipe across your iPhone’s screen.

For instance, when you use a tinted color theme or the new clear theme for Home Screen icons, colors from the Home Screen’s background look like they’re refracting from under the translucent icons. It’s especially noticeable when you slide to different Home Screen pages. And in Safari, the address bar shrinks down and becomes more translucent as you scroll to read an article.

Because the theme is incorporated throughout the entire OS, the Liquid Glass effect can be cheesy at times. It feels forced in areas such as Settings, where text that just scrolled past looks slightly blurred at the top of the screen.

Liquid Glass makes the top of the Settings menu look blurred. Credit: Scharon Harding

Other times, the effect feels fitting, like when pulling the Control Center down and its icons appear to stretch down to the bottom of the screen and then quickly bounce into their standard size as you release your finger. Another place Liquid Glass flows nicely is in Photos. As you browse your pictures, colors subtly pop through the translucent controls at the bottom of the screen.

This is a matter of appearance, so you may have your own take on whether Liquid Glass looks tasteful or not. But overall, it’s the type of redesign that’s distinct enough to be a fun change, yet mild enough that you can grow accustomed to it if you’re not immediately impressed.

Liquid Glass simplifies navigation (mostly)

There’s more to Liquid Glass than translucency. Part of the redesign is simplifying navigation in some apps by displaying fewer controls.

Opening Photos is now cleaner at launch, bringing you to all of your photos instead of the Collections section, like iOS 18 does. At the bottom are translucent tabs for Library and Collections, plus a Search icon. Once you start browsing, the Library and Collections tabs condense into a single icon, and Years, Months, and All tabs appear, maintaining a translucence that helps keep your focus on your pictures.

Similarly, the initial controls displayed at the bottom of the screen when you open Camera are pared down from six different photo- and video-shooting modes to the two that really matter: Photo and Video.

You can still bring up more advanced options (such as Flash, Live, Timer) with one tap. And at the top of the camera’s field of view are smaller toggles for night mode and flash. But when you want to take a quick photo, iOS 26 makes it easier to focus on the necessities while keeping the extraneous within short reach.

If you long-press Photo, options for the Time-Lapse, Slow-Mo, Cinematic, Portrait, Spatial, and Pano modes appear. Credit: Scharon Harding

iOS 26 takes the same approach with Video mode by focusing on the essentials (zoom, resolution, frame rate, and flash) at launch.

New layout options for navigating Safari, however, slowed me down. In a new Compact view, the address bar lives at the bottom of the screen without a dedicated toolbar, giving the webpage more screen space. But this setup makes accessing common tasks, like opening a new or old tab, viewing bookmarks, or sharing a link, tedious because they’re hidden behind a menu button.

If you tend to have multiple browser tabs open, you’ll want to stick with the classic layout, now called Top (where the address bar is at the top of the screen and the toolbar is at the bottom) or the Bottom layout (where the address bar and toolbar are at the bottom of the screen).

On the more practical side of Safari updates is a new ability to turn any webpage into a web app, making favorite and important URLs accessible quickly and via a dedicated Home Screen icon. This has been an iOS feature for a long time, but until now the pages always opened in Safari. Users can still do this if they like, but by default these sites now open as their own distinct apps, with dedicated icons in the app switcher. Web apps open full-screen, but in my experience, back and forward buttons only come up if you go to a new website. Sliding left and right replaces dedicated back and forward controls, but sliding isn’t as reliable as just tapping a button.

Viewing Ars Technica as a web app. Credit: Scharon Harding

iOS 26 remembers that iPhones are telephones

With so much focus on smartphone chips, screens, software, and AI lately, it can be easy to forget that these devices are telephones. iOS 26 doesn’t overlook the core purpose of iPhones, though. Instead, the new operating system adds a lot to the process of making and receiving phone calls, video calls, and text messages, starting with the look of the Phone app.

Continuing the streamlined Liquid Glass redesign, the Phone app on iOS 26 consolidates the bottom controls from Favorites, Recents, Contacts, Keypad, and Voicemail, to Calls (where voicemails also live), Contacts, and Keypad, plus Search.

I’d rather have a Voicemails section at the bottom of the screen than Search, though. The Voicemails section is still accessible by opening a menu at the top-right of the screen, but it’s less prominent, and getting to it requires more screen taps than before.

On Phone’s opening screen, you’ll see the names or numbers of missed calls and voicemails in red. But voicemails also have a blue dot next to the red phone number or name (along with text summarizing or transcribing the voicemail underneath if those settings are active). This setup caused me to overlook missed calls initially. Missed calls with voicemails looked more urgent because of the blue dot. For me, at first glance, it appeared as if the blue dots represented unviewed missed calls and that red numbers/names without a blue dot were missed calls that I had already viewed. It’s taking me time to adjust, but there’s logic behind having all missed phone activity in one place.

Fighting spam calls and messages

For someone like me, whose phone number seems to have made it onto every marketer’s and scammer’s contact list, it’s empowering to have iOS 26’s screening features help reduce time spent dealing with spam.

The phone can be set to automatically ask callers with unsaved numbers to state their name. As this happens, iOS displays the caller’s response on-screen, so you can decide if you want to answer or not. If you’re not around when the phone rings, you can view the transcript later and then mark the caller as known, if desired. This has been my preferred method of screening calls and reduces the likelihood of missing a call I want to answer.

There are also options for silencing calls and voicemails from unknown numbers and having them only show in a section of the app that’s separate from the Calls tab (and accessible via the aforementioned Phone menu).

A new Phone menu helps sort important calls from calls that are likely spam. Credit: Scharon Harding

You could also have iOS direct calls that your cell phone carrier identifies as spam to voicemail and only show the missed calls in the Phone menu’s dedicated Spam list. I found that, while the spam blocker is fairly reliable, silencing calls from unsaved numbers resulted in me missing unexpected calls from, say, an interview source or my bank. And looking through my spam and unknown callers lists sounds like extra work that I’m unlikely to do regularly.

Messages

iOS 26 applies the same approach to Messages. You can now have texts from unknown senders and spam messages automatically placed into folders that are separate from your other texts. It’s helpful for avoiding junk messages, but it can be confusing if you’re waiting for something like a two-factor authentication text, for example.

Elsewhere in Messages is a small but effective change to browsing photos, links, and documents previously exchanged via text. Upon tapping the name of a person in a conversation in Messages, you’ll now see tabs for viewing that conversation’s settings (such as the recipient’s number and a toggle for sending read receipts), as well as separate tabs for photos and links. Previously, this was all under one tab, so if you wanted to find a previously sent link, you had to scroll through the conversation’s settings and photos. Now, you can get to links with a couple of quick taps. Additionally, with iOS 26 you can finally set up custom iMessage backgrounds, including premade ones and ones that you can make from your own photos or by using generative AI. It’s not an essential update but is an easy way to personalize your iPhone by brightening up texts.

Hold Assist

Another time saver is Hold Assist. It makes calling customer service slightly more tolerable by allowing you to hang up during long wait times and have your iPhone ring when someone’s ready to talk to you. It’s a feature that some customer service departments have offered for years already, but it’s handy to always have it available.

You have to be quick to respond, though. One time I answered the phone after using Hold Assist, and the caller informed me that they had said “hello” a few times already. This is despite the fact that iOS is supposed to let the agent know that you’ll be on the phone shortly. If I had waited a couple more seconds to pick up the phone, it’s likely that the customer service rep would have hung up.

Live translations

One of the most novel features that iOS 26 brings to iPhone communication is real-time translations for Spanish, Mandarin, French, German, Italian, Japanese, Korean, and Portuguese. After downloading the necessary language libraries, iOS can translate one of those languages to another in real time when you’re talking on the phone or FaceTime or texting.

The feature worked best in texts, where the software doesn’t have to deal with varying accents, people speaking fast or over one another, stuttering, or background noise. Translated texts and phone calls always show the original text written in the sender’s native language, so you can double-check translations or see things that translations can miss, like acronyms, abbreviations, and slang.

Translating some basic Spanish. Credit: Scharon Harding

During calls or FaceTime, Live Translation sometimes struggled to keep up while it tried to manage the nuances and varying speeds of how different people speak, as well as laughs and other interjections.

However, it’s still remarkable that the iPhone can help remove language barriers without any additional hardware, apps, or fees. It will be even better if Apple can improve reliability and add more languages.

Spatial images on the Home and Lock Screen

The new spatial images feature is definitely on the fluffier side of this iOS update, but it is also a practical way to spice up your Lock Screen, Home Screen, and the Home Screen’s Photos widget.

Basically, it applies a 3D effect to any photo in your library, which is visible as you move your phone around in your hand. Apple says that to do this, iOS 26 uses the same generative AI models that the Apple Vision Pro uses and creates a per-pixel depth map that makes parts of the image appear to pop out as you move the phone within six degrees of freedom.
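Apple doesn’t document exactly how its depth maps drive the effect, but the basic parallax idea a per-pixel depth map enables can be sketched in a few lines. This is a toy illustration under my own assumptions (the `parallax_shift` helper is invented, not Apple’s API): nearer pixels shift farther than distant ones as the phone tilts, which is what makes foreground objects appear to pop out.

```python
import numpy as np

def parallax_shift(image, depth, shift_px):
    """Shift each pixel horizontally in proportion to its depth.

    image: (H, W) array; depth: (H, W) values in [0, 1], where 1 is
    nearest the camera. Nearer pixels move farther, mimicking the
    pop-out effect as the phone moves.
    """
    h, w = image.shape
    out = np.zeros_like(image)
    for y in range(h):
        for x in range(w):
            # Scale the offset by depth: the background barely moves.
            nx = x + int(round(shift_px * depth[y, x]))
            if 0 <= nx < w:
                out[y, nx] = image[y, x]
    return out

# Toy 1x4 "image" whose last pixel is marked as nearest.
img = np.array([[10, 20, 30, 40]])
dep = np.array([[0.0, 0.0, 0.0, 1.0]])
shifted = parallax_shift(img, dep, shift_px=-1)
# Only the "near" pixel (40) moves; the background stays put.
```

A real implementation would also inpaint the holes the shifted foreground leaves behind, which is presumably where Apple’s generative models come in.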

The 3D effect is more powerful on some images than others, depending on the picture’s composition. It worked well on a photo of my dog sitting in front of some plants and behind a leaf of another plant. I positioned the Lock Screen clock so that it appears tucked behind her fur, and when I move the phone around, the dog and the leaf in front of her appear to move around, while the background plants stay still.

But in images with few items and sparser backgrounds, the spatial effect looks unnatural. And oftentimes, the spatial effect can be quite subtle.

Still, for those who like personalizing their iPhone with Home and Lock Screen customization, spatial scenes are a simple and harmless way to liven things up. And, if you like the effect enough, a new spatial mode in the Camera app allows you to create new spatial photos.

A note on Apple Intelligence notification summaries

As we’ve already covered in our macOS 26 Tahoe review, Apple Intelligence-based notification summaries haven’t improved much since their 2024 debut in iOS 18 and macOS 15 Sequoia. After problems with showing inaccurate summaries of news notifications, Apple updated the feature to warn users that the summaries may be inaccurate. But it’s still hit or miss when it comes to how easy it is to decipher the summaries.

I did have occasional success with notification summaries in iOS 26. For instance, I understood a summary of a voicemail that said, “Payment may have appeared twice; refunds have been processed.” Because I had already received a similar message via email (a store had accidentally charged me twice for a purchase and then refunded me), I knew I didn’t need to open that voicemail.

Vague summaries sometimes tipped me off as to whether a notification was important. A summary reading “Townhall meeting was hosted; call [real phone number] to discuss issues” was enough for me to know that I had a voicemail about a meeting that I never expressed interest in. It wasn’t the most informative summary, but in this case, I didn’t need a lot of information.

However, most of the time, it was still easier to just open the notification than try to decipher what Apple Intelligence was trying to tell me. Summaries aren’t really helpful and don’t save time if you can’t fully trust their accuracy or depth.

Playful, yet practical

With iOS 26, iPhones get a playful new design that’s noticeable and effective but not so drastically different that it will offend or distract those who are happy with the way iOS 18 works. It’s exciting to experience one of iOS’s biggest redesigns, but what really stands out are the thoughtful tweaks that bring practical improvements to core features, like making and receiving phone calls and taking pictures.

Some additions and changes are superfluous, but the update generally succeeds at improving functionality without introducing jarring changes that alienate users or force them to relearn how to use their phone.

I can’t guarantee that you’ll like the Liquid Glass design, but the other updates should make it simpler to do some of the most important tasks on an iPhone, which should be a welcome improvement for long-time users.

Photo of Scharon Harding

Scharon is a Senior Technology Reporter at Ars Technica writing news, reviews, and analysis on consumer gadgets and services. She’s been reporting on technology for over 10 years, with bylines at Tom’s Hardware, Channelnomics, and CRN UK.

iOS 26 review: A practical, yet playful, update

Northrop Grumman’s new spacecraft is a real chonker

What happens when you use a SpaceX Falcon 9 rocket to launch Northrop Grumman’s Cygnus supply ship? A record-setting resupply mission to the International Space Station.

The first flight of Northrop’s upgraded Cygnus spacecraft, called Cygnus XL, is on its way to the international research lab after launching Sunday evening from Cape Canaveral Space Force Station, Florida. This mission, known as NG-23, is set to arrive at the ISS early Wednesday with 10,827 pounds (4,911 kilograms) of cargo to sustain the lab and its seven-person crew.

By a sizable margin, this is the heaviest cargo load transported to the ISS by a commercial resupply mission. NASA astronaut Jonny Kim will use the space station’s Canadian-built robotic arm to capture the cargo ship on Wednesday, then place it on an attachment port for crew members to open hatches and start unpacking the goodies inside.

A bigger keg

The Cygnus XL spacecraft looks a lot like the vehicles Northrop has flown on previous missions to the station. It has a service module manufactured at the company’s factory in Northern Virginia. This segment of the spacecraft provides power, propulsion, and other necessities to keep Cygnus operating in orbit.

The most prominent features of the Cygnus cargo freighter are its circular, fan-like solar arrays and an aluminum cylinder called the pressurized cargo module that bears some resemblance to a keg of beer. This is the element that distinguishes the Cygnus XL from earlier versions of the Cygnus supply ship.

The cargo module is 5.2 feet (1.6 meters) longer on the Cygnus XL. The full spacecraft is roughly the size of two Apollo command modules, according to Ryan Tintner, vice president of civil space systems at Northrop Grumman. Put another way, the volume of the cargo section is equivalent to two-and-a-half minivans.
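The reported figures convert cleanly; here is a two-line sanity check (the conversion factors are standard constants, not from the article):

```python
# Sanity-check the cargo figures reported for NG-23.
LB_PER_KG = 2.20462   # pounds per kilogram
M_PER_FT = 0.3048     # meters per foot

cargo_kg = 10_827 / LB_PER_KG   # reported as 4,911 kg
extra_m = 5.2 * M_PER_FT        # reported as 1.6 m

print(round(cargo_kg), round(extra_m, 1))  # 4911 1.6
```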

“The most notable thing on this mission is we are debuting the Cygnus XL configuration of the spacecraft,” Tintner said. “It’s got 33 percent more capacity than the prior Cygnus spacecraft had. Obviously, more may sound like better, but it’s really critical because we can deliver significantly more science, as well as we’re able to deliver a lot more cargo per launch, really trying to drive down the cost per kilogram to NASA.”

A SpaceX Falcon 9 rocket ascends to orbit Sunday after launching from Cape Canaveral Space Force Station, Florida, carrying Northrop Grumman’s Cygnus XL cargo spacecraft toward the International Space Station. Credit: Manuel Mazzanti/NurPhoto via Getty Images

Cargo modules for Northrop’s Cygnus spacecraft are built by Thales Alenia Space in Turin, Italy, employing a similar design to the one Thales used for several of the space station’s permanent modules. Officials moved forward with the first Cygnus XL mission after the preceding cargo module was damaged during shipment from Italy to the United States earlier this year.


RFK Jr. adds more anti-vaccine members to CDC vaccine advisory panel

Kirk Milhoan, a pediatric cardiologist who is a senior fellow at the Independent Medical Alliance (formerly Front Line COVID-19 Critical Care Alliance), which promotes misinformation about COVID-19 vaccines and touts unproven and dubious COVID-19 treatments. Those include the malaria drug hydroxychloroquine, the de-worming drug ivermectin, and various concoctions of vitamins and other drugs. Milhoan has stated that mRNA COVID-19 vaccines should be removed from the market, telling KFF in March: “We should stop it and test it more before we move forward.”

Evelyn Griffin, an obstetrician and gynecologist in Louisiana who reportedly lost her job for refusing to get a COVID-19 vaccine. In a speech at a Louisiana Health Freedom Day in May 2024, Griffin claimed that doctors “blindly believed” that mRNA COVID-19 vaccines were safe. She has also claimed that the vaccines cause “bizarre and rare conditions,” according to the Post.

Hillary Blackburn, a pharmacist in St. Louis. Reuters reports that she is the daughter-in-law of Sen. Marsha Blackburn (R-Tenn.), who has opposed vaccine mandates.

Raymond Pollak, a semi-retired transplant surgeon who filed a whistleblower lawsuit against the University of Illinois Hospital in 1999, alleging that the hospital manipulated patient data to increase patients’ chances of receiving livers. The hospital settled the suit, paying $2.5 million while denying wrongdoing.

ACIP is scheduled to meet at the end of this week, on September 18 and September 19. According to an agenda recently posted online, the committee will vote on recommendations for a measles, mumps, rubella, and varicella (MMRV) combination vaccine, the Hepatitis B vaccine, and this year’s updated COVID-19 vaccines. Vaccine experts widely fear that the committee will rescind recommendations and restrict access to those vaccines. Such moves will likely create new, potentially insurmountable barriers for people, including children, to get vaccines.

ACIP-recommended vaccines are required to be covered by private health insurance plans and by the Vaccines for Children program for Medicaid-eligible and under- or uninsured kids, which covers about half of American children. Without ACIP recommendations for a vaccine, insurance coverage would be an open question, and vulnerable children could lose access entirely.


NASA closing its original repository for Columbia artifacts to tours

NASA is changing the way that its employees come in contact with, and remember, one of its worst tragedies.

In the wake of the 2003 loss of the space shuttle Columbia and its STS-107 crew, NASA created a program to use the orbiter’s debris for research and education at Kennedy Space Center in Florida. Agency employees were invited to see what remained of the space shuttle as a powerful reminder as to why they had to be diligent in their work. Access to the Columbia Research and Preservation Office, though, was limited as a result of its location and related logistics.

To address that and open up the experience to more of the workforce at Kennedy, the agency has quietly begun work to establish a new facility.

“The room, titled Columbia Learning Center (CLC), is a whole new concept,” a NASA spokesperson wrote in an email. “There are no access requirements; anyone at NASA Kennedy can go in any day of the week and stay as long as they like. The CLC will be available whenever employees need the inspiration and message for generations to come.”

Debris depository

On February 1, 2003, Columbia was making its way back from a 16-day science mission in Earth orbit when the damage that it suffered during its launch resulted in the orbiter breaking apart over East Texas. Instead of landing at Kennedy as planned, Columbia fell to the ground in more than 85,000 pieces.

The tragedy claimed the lives of commander Rick Husband, pilot Willie McCool, mission specialists David Brown, Kalpana Chawla, Michael Anderson, and Laurel Clark, and payload specialist Ilan Ramon of Israel.


60 years after Gemini, newly processed images reveal incredible details


“It’s that level of risk that they were taking. I think that’s what really hit home.”

Before / after showing the image transformation. Buzz Aldrin is revealed as he takes the first selfie in space on Gemini 12, November 12, 1966. Credit: NASA / ASU / Andy Saunders

Six decades have now passed since some of the most iconic Project Gemini spaceflights. The 60th anniversary of Gemini 4, when Ed White conducted the first US spacewalk, came in June. The next mission, Gemini 5, marked the 60th anniversary of its 1965 splashdown just two weeks ago. These missions are now forgotten by most Americans, as most of the people who were alive at the time are now deceased.

However, during these early years of spaceflight, NASA engineers and astronauts cut their teeth on a variety of spaceflight firsts, flying a series of harrowing missions during which it seems a miracle that no one died.

Because the Gemini missions, as well as NASA’s first human spaceflight program Mercury, yielded such amazing stories, I was thrilled to realize that a new book has recently been published—Gemini & Mercury Remastered—that brings them back to life in vivid color.

The book is a collection of 300 photographs from NASA’s Mercury and Gemini programs during the 1960s, in which Andy Saunders has meticulously restored the images and then deeply researched their background to more fully tell the stories behind them. The end result is a beautiful and powerful reminder of just how brave America’s first pioneers in space were. What follows is a lightly edited conversation with Saunders about how he developed the book and some of his favorite stories from it.

Ars: Why put out a book on Mercury and Gemini now?

Andy Saunders: Well, it’s the 60th anniversaries of the Gemini missions, but the book is really the prequel to my first book, Apollo Remastered. This is about the missions that came before. So it takes us right back to the very dawn of human space exploration, back to the very beginning, and this was always a project I was going to work on next. Because, as well as being obviously very important in spaceflight history, they’re very important in terms of human history, the human evolution, even, you know, the first time we were able to escape Earth.

For tens of thousands of years, civilizations have looked up and dreamt of leaving Earth and voyaging to the stars. And this golden era in the early 1960s is when that ancient dream finally became a reality. Also, of course, the first opportunity to look back at Earth and give us that unique perspective. But I think it’s really the photographs specifically that will just forever symbolize and document the beginning of our expansion out into the cosmos. You know, of course, we went to the Moon with Apollo. We’ll go back with Artemis. We spent long periods on the International Space Station. We’ll walk on Mars. We’ll eventually become a multi-planetary species. But this is where it all began and how it all began.

Ars: They used modified Hasselblad cameras during Apollo to capture these amazing images. What types of cameras were used during Mercury and Gemini?

Saunders: Mercury was more basic cameras. So on the very first missions, NASA didn’t want the astronaut to take a camera on board. The capsules were tiny. They were very busy. They’re very short missions, obviously very groundbreaking missions. So, the first couple of missions, there was a camera out of the porthole window, just taking photographs automatically. But it was John Glenn on his mission (Mercury-Atlas 6) who said, “No, I want to take a camera. People want to know what it’s going to be like to be an astronaut. They’re going to want to look at Earth through the window. I’m seeing things no human’s ever seen before.” So he literally saw a $40 camera in a drugstore on his way back from a haircut at Cocoa Beach. He thought, “That’s perfect.” And he bought it himself, and then NASA adapted it. They put a pistol grip on to help him use it. And with it, he took the first still photographs of Earth from space.

So it was the early astronauts that kind of drove the desire to take cameras themselves, but they were quite basic. Wally Schirra (Mercury-Atlas 8) then took the first Hasselblad. He wanted medium format, better quality, but really, the photographs from Mercury aren’t as stunning as Gemini. It’s partly the windows and the way they took the photos, and they’d had little experience. Also, preservation clearly wasn’t high up on the agenda in Mercury, because the original film is evidently in a pretty bad state. The first American in space is an incredibly important moment in history. But every single frame of the original film of Alan Shepard’s flight was scribbled over with felt pen, it’s torn, and it’s fixed with like a piece of sticky tape. But it’s a reminder that these weren’t taken for their aesthetic quality. They weren’t taken for posterity. You know, they were technical information. The US was trying to catch up with the Soviets. Preservation wasn’t high up on the agenda.

This is not some distant planet seen in a sci-fi movie, it’s our Earth, in real life, as we explored space in the 1960s. The Sahara desert, photographed from Gemini 11, September 14, 1966. As we stand at the threshold of a new space age, heading back to the Moon, onward to Mars and beyond, the photographs taken during Mercury and Gemini will forever symbolize and document the beginning of humankind’s expansion out into the cosmos. NASA / ASU / Andy Saunders

Ars: I want to understand your process. How many photos did you consider for this book?

Saunders: With Apollo, they took about 35,000 photographs. With Mercury and Gemini, there were about 5,000. Which I was quite relieved about.  So yeah, I went through all 5,000 they took. I’m not sure how much 16 millimeter film in terms of time, because it was at various frame rates, but a lot of 16 millimeter film. So I went through every frame of film that was captured from launch to splashdown on every mission.

Ars: Out of that material, how much did you end up processing?

Saunders: What I would first do is have a quick look, particularly if there’s apparently nothing in them, because a lot of them are very underexposed. But with digital processing, like I did with the cover of the Apollo book, we can pull out stuff that you actually can’t see in the raw file. So it’s always worth taking a look. So do a very quick edit, and then if it’s not of interest, it’s discarded. Or it might be that clearly an important moment was happening, even if it’s not a particularly stunning photograph, I would save that one. So I was probably down from 5,000 to maybe 800, and then do a better edit on it.

And then the final 300 that are in the book are those that are either aesthetically stunning, or they’re a big transformation, or they show something important that happened on the mission, or a historically significant moment. But also, what I want to do with the book, as well as showing the photographs, is tell the stories: these incredible human stories about the risks they were taking. So to do that, I effectively reconstructed every mission from launch to splashdown by using lots of different pieces of information in order to map the photography onto a timeline so that it can then tell the story through the captions. So a photograph might be in there simply to help tell part of the story.

Ars: What was your favorite story to tell?

Saunders: Well, perhaps in terms of a chapter and a mission, I’d say Gemini 4 is kind of the heart of the book. You know, first US space walk, quite a lot of drama occurred when they couldn’t close the hatch. There’s some quite poignant shots, particularly of Ed White, of course, who later lost his life in the Apollo 1 fire. But in terms of the story, I mean, Gemini 9A was just, there needs to be a movie about just Gemini 9A. Right from the start, from losing the prime crew, and then just what happened out on Gene Cernan’s EVA, how he got back into the capsule alive is quite incredible, and all this detail I’ve tried to cover because he took his camera. He called it “the spacewalk from hell.” Everything that could go wrong went wrong. He was incredibly exhausted, overheated. His visor steamed over. He went effectively blind, and he was at the back of the adapter section. This is at a point when NASA just hadn’t mastered EVA. So, simply how you maneuver in space, they just hadn’t mastered, so he was exhausted. He was almost blind. Then he lost communication with Tom Stafford, his command pilot. He tore his suit, because, of course, back then, there were all kinds of jagged parts on the spacecraft.

And then when he’s finally back in the hatch, he was quite a big chap, and they couldn’t close the hatch, so he was bent double trying to close the hatch. He started to see stars. He said, “Tom, if we don’t close this hatch now and re-pressurize, I am going to die.” They got it closed, got his helmet off, and Tom Stafford said he just looked like someone that had spent far too long in a sauna. Stafford sprayed him with a water hose to kind of cool him down. So what happened on that mission is just quite incredible. But there was something on every mission, you know, from the sinking of Gus Grissom’s Liberty Bell and him almost drowning, to the indicator that suggested the heat shield was loose on Glenn’s mission. There’s an image of that in the book. Like I said, I mapped everything to the timeline, and worked out the frame rates, and we’ve got the clock we can see over his shoulder. So I could work out exactly when he was at the point of maximum heating through reentry, when part of the strapping that kept the retro pack on (to try to hold the heat shield in place) hit the window, and he’s talking, but no one was listening, because it was during radio blackout.

After being informed his heat shield may have come loose, John Glenn is holding steadfast in the face of real uncertainty, as he observes the retro pack burn up outside his window, illuminating the cabin in an orange glow, during re-entry on February 20, 1962. “This is Friendship Seven. I think the pack just let go … A real fireball outside! … Great chunks of that retro pack breaking off all the way through!” Credit: NASA / Andy Saunders

The process I used for this, on the low-quality 16 mm film, was to stack hundreds and hundreds of frames to bring out incredible detail. You can almost see the pores in his skin. To see this level of detail, to me, it’s just like a portrait of courage. There he is, holding steadfast, not knowing if he’s about to burn up in the atmosphere. So that was quite a haunting image, if you like, to be able to help you step on board, you know, these tiny Mercury spacecraft, to see them, to see what they saw, to look out the windows and see how they saw it.
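Saunders doesn’t detail his exact pipeline, but the stacking technique he describes is classic frame averaging: random film grain and scan noise cancel across aligned frames while shared image detail reinforces, so signal-to-noise improves roughly with the square root of the frame count. A minimal sketch with synthetic data (not his actual workflow):

```python
import numpy as np

rng = np.random.default_rng(0)

# A synthetic "true" image plus heavy per-frame noise, standing in
# for aligned scans of low-quality 16 mm film.
true_frame = np.tile(np.linspace(50, 200, 8), (8, 1))
frames = [true_frame + rng.normal(0, 40, true_frame.shape)
          for _ in range(400)]

# Stacking is just averaging the aligned frames: random noise
# cancels while the shared detail reinforces (SNR ~ sqrt(N)).
stacked = np.mean(frames, axis=0)

single_err = np.abs(frames[0] - true_frame).mean()
stacked_err = np.abs(stacked - true_frame).mean()
# stacked_err comes out far smaller than single_err.
```

The hard part in practice is the alignment step itself; averaging misregistered frames blurs detail instead of recovering it.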

Ars: What was new or surprising to you as you spent so much time with these photos and looking at the details?

Saunders: The human side to them. Now that we can see them this clearly, they seem to have an emotional depth to them. And it’s that level of risk that they were taking. I think that’s what really hit home. The Earth shots are stunning. You know, you can almost feel the scale, particularly with a super wide lens, and the altitudes they flew to. And you can just imagine what it must have been like out on an EVA, for example. I think Gene Cernan said it was like sitting on God’s front porch, the view he had on his EVA. So those Earth shots are stunning, but it’s really the human side that hits home for me. I read every word of every transcript of every mission. All the conversations were recorded on tape between the air and the ground, and between the astronauts when they were out of ground contact, and reading those, it really hits home what they were doing. I found myself holding my breath, and, you know, my shoulders were stiff.

Ars: So what’s next? I mean, there’s only about 100 million photos from the Space Shuttle era.

Saunders: Thankfully, they weren’t all taken on film. So if I wanted to complete space on film, then what I haven’t yet done is Apollo-Soyuz, Skylab, and the first, whatever it is, 20 percent of the shuttle. So maybe that’s next. But I would just like a rest, because I’ve been doing this now since the middle of 2019, literally nonstop. It’s all I’ve done with Apollo and now Mercury and Gemini. The books make a really nice set in that they’re exactly the same size. So it covers the first view of the curvature of Earth and space right through to our last steps on the Moon.

Photo of Eric Berger

Eric Berger is the senior space editor at Ars Technica, covering everything from astronomy to private space to NASA policy, and author of two books: Liftoff, about the rise of SpaceX; and Reentry, on the development of the Falcon 9 rocket and Dragon. A certified meteorologist, Eric lives in Houston.


Education report calling for ethical AI use contains over 15 fake sources

AI language models like the kind that power ChatGPT, Gemini, and Claude excel at producing exactly this kind of believable fiction when they lack actual information on a topic, because they first and foremost produce plausible outputs, not accurate ones. If there are no patterns in the dataset that match what the user is seeking, the models will create the best approximation based on statistical patterns learned during training. Even AI models that can search the web for real sources can potentially fabricate citations, choose the wrong ones, or mischaracterize them.
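To make that failure mode concrete, here is a toy sketch (every author, topic, and journal below is invented for the demo): a generator that has learned only the shape of a citation will happily emit plausible-looking references with no grounding in any real bibliography.

```python
import random

random.seed(7)

# All names below are fabricated for the illustration.
authors = ["Smith, J.", "Chen, L.", "Garcia, M."]
topics = ["Ethical AI in Education", "Classroom Data Privacy",
          "Responsible Technology Use"]
journals = ["Journal of Learning Technology",
            "Review of Digital Pedagogy"]

def fabricate_citation():
    # Pattern-filling with no grounding in a real bibliography:
    # the output looks like a citation but cites nothing.
    return (f"{random.choice(authors)} ({random.randint(2015, 2024)}). "
            f"{random.choice(topics)}. {random.choice(journals)}.")

print(fabricate_citation())
```

Real language models operate on token statistics rather than templates, but the effect is the same: a well-formed reference that checks out on format and fails on existence.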

“Errors happen. Made-up citations are a totally different thing where you essentially demolish the trustworthiness of the material,” Josh Lepawsky, the former president of the Memorial University Faculty Association who resigned from the report’s advisory board in January, told CBC, citing a “deeply flawed process.”

The irony runs deep

The presence of potentially AI-generated fake citations becomes especially awkward given that one of the report’s 110 recommendations specifically states the provincial government should “provide learners and educators with essential AI knowledge, including ethics, data privacy, and responsible technology use.”

Sarah Martin, a Memorial political science professor who spent days reviewing the document, discovered multiple fabricated citations. “Around the references I cannot find, I can’t imagine another explanation,” she told CBC. “You’re like, ‘This has to be right, this can’t not be.’ This is a citation in a very important document for educational policy.”

When contacted by CBC, co-chair Karen Goodnough declined an interview request, writing in an email: “We are investigating and checking references, so I cannot respond to this at the moment.”

The Department of Education and Early Childhood Development acknowledged awareness of “a small number of potential errors in citations” in a statement to CBC from spokesperson Lynn Robinson. “We understand that these issues are being addressed, and that the online report will be updated in the coming days to rectify any errors.”


Jef Raskin’s cul-de-sac and the quest for the humane computer


“He wanted to make [computers] more usable and friendly to people who weren’t geeks.”

Consider the cul-de-sac. It leads off the main street past buildings of might-have-been to a dead-end disconnected from the beaten path. Computing history, of course, is filled with such terminal diversions, most never to be fully realized, and many for good reason. Particularly when it comes to user interfaces and how humans interact with computers, a lot of wild ideas deserved the obscure burials they got.

But some deserved better. Nearly every aspiring interface designer believed the way we were forced to interact with computers was limiting and frustrating, but one man in particular felt the emphasis on design itself missed the forest for the trees. Rather than drowning in visual metaphors or arcane iconographies doomed to be as complex as the systems they represented, he believed the way we interact with computers should stress functionality first, simultaneously considering both what users need to do and the cognitive limits they have. It was no longer enough that an interface be usable by a human—it must be humane as well.

What might a computer interface based on those principles look like? As it turns out, we already know.

The man was Jef Raskin, and this is his cul-de-sac.

The Apple core of the Macintosh

It’s sometimes forgotten that Raskin was the originator of the Macintosh project in 1979. Raskin had come to Apple with a master’s in computer science from Penn State University, six years as an assistant professor of visual arts at the University of California, San Diego (UCSD), and his own consulting company. Apple co-founder Steve Jobs subsequently hired Raskin’s company to write the Apple II’s BASIC programming manual, and Raskin joined Apple as manager of publications in 1978.

Raskin’s work on documentation and testing, combined with his technical acumen, gave him outsized influence within the young company. As the 40-column uppercase-only Apple II was ill-suited for Raskin’s writing, Apple developed a text editor and an 80-column display card, and Raskin leveraged his UCSD contacts to port UCSD Pascal and the p-System virtual machine to the Apple II when Steve Wozniak developed the Apple II’s floppy disk drives. (Apple sold this as Apple Pascal, and many landmark software programs like the Apple Presents Apple tutorial were written in it.)

But Raskin nevertheless concluded that a complex computer (by the standards of the day) could never exist in quantity, nor be usable by enough people to matter. In his 1979 essay “Computers by the Millions,” he argued against systems like the Apple II and the in-development Apple III that relied on expansion slots and cards for many advanced features. “What was not said was that you then had the rather terrible task of writing software to support these new ‘boards,’” he wrote. “Even the more sophisticated operating systems still required detailed understanding of the add-ons… This creates a software nightmare.”

Instead, he felt that “personal computers will be self-contained, complete, and essentially un-expandable. As we’ll see, this strategy not only makes it possible to write complete software but also makes the hardware much cheaper and producible.” Ultimately, Raskin believed, only a low-priced, low-complexity design could be manufactured in large enough numbers for a future world and be functional there.

The original Macintosh was designed as an embodiment of some of these concepts. Apple chairman Mike Markkula had a $500 (around $2,200 in 2025) game machine concept in mind called “Annie,” named after the Playboy comic character and intended as a low-end system paired with the Apple II—starting at around double that price at the time—and the higher-end Apple III and Lisa, which were then in development. Raskin wasn’t interested in developing a game console, but he did suggest to Markkula that a $500 computer could have more appeal, and he spent several months writing specifications and design documents for the proposed system before it was approved.

“My message,” wrote Raskin in The Book of Macintosh, “is that computers are easy to use, and useful in everyday life, and I want to see them out there, in people’s hands, and being used.” Finding female codenames sexist, he changed Annie to Macintosh after his favorite variety of apple, though using a variant spelling to avoid a lawsuit with the previously existing McIntosh Laboratory. (His attempt was ultimately for naught, as Apple later ended up having to license the trademark from the hi-fi audio manufacturer and then purchase it outright anyway.)

Raskin’s small team developed the hardware at Apple’s repurposed original Cupertino offices, separate from the main campus. Initially, he put together a rough all-in-one concept, originally based on an Apple II (reportedly serial number 2) with a “jury-rigged” monitor. This evolved into a prototype chiefly engineered by Burrell Smith, who selected the 8-bit Motorola 6809 as its CPU, an upgrade from the Apple II’s MOS 6502 that still kept costs low.

Similarly, a color display and a larger amount of RAM would have also added expense, so the prototype had a small 256×256 monochrome CRT driven by the ubiquitous Motorola 6845 CRTC, plus 64K of RAM. A battery and built-in printer were considered early on but ultimately rejected. The interface emphasized text and keyboard: There was no mouse, and the display was character-based instead of graphical.

Raskin was aware of early graphical user interfaces in development, particularly Xerox PARC’s, and he had even contributed to early design work on the Lisa, but he believed the mouse was inferior to trackballs and tablets and felt such pointing devices were more appropriate for graphics than text. Instead, function keys allowed the user to select built-in applications, and the machine could transparently shift between simple text entry or numeric evaluation in a “calculator-based language” depending on what the user was typing.

During the project’s development, Apple management had recurring concerns about its progress, and it was nearly canceled several times. This changed in late 1980 when Jobs was removed from the Lisa project by President Mike Scott, after which Jobs moved to unilaterally take over the Macintosh, which at that time was otherwise considered a largely speculative affair.

Raskin initially believed the change would be positive, as Jobs stated he was only interested in developing the hardware, and his presence and interest quickly won the team new digs and resources. New team member Bud Tribble suggested that the Macintosh should be able to take advantage of the Lisa’s powerful graphics routines by migrating to its Motorola 68000, and by February 1981, Smith had duly redesigned the prototype for the more powerful CPU while maintaining its lower-cost 8-bit data bus.

This new prototype expanded graphics to 384×256, allowed the use of more RAM, and ran at 8 MHz, making the prototype noticeably faster than the 5 MHz Lisa yet substantially cheaper. However, by sharing so much of Lisa’s code, the interface practically demanded a pointing device, and the mouse was selected, even though Raskin had so carefully tried to avoid it. (Raskin later said he did prevail with Jobs on the mouse only having one button, which he believed would be easier for novices, though other Apple employees like Larry Tesler have contested his influence on this decision.)

As Jobs started to take over more and more portions of the project, the two men came into more frequent conflict, and Raskin eventually quit Apple for good in March 1982. The extent of Raskin’s residual impact on the Macintosh’s final form is often debated, but the resulting 1984 Macintosh 128K is clearly a different machine from what Raskin originally envisioned. Apple acknowledged Raskin’s contributions in 1987 by presenting him with one of the six “millionth” Macintoshes, which he auctioned off in 1999 along with the Apple II used in the original concept.

A Swyftly tilting project

After Raskin’s departure from Apple, he established Information Appliance, Inc. in Palo Alto to develop his original concept on his own terms. By this time, it was almost a foregone conclusion that microcomputers would sooner or later make their way to everyone; indeed, home computer pioneers like Jack Tramiel’s Commodore were already selling inexpensive “computers by the millions”—literally. With the technology now evolving at a rapid pace, Raskin wanted to concentrate more on the user interface and the concept’s built-in functionality, reviving the ideas he believed had become lost in the Macintosh’s transition. He christened it with a new name: Swyft.

In terms of industrial design, the Swyft owed a fair bit to Raskin’s prior prototype: it was also an all-in-one machine with a built-in 9-inch monochrome CRT display. Unlike the Macintosh, however, the screen was set back at an angle and the keyboard was built in; a small handle at the base of the sloped keyboard made it at least notionally portable.

Disk technology had advanced, so it sported a 3.5-inch floppy drive (also like the Macintosh, albeit hidden behind a door), though initially the prototype used a less powerful 8-bit MOS 6502 CPU running at 2 MHz. The 6502’s 64K addressing limit and the additional memory banking logic it required eventually proved inadequate, and the CPU was changed during development to the Motorola 68008, a cheaper version of the 68000 with an 8-bit data bus and a maximum address space of 1 MB. Raskin intended the Swyft to act like an always-on appliance, always ready and always instant, so it had a low-power mode and absolutely no power switch.

Instead of Pascal or assembly language, Swyft’s ROM operating system was primarily written in Forth. To reduce the size of the compiled code, developer Terry Holmes created a “tokenized” version that embedded smaller tokens instead of execution addresses into Forth word definitions, trading the overhead of an additional lookup step (which was written in hand-coded assembly and made very quick) for a smaller binary size. This modified dialect was called tForth (for “token,” or “Terry”). The operating system supported the hardware and the demands of the on-screen bitmapped display, which could handle true proportional text.
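The space-for-speed trade Holmes made can be illustrated with a toy interpreter. The Python sketch below is purely illustrative and shares nothing with tForth’s actual implementation: compiled words hold small integer tokens rather than full execution addresses, and one extra table lookup per token stands in for the hand-coded assembly dispatch.

```python
# Illustrative token-threading sketch (invented names; not IAI's code).
# In address-threaded Forth, a compiled word is a list of execution addresses
# (full-width pointers). Token-threading stores a small integer per word
# instead, spending one extra table lookup at run time to shrink the binary.

primitives = []          # token -> callable (stands in for machine code)
dictionary = {}          # word name -> token

def defprim(name, fn):
    token = len(primitives)
    primitives.append(fn)
    dictionary[name] = token
    return token

stack = []
defprim("dup", lambda: stack.append(stack[-1]))
defprim("*",   lambda: stack.append(stack.pop() * stack.pop()))

def defword(name, source_words):
    # A colon definition compiles to a compact list of tokens...
    body = [dictionary[w] for w in source_words]
    def run():
        for token in body:        # ...decoded through one lookup per step
            primitives[token]()
    defprim(name, run)

defword("square", ["dup", "*"])   # : square dup * ;

stack.append(7)
primitives[dictionary["square"]]()
print(stack.pop())  # 49
```

The lookup table itself is tiny, so keeping the dispatch loop in tight assembly (as Holmes did) makes the per-token penalty nearly negligible.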

Swyft’s user interface was also radically different and was based on a “document” metaphor. Most computers of that time and today, mobile devices included, divide functionality among separate applications that access files. Raskin believed this approach was excessive and burdensome, writing in 1986 that “[b]y choosing to focus on computers rather than the tasks we wanted done, we inherited much of the baggage that had accumulated around earlier generations of computers. It is more a matter of style and operating systems that need elaborate user interfaces to support huge application programs.”

He expanded on this point in his 2000 book The Humane Interface: “[Y]ou start in the generating application. Your first step is to get to the desktop. You must also know which icons correspond to the desired documents, and you or someone else had to have gone through the steps of naming those documents. You will also have to know in which folder they are stored.”

Raskin thus conceived of a unified workspace in which everything was stored, accessed through one single interface appearing to the user as a text editor editing one single massive document. The editor was intelligent and could handle different types of text according to its context, and the user could subdivide the large document workspace into multiple subdocuments, all kept together. (This even included Forth code, which the user could write and evaluate in place to expand the system as they wished.) Data received from the serial port was automatically “typed” into the same document, and any or all text could be sent over the serial port or to a printer. Instead of function keys, a USE FRONT key acted like an Option or Command key to access special features.

Because everything was kept in one place, when the user saved the system state to a floppy disk, their entire workspace was frozen and stored in its entirety. Swyft additionally tagged the disk with a unique identifier so it knew when a disk was changed. When that disk was reinserted and resumed, the user picked up exactly where they left off, at exactly the same point, with everything they had been working on. Since everything was kept together and loaded en masse, there was no need for a filesystem.
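The save-and-resume model is simple enough to sketch. The format, field names, and use of JSON below are invented for illustration and bear no resemblance to IAI’s actual on-disk layout; only the two behaviors from the text are modeled: the whole workspace travels as one unit, and each disk carries a unique tag so a swap can be detected.

```python
# Illustrative sketch of "one disk = one workspace" with disk tagging.
import json
import uuid

def save_workspace(disk, workspace):
    # The entire state goes out in one piece; no filenames, no filesystem.
    if "disk_id" not in disk:
        disk["disk_id"] = str(uuid.uuid4())  # tag so a changed disk is noticed
    disk["image"] = json.dumps(workspace)
    return disk["disk_id"]

def resume_workspace(disk, last_disk_id):
    workspace = json.loads(disk["image"])
    swapped = disk["disk_id"] != last_disk_id  # a different disk was inserted
    return workspace, swapped

ws = {"text": "first document\n=====\nsecond document", "cursor": 27}
disk = {}
disk_id = save_workspace(disk, ws)
restored, swapped = resume_workspace(disk, disk_id)
print(restored["cursor"], swapped)  # 27 False
```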

Swyft also lacked a mouse—or indeed any conventional means of moving the cursor around. To navigate through the document, Swyft instead had LEAP keys, which when pressed alone would “creep” forward or backward by single characters. But when held down, you could type a string of characters and release the key, and the system would search forward or backward for that string and highlight it, jumping entire pages and subdocuments if necessary.

If you knew what was in a particular subdocument, you could find it or just LEAP forward to the next document marker to scan through what was there. Additionally, by leaping to one place, leaping again to another, and then pressing both LEAP keys together, you could select text as well. The steps to send, delete, change, or copy anything in the document are the same for everything in the document. “So the apparent simplicity [of other systems] is arrived at only after considerable work has been done and the user has shouldered a number of mental burdens,” wrote Raskin, adding, “the conceptual simplicity of the methods outlined here would be preferable. In most cases, the work required is also far less.”
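The LEAP mechanics described above reduce to two small operations over one flat text buffer. This Python sketch paraphrases the behavior from the article; the function names and failure handling are invented, and real key handling (incremental search while the key is held) is omitted.

```python
# Illustrative LEAP sketch over a flat workspace string.

def leap(text, cursor, pattern, backward=False):
    """Search for pattern from the cursor, crossing subdocuments freely."""
    if backward:
        pos = text.rfind(pattern, 0, cursor)
    else:
        pos = text.find(pattern, cursor + 1)
    return pos if pos != -1 else cursor   # on failure: beep, don't move

def select_between(text, first_leap, second_leap):
    # Pressing both LEAP keys together selects the span between two leaps.
    lo, hi = sorted((first_leap, second_leap))
    return text[lo:hi]

doc = "notes on the Swyft\n=====\nnotes on the Canon Cat"
a = leap(doc, 0, "Swyft")    # jump to a known string
b = leap(doc, a, "=====")    # jump to the next subdocument marker
print(a, b, repr(select_between(doc, a, b)))  # 13 19 'Swyft\n'
```

Because the search target doubles as the destination, navigation and selection use the same gesture, which is exactly the consistency Raskin was after.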

Get something on sale faster, said Tom Swyftly

While around 60 Swyft prototypes of varying functionality were eventually made, IAI’s backers balked at the several million dollars additionally required to launch the product under the company’s own name. To improve their chances of a return on investment, they instead demanded a licensee for the design, one that would insulate the small company from the costs of manufacturing and sales. They found it in Japanese manufacturer Canon, which had expanded from its core optical and imaging lines into microcomputers but had spent years unsuccessfully trying to crack the market. Canon, possibly because of the machine’s unusual interface, unexpectedly put its electronic typewriter division in charge of the project, and the IAI team began work with Canon’s engineers to refine the hardware for mass production.

SwyftCard advertisement in Byte, October 1985, with Jef Raskin and Steve Wozniak.

In the meantime, IAI investors prevailed upon management to find a way to release some of the Swyft technology early in a less expensive incarnation. This concept eventually turned into an expansion card for the Apple IIe. Raskin’s team was able to adapt some of the code written for the Swyft to the new device, but because the IIe is also a 6502-based system and is itself limited to a 64K address space, it required its own onboard memory banking hardware as well. With the card installed, the IIe booted into a scaled-down Swyft environment using its onboard 16K EPROM, with the option of disabling it temporarily to boot regular Apple software. Unlike the original Swyft, the Apple II SwyftCard does not use the bitmap display and appears strictly in 80-column non-proportional text. The SwyftCard went on sale in 1985 for $89.95, approximately $270 in 2025 dollars.

The initial SwyftCard tutorial page. Credit: Cameron Kaiser

The SwyftCard’s unified workspace can be subdivided into various “subdocuments,” which appear as hard page breaks with equals signs. Although up to 200 pages were supported, in practice, the available workspace limits you to about 15 or 20, “densely typed.” It came with a built-in tutorial which began with orienting you to the LEAP keys (i.e., the two Apple keys) and how to navigate: hold one of them down and type the text to leap to (or equals signs to jump to the next subdocument), or tap them repeatedly to slowly “creep.”

The two-tone cursor. Credit: Cameron Kaiser

Swyft and the SwyftCard implement a two-phased cursor, which the SwyftCard calls either “wide” or “narrow.” By default, the cursor is “narrow,” alternating between a solid and a partially filled block. As you type, the cursor splits into a “wide” form—any text shown in inverse, usually the last character you entered, is what is removed when you press DELETE, with the blinking portion after the inverse text indicating the insertion point. When you creep or leap, the cursor merges back into the “narrow” form. When narrow, DELETE deletes right as a true delete, instead of a backspace. If you selected text by pressing both LEAP keys together, those become highlighted in inverse and can be cut and pasted.

The SwyftCard software defines a USE FRONT key (i.e., the Control key) as well. Its most noticeable use was a quick key combination for saving your work to disk, which wrote the entire workspace in one go with no filenames (i.e., one disk equated to one workspace), though USE FRONT had many other functions within the program. Since it could be tricky to juggle floppies without overwriting them, the software also took pains to tag each formatted disk with a unique identifier to avoid accidental erasure. It also implemented serial communications such that you could dial up a remote system and send it text with USE FRONT-SEND, or be dialed into and receive text into the workspace automatically.

SwyftCards didn’t sell in massive numbers, but their users loved them, particularly the speed and flexibility the system afforded. David Thornburg (the designer of the KoalaPad tablet), writing for A+ in November 1985, said it “accomplished something that I never knew was possible. It not only outperforms any Apple II word-processing system, but it lets the Apple IIe outperform the Macintosh… Will Rogers was right: it does take genius to make things simple.”

The Swyft and SwyftCard, however, were as much philosophy as interface; they represented Raskin’s clear desire to “abolish the application.” Rather than starting a potentially different interface to do a particular task, the task should be part of the machine’s standard interface and be launched by direct command. Similarly, even within the single user interface, there should be no “modes” and no switching between different minor behaviors: the interface ought to follow the same rules as much of the time as possible.

“Modes are a significant source of errors, confusion, unnecessary restrictions, and complexity in interfaces,” Raskin wrote in The Humane Interface, illustrating it with the example of “at one moment, tapping Return inserts a return character into the text, whereas at another time, tapping Return causes the text typed immediately prior to that tap to be executed as a command.”

Even a device as simple as a push-button flashlight is modal, argued Raskin, because “[i]f you do not know the present state of the flashlight, you cannot predict what a press of the flashlight’s button will do.” Even if an individual application is itself notionally modeless, Raskin presented the real-world example of Command-N, commonly used to open a new document, versus AOL’s client using Command-M for a new email message; the situation “that gives rise to a mode in this example consists of having a particular application active. The problem occurs when users employ the Command-N command habitually,” he wrote.

Ultimately, wrote Raskin, “[a]n interface is humane if it is responsive to human needs and considerate of human frailties.” In this case, the particular frailty Raskin concentrated on is the natural unconscious human tendency to form habitual behaviors. Because such habits are hard to break, command actions and gestures in an interface should be consistent enough that their becoming habitual makes them more effective, allowing a user to “do the task without having to think about it… We must design interfaces that (1) deliberately take advantage of the human trait of habit development and (2) allow users to develop habits that smooth the flow of their work.” If a task is always accomplished the same way, he asserted, then when the user has acquired the habit of doing so, they will have simultaneously mastered that task.

The Canon Cat’s one and only life

Raskin’s next computer preserved many such ideas from the Swyft, but only in spite of the demands of Canon management, which forced multiple changes during development. Although the original Swyft (though not the SwyftCard) had true proportional text and at least the potential for user-created graphics, Canon’s electronic typewriter division, then in charge of the project, insisted on non-proportional fixed-width text and no graphics, because that was all the official daisywheel printer could generate—even though the system’s bitmapped display remained. (A laser printer option was later added but was nevertheless still limited to text.)

Raskin wanted to use a Mac-like floppy drive that could automatically detect disk insertion, but Canon required the system to use its own floppy drives, which couldn’t. Not every change during development was negative, though: much of the more complicated Swyft logic board was consolidated into smaller custom gate array chips for mass production, along with the use of a regular 68000 instead of the more limited 68008, which was also cheaper in volume despite only being run at 5 MHz.

However, against his repeated demands to the contrary and lengthy explanations of the rationale, Raskin was dismayed to find the device was nevertheless fitted with a power switch; Canon’s engineering staff said they simply thought an error had been made and added it, and by then, it was too late in development to remove it.

Canon management also didn’t understand the new machine’s design philosophy, treating it as an overgrown word processor (dubbed a “WORK Processor [sic]”) instead of the general-purpose computer Raskin intended, and required its programmability in Forth to be removed. This was unpopular with Raskin’s team, so rather than remove it completely, they simply hid it behind an unlikely series of keystrokes and excised it from the manual. On the other hand, because Canon considered it an overgrown word processor, it seemed entirely consistent to keep the Swyft’s primary interface intact otherwise, including its telecommunication features. The new system also got a new name: the Cat.

Canon Cat advertising brochure.

Thus was released the Canon Cat, announced in July 1987, for $1,495 (about $4,150 in 2025 dollars). The released version came with 256K of RAM, with sockets to add an optional 128K more for 384K total, shared between the video circuitry, Forth dictionary, settings, and document text, all of which could be stored to the 3.5-inch floppy. (Another row of solder pads could potentially hold yet another 128K, but no shipping Cat ever populated it.)

Its 256K of system ROM contained the entirety of the editor and tForth runtime, plus built-in help screens, all immediately available as soon as you turned it on. An additional 128K ROM provided a 90,000-word dictionary to which the user could add words that were also automatically saved to the same disk. The system and dictionary ROMs came in versions for US and UK English, French, and German.

The Canon Cat. Credit: Cameron Kaiser

Like the Swyft it was based on, the Cat was an all-in-one system. The 9-inch monochrome CRT was retained, but the floppy drive no longer had a door, and the keyboard was extended with several special keys. In particular, the LEAP keys, as befitting their central importance, were given a row to themselves in an eye-catching shade of pink.

Function key combinations with USE FRONT are printed on the front of the keycaps. The Cat provided both a 1200 baud modem and a 9600 bps RS-232 connector for serial data; it could dial out or be dialed into to upload text. Text transmitted to the Cat via the serial port was inserted into the document as if it had been typed at the console. A Centronics-style printer port connected Canon’s official printer options, though many other printers were compatible.

The Cat can be (imperfectly) emulated with MAME; the Internet Archive has a preconfigured Wasm version with Canon ROMs that you can also run in your browser. Note that the current MAME driver, as of this writing, will freeze if the emulated Cat makes a beep, and the ROM’s default keyboard layout assumes you’re using a real Cat, not a PC or Mac. These minor issues can be worked around in the emulated Cat’s setup menu by setting the problem signal to Flash (without a beep) and the keyboard to ASCII. The screenshots here are taken from MAME and adjusted to resemble the Cat’s display aspect ratio.

The Swyft and SwyftCard’s editing paradigm transferred to the Canon Cat nearly exactly. Preserved are the “wide” and “narrow” cursor, showing both the deletion range and the insertion point, and the use of the LEAP keys to creep, search, and select text ranges. (In MAME, the emulated LEAP keys are typically mapped to the Alt or Option keys.) SHIFT-LEAP can also be used to scroll the screen line by line, tapping LEAP repeatedly with SHIFT down to continue motion, and the Cat additionally implements a single level of undo with a dedicated UNDO key. The USE FRONT key also persisted, usually mapped in MAME to the Control key(s). Text could be bolded or underlined.

Similarly, the Cat inherits the same “multiple document interface” as the Swyfts: the workspace can be arbitrarily divided into documents, here using the DOCUMENT/PAGE key (mapped usually to Page Down in MAME), and the next or previous document can be LEAPed to by using the DOCUMENT/PAGE key as the target.

However, the Cat has an expanded interface compared to the SwyftCard, with a ruler (in character positions) at the bottom, text and keyboard modes, and open areas for on-screen indicators when disk access or computations are in progress.

Calculating data with the Canon Cat. Credit: Cameron Kaiser

Although Canon had mandated that the Cat’s programmability be suppressed, the IAI team nevertheless maintained the ability to compute expressions, which Canon permitted as an extension of the editor metaphor. Simple arithmetic such as 355/113 could be calculated in place by selecting the text and pressing USE FRONT-CALC (Control-G), which yields the answer with a dotted underline to indicate the result of a computation. (Here, the answer is computed to the default two decimal digits of precision, which is configurable.) Pressing USE FRONT-CALC within that answer reopens the expression to change it.

Computations weren’t merely limited to simple figures, though; the Cat also allowed users to store the result of a computation to a variable and reference that variable in other computations. If the variables underlying a particular computation were changed, its result would automatically update.

A spreadsheet built with expressions on the Cat. Credit: Cameron Kaiser

This capability, along with the Cat’s non-proportional font, made it possible to construct simple spreadsheets right in the editor using nothing more than expressions and the TAB key to create rows and columns. Cells can be referred to by expressions in other cells using a special function use() with relative coordinates. Constant values in “cells” can simply be entered as plain text; if recalculation is necessary, USE FRONT-CALC will figure it out. The Cat could also maintain and sort simple line lists, which, when combined with the LEARN macro facility, could be used to automate common tasks like mail merges.
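A rough sense of how relative references make such text-grid spreadsheets work can be given in a few lines. The semantics of the Cat’s real `use()` function are only approximated here; the grid representation and recalculation strategy are invented for illustration.

```python
# Hypothetical sketch of use()-style relative cell references.

def recalc(grid):
    """grid: list of rows; each cell is a number or a formula string."""
    def value(r, c):
        cell = grid[r][c]
        if isinstance(cell, str):
            # use(dr, dc) reads the cell at a relative row/column offset
            use = lambda dr, dc: value(r + dr, c + dc)
            return eval(cell, {"use": use})
        return cell
    return [[value(r, c) for c in range(len(grid[r]))]
            for r in range(len(grid))]

sheet = [
    [10, 20, "use(0,-2) + use(0,-1)"],   # sum of the two cells to the left
    [ 3,  4, "use(0,-2) * use(0,-1)"],   # product of the two cells to the left
]
print(recalc(sheet))  # [[10, 20, 30], [3, 4, 12]]
```

Because the references are relative, the same formula text can be copied down a column and keep working, which is what makes a plain text editor viable as a spreadsheet surface.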

The Canon Cat’s built-in on-line help facility. Credit: Cameron Kaiser

The Cat also maintained an extensive set of help screens built into ROM, which the SwyftCard, for capacity reasons, was forced to load from floppy disk. Almost every built-in function had a documentation screen accessible from USE FRONT-HELP (Control-N): keep USE FRONT down, release the N key, and then press another key to learn about it. When the USE FRONT key is also released, the Cat instantly returns to the editor. Similarly, if the Cat beeped to indicate an error, pressing USE FRONT-HELP could explain why. Errors didn’t trigger a modal dialog or lock out system functions; you could always continue.

Internally, the current workspace contained not only the visible text documents but also any custom words the user added to the dictionary and any additional tForth words defined in memory. Ordinarily, there wouldn’t be any, given that Canon didn’t officially permit the user to program their own software, but there were a very small number of software applications Canon itself distributed on floppy disk: CATFORM, which allowed the user to create, fill out, and print form templates, and CATFILE, Canon’s official mailing list application. Dealers were instructed to provide new users with copies, though the Cat here didn’t come with them. Dealers also had special floppies of their own for in-store demos and customization.

The backdoor to Canon Cat tForth. Credit: Cameron Kaiser

Still, IAI’s back door to Forth quietly shipped in every Cat, and the clue was a curious omission in the online help: USE FRONT-ANSWER. This otherwise unexplained and unused key combination was the gateway. If you entered the string Enable Forth Language, highlighted it, and evaluated it with USE FRONT-ANSWER (not CALC; usually Control-Backspace in MAME), you’d get a Forth ok prompt, and the system was now yours. Reset the Cat or type re to return to the editor.

With Forth enabled, you could either enter code at the prompt, or do so within the editor and press USE FRONT-ANSWER to evaluate it, putting any output into the document just like Applesoft BASIC did on the SwyftCard. Through the Forth interface it was possible to define your own words, saved as part of the workspace, or even hack in 68000 machine code and completely take control of the machine. Extensive documentation on the Cat’s internals eventually surfaced, but no third-party software was ever written for the platform during its commercial existence.

As it happened, whatever commercial existence the Cat did have turned out to be brief and unprofitable anyway. It sold badly, blamed in large part on Canon’s poor marketing, which positioned it as an expensive dedicated word processor in an era when general-purpose PCs and, yes, Macintoshes were getting cheaper and could do more.

Various apocryphal stories circulate about why the Cat was killed—one theory cites internal competition between the typewriter and computer divisions; another holds that Jobs demanded the Cat be killed if Canon wanted a piece of his new venture, NeXT (and Owen Linzmeyer reports that Canon did indeed buy a 16 percent stake in 1989)—but regardless of the reason, it lasted barely six months on the market before it was canceled. The 1987 stock market crash was a further blow to the small company and an additional strain on its finances.

Despite the Cat’s demise, Raskin’s team at IAI attempted to move forward with a successor machine, a portable laptop that would have reportedly weighed just four pounds. The new laptop, christened the Swyft III, used a ROM-based operating system based on the Cat’s but with a newer, more sophisticated “leaping” technology called Hyperleap. At $999, it was to include a 640×200 supertwist LCD, a 2400 bps modem and 512K of RAM (a smaller $799 Swyft I would have had less memory and no modem), as well as an external floppy drive and an interchange facility for file transfers with PCs and Macs.

As Raskin had originally intended, the device achieved its claimed six-hour battery life (on NiCad cells, longer on alkaline) primarily by sleeping aggressively when idle while resuming full functionality the instant a key was pressed. Only two prototypes were ever made before IAI’s investors, judging the company too risky after the Cat’s market failure and with little money coming in, finally pulled the plug, and the company shut down in 1992. Raskin retained patents on the “leaping” method and on the Swyft/Cat’s means of saving and restoring from disk, but subsequent licensees did little with the technology, and the patents have since lapsed.

If you can’t beat ’em, write software

The Cat is probably the best known of Raskin’s designs (notwithstanding the Macintosh, for reasons discussed earlier), especially as Raskin never led the development of another computer again. Nevertheless, his interface ideas remained influential, and after IAI’s closing, he continued as an author and frequent consultant and reviewer for various consumer products. These observations and others were consolidated into his later book The Humane Interface, from which this article has already liberally quoted. On the page before the table of contents, the book observes that “[w]e are oppressed by our electronic servants. This book is dedicated to our liberation.”

In The Humane Interface, Raskin not only discusses concepts such as leaping and habitual command behaviors but also means of quantitative assessment. One of the better known is Fitts’ law, named for psychologist Paul Fitts, Jr., which predicts that the time needed to quickly move to a target area is correlated with both the size of the target and its distance from the starting position.

This has been most famously used to justify the greater utility of a global menu bar completely occupying the edge of a screen (such as in macOS) because the mouse pointer stops at the edge, making the menu bar effectively infinitely large and therefore easy to “hit.” Similarly, Hick’s law (or the Hick-Hyman law, named for psychologists William Edmund Hick and Ray Hyman) asserts that increasing the number of choices a user is presented with will increase their decision time logarithmically. Given experimental constants, both laws can predict how long a user will need to hit a target or make a choice.
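In their commonly cited Shannon-style forms (with constants $a$ and $b$ fit experimentally for a given user and device), the two laws can be written as:

```latex
% Fitts' law: movement time to a target of width W at distance D
MT = a + b \log_2\!\left(\frac{D}{W} + 1\right)

% Hick-Hyman law: decision time among n equally probable choices
T = a + b \log_2(n + 1)
```

An edge-of-screen menu bar drives the effective $W$ toward infinity (the pointer cannot overshoot), which is why the logarithm, and hence the movement time, collapses toward its minimum.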

Notably, none of Raskin’s systems (at least as designed) superficially depended on either law because they had no explicit pointing device and no menus to select from. A more meaningful metric he also considers is the Card-Moran-Newell GOMS model (“goals, operators, methods, and selection rules”) and how it applies to user motion. While the time needed to mentally prepare, press a key, point to a particular position on the display, or move from input device to input device (say, mouse to-and-from keyboard) will vary from person to person, most users will have similar times, and general heuristics exist (e.g., nonsense is easier to type than structured data).

However, the length of time the computer takes to respond is within the designer’s control, and its perception can be reduced by giving prompt and accurate feedback, even if the operation’s actual execution time is longer. Similarly, if we reduce keystrokes or reduce having to move from mouse to keyboard for a given task, the total time to perform that task becomes less for any user.
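This kind of comparison is exactly what the keystroke-level variant of GOMS is for. The sketch below uses the classic Card-Moran-Newell average operator times; the two example operator sequences are invented here purely to show how two interfaces for the same task can be compared.

```python
# Keystroke-Level Model sketch using the classic Card-Moran-Newell
# average operator times, in seconds (individual users vary).

KLM = {
    "K": 0.2,    # keystroke (average skilled typist)
    "P": 1.1,    # point with a mouse
    "H": 0.4,    # home hands between keyboard and mouse
    "M": 1.35,   # mental preparation
}

def klm_time(ops):
    """Total predicted time for a string of KLM operators."""
    return sum(KLM[op] for op in ops)

# Two hypothetical ways to perform the same small task:
mouse_path    = "MHPHKK"   # think, reach for mouse, point, return, two keys
keyboard_path = "MKKKK"    # think, then a four-keystroke command
print(round(klm_time(mouse_path), 2), round(klm_time(keyboard_path), 2))
```

The homing operators (`H`) are what penalize mixed mouse-and-keyboard workflows, which is one quantitative argument for keyboard-centric designs like the Cat’s.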

Although these timings can help to determine experimentally which interface is better for a given task, Raskin points out we can use the same principles to also determine the ideal efficiency of such interfaces. An interface that gives the user no choices but still must be interacted with is maximally inefficient because the user must do some non-zero amount of work to communicate absolutely no information.

A classic example might be a modal alert box with only one button—asynchronous or transparent notifications could be better used instead. Likewise, an interface with multiple choices nevertheless becomes less efficient if certain choices are harder or less likely to be accessed, such as buttons or click areas that are smaller than others, or a choice that requires more typing to select than the alternatives.
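Raskin quantifies this with an information-theoretic measure: the bits of information the interface actually needs, divided by the bits of effort the user must supply. The sketch below is an illustrative rendering of that idea, not code from his book; the function names are invented.

```python
import math

def info_content(probability):
    # Shannon information of one choice, in bits: a choice made with
    # probability p carries -log2(p) bits.
    return -math.log2(probability)

def interface_efficiency(needed_bits, supplied_bits):
    # Efficiency in [0, 1]: information the interface requires divided
    # by information-equivalent effort the user must expend.
    if supplied_bits == 0:
        return 1.0 if needed_bits == 0 else 0.0
    return needed_bits / supplied_bits

# A one-button modal alert conveys zero bits yet demands a click:
alert_efficiency = interface_efficiency(0.0, 1.0)       # 0.0
# Two equally likely buttons, chosen with a single one-bit action:
dialog_efficiency = interface_efficiency(info_content(0.5), 1.0)  # 1.0
```

On this scale, the one-button alert is maximally inefficient no matter how it is styled, which is why Raskin argues for eliminating such interactions rather than polishing them.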

Raskin’s book also considers alternative means of navigation, pointing out that “natural” and “intuitive” are not necessarily synonyms for “easy to use.” (A mouse can be easy to use, but it’s not necessarily natural or intuitive. Recall Scotty in Star Trek IV picking up the Macintosh Plus mouse and talking to it instead of trying to move it, and then eventually having to use the keyboard. Raskin cites this very scene, in fact.)

Besides leaping, Raskin also presents the idea of a zooming user interface (ZUI), which gives the user an easier way not only to reach their goal but also to see themselves in relation to that goal and within the entire workspace. If you see what you want, zoom in. If you’ve lost your place, zoom out. One could access a filesystem this way, or a collection of applications or associated websites. Raskin was hardly the first to propose the ZUI—Ivan Sutherland developed a primitive one for graphics in his 1962 Sketchpad, and precedents include MIT’s Spatial Dataland and Xerox PARC’s Smalltalk with its “infinite” desktops—but he recognized its unique ability to keep a user mentally grounded while navigating large structures that would otherwise become unwieldy. This, he asserts, made it more humane.
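Mechanically, a ZUI is little more than a scale-and-translate mapping from an unbounded workspace onto the screen. The minimal sketch below (class and method names, and the screen size, are invented for illustration) shows why zooming preserves the user's sense of place: only the scale changes, never an object's position in the world.

```python
from dataclasses import dataclass

@dataclass
class Viewport:
    cx: float = 0.0     # world x-coordinate shown at screen center
    cy: float = 0.0     # world y-coordinate shown at screen center
    scale: float = 1.0  # screen pixels per world unit

    def world_to_screen(self, x, y, w=800, h=600):
        # Map a point in the "infinite" workspace to screen pixels.
        return ((x - self.cx) * self.scale + w / 2,
                (y - self.cy) * self.scale + h / 2)

    def zoom(self, factor):
        # Zooming about the screen center only rescales; the focused
        # world point (cx, cy) stays put, keeping the user grounded.
        self.scale *= factor

vp = Viewport()
vp.zoom(2.0)
# A point one world unit right of center now lands 2 px right of center.
center_plus_one = vp.world_to_screen(1, 0)
```

A real implementation would also pan (move `cx`/`cy`) and zoom about the pointer rather than the center, but the grounding property — zoom never rearranges the world — is the essence of what Raskin valued.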

To crystallize these concepts, rather than create another new computer, Raskin instead started work on a software package with a team that included his son, Aza, initially called The Humane Environment. THE’s HumaneEditorProject was first unveiled to the world on Christmas Eve 2002, though initially only as a SourceForge CVS tree, since it was considered very unfinished. These original early builds of the Humane Editor were open source and intended to run on classic Mac OS 9, though they will also run under emulators such as QEMU and SheepShaver, or in the Classic environment under Mac OS X Tiger and earlier.

Default document. Credit: Cameron Kaiser

As before, the Humane Editor uses a large central workspace subdivided into individual documents, here separated by backtick characters. Our familiar two-tone cursor is also maintained. However, although boldface, italic, and underlining were supported, colors and font sizes were still selected through traditional Mac pulldown menus.

Leaping with the SHIFT and angle bracket keys. Credit: Cameron Kaiser

Leaping, here branded with a trademark, is again front and center in THE. However, instead of dedicated keys, leaping is merely one part of THE’s internal command line, termed the Humane Quasimode, through which other commands can also be sent. Notice that the prompt is displayed as translucent text over the work area.

The Deletion Document. Credit: Cameron Kaiser

When text was deleted, either by backspacing over it or pressing DELETE with a region selected, it went to an automatically created and maintained “DELETION DOCUMENT” from which it could be rescued. Effectively, this kept a yank buffer in the workspace alongside all your documents, and undoing any destructive editing operation thus became merely another cut and paste. (Deleting from the deletion document just deleted.)

Command listing. Credit: Cameron Kaiser

A full list of commands accepted by the Quasimode was available by typing COMMANDS, which in turn emitted the list to the document. The commands were based on precompiled Python files, which the user could edit or add to, and arbitrary Python expressions and code could also be inserted and run directly from the document workspace.
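This architecture — commands as user-editable Python, executed on demand — can be approximated in a few lines. The following is a hypothetical sketch of the approach, not THE's actual internals; `commands`, `output`, and `run_command` are invented names.

```python
output_lines = []

def output(text):
    # Stand-in for emitting text into the document workspace.
    output_lines.append(text)

# Each command is a small piece of editable Python source. Because the
# registry itself is visible to the commands, COMMANDS can list them.
commands = {
    "COMMANDS": "output('\\n'.join(sorted(commands)))",
    "HELLO": "output('hello from an editable command')",
}

def run_command(name):
    # Execute the command body with access to the registry and output.
    exec(commands[name], {"commands": commands, "output": output})

run_command("COMMANDS")                     # emits the command list
commands["HELLO"] = "output('redefined!')"  # the user edits a command
run_command("HELLO")                        # new behavior takes effect
```

Since redefining a built-in is just replacing a string, the editor and its command set live in the same editable space — the property that later let Archy treat commands as ordinary subdocuments.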

THE was a fully functioning editor, albeit incomplete, but nevertheless capable enough to write its own documentation with. Despite that, the intention was never to make something that was just an editor, and this aspiration became more obvious as development progressed. To make the software available on more platforms, development subsequently moved to wxPython in 2004, and later to Pygame for handling the screen display. The main development platform switched at the same time to Windows, and a Windows demo version of this release was made, although Mac OS X and Linux could still theoretically run it if you installed the prerequisites.

With the establishment of the Raskin Center for Humane Interfaces (RCHI), THE’s development continued under a new name, Archy. (This Wayback Machine link is the last version of the site before it was defaced and eventually domain-parked.) The new name was both a pun on “RCHI” and a reference to the Don Marquis characters Archy and Mehitabel, specifically Archy the typewriting cockroach, whose alleged writings largely lack capital letters and punctuation because he couldn’t press the SHIFT key and a letter at the same time. Archy’s final release, shown here, was the unfinished build 124, dated December 15, 2005.

The initial Archy window. Credit: Cameron Kaiser

Archy had come a long way from the original Mac THE, finally including the same sort of online help tutorial that the SwyftCard and Cat featured. It continued the use of a dedicated key to enter commands—in this case, CAPS LOCK. Hold it down, type the command, and then release it.

Leaping in Archy. Credit: Cameron Kaiser

Likewise, dedicated LEAP keys returned in Archy, in this case the left and right Alt keys, and as before, selection was done by pressing both LEAP keys. A key advancement here was that any text that would be selected, if you chose to select it, was highlighted beforehand in a light shade of yellow, so you no longer had to remember where your ranges were.
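The prospective highlight is straightforward to model: each keystroke of the leap pattern recomputes where the cursor would land and which range would then be selected, and the UI tints that range before the user commits. A hypothetical sketch, not Archy's actual code:

```python
def leap_forward(text, cursor, pattern):
    # Find the next occurrence of the pattern past the cursor.
    # Returns (new_cursor, range_that_would_be_selected), or None if
    # there is no match; the range is what the UI would tint yellow.
    hit = text.find(pattern, cursor + 1)
    if hit == -1:
        return None
    return hit, (cursor, hit + len(pattern))

text = "the quick brown fox jumps over the lazy dog"
result = leap_forward(text, 0, "fox")  # lands on "fox" at index 16
```

A full implementation would re-run this incrementally as the pattern grows or shrinks, and support a backward variant for the other LEAP key, but the essential trick — computing the would-be selection eagerly — is all the highlight requires.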

A list of commands in Archy. Credit: Cameron Kaiser

As before, the COMMANDS verb gave you a list of commands. While THE’s command suite was almost entirely specific to an editor application, Archy’s aspirations as a more complete all-purpose environment were evident. In particular, in addition to many of the same commands we saw on the Mac, there were now special Internet-oriented commands like EMAIL and GOOGLE. These commands were now just small documents containing Python embedded in the same workspace—no more separate files you had to corral. You could even change the built-in commands, including LEAP itself.

As you might expect, besides the deletion document (now just “DELETIONS”), things like your email were also now subdocuments, and your email server settings were a subdocument, too. While this was never said explicitly, a logical extension of the metaphor would have been to subsume webpage contents as in-place parts of the workspace as well—your history, bookmarks, and even the pages themselves could be subdocuments of their own, restored immediately and ready for access when entering Archy. Each time you exited, the entire workspace was saved out into a versioned file, so you could even go back in time to a recent backup if you blew it.

Raskin’s legacy

Raskin was found to have pancreatic cancer in December 2004 and, after transitioning the project to become Archy the following January, died shortly afterward on February 26, 2005. In Raskin’s New York Times obituary, Apple software designer Bill Atkinson lauded his work, saying, “He wanted to make them [computers] more usable and friendly to people who weren’t geeks.” Technology journalist Steven Levy agreed, adding that “[h]e really spent his life urging a degree of simplicity where computers would be not only easy to use but delightful.” He left behind his wife Linda Blum and his three children, Aza, Aviva, and Aenea.

Archy was the last project Raskin was directly involved in, and to date it remains unfinished. Some work continued on the environment after his death—this final release came out in December 2005, nearly 10 months later—but the project was ultimately abandoned, and many planned innovations, such as a ZUI of its own, were never fully developed beyond a separate proof of concept.

Similarly, many of Raskin’s more distinctive innovations have yet to reappear in modern mainstream interfaces. RCHI closed as well and was succeeded in spirit by the Chicago-based Humanized, co-founded by his son Aza. Humanized reworked ideas from Archy into Enso, which expanded the CAPS LOCK-as-command interface with a variety of verbs such as OPEN (to start applications) and DEFINE (to get the dictionary definition of a word), along with the ability to perform direct web searches.

By using a system-wide translucent overlay similar to Archy and THE, the program was intended to minimize the need for switching back and forth between multiple applications to complete a task. In 2008, Enso was made free for download, and Humanized’s staff joined Mozilla, where the concept became a Firefox browser extension called Ubiquity, in which web-specific command verbs could be written in JavaScript and executed in an opaque pop-up window activated by a hotkey combination. However, the project was placed on “indefinite hiatus” in 2009 and was never revisited, and it no longer works with current versions of the browser.

Using Raskin 2 on a MacBook Air to browse images. Credit: Cameron Kaiser

The idea of a single workspace that you “leap through” also never resurfaced. Likewise, although ZUI-like animations have appeared more or less as eye candy in environments such as iOS and GNOME, a pervasive ZUI has yet to appear in (or as) any major modern desktop environment. That said, the idea is visually appealing, and some specific applications have made heavier use of the concept.

Microsoft’s 2007 Deepfish project for Windows Mobile conceived of visually shrunken webpages for mobile devices that users could zoom into, but it was dependent on a central server and had high bandwidth requirements, and Microsoft canceled it in 2008. A Swiss company named Raskin Software LLC (apparently no official relation) offers a macOS ZUI file and media browser called Raskin, which has free and paid tiers; on other platforms, the free open-source Eagle Mode project offers a similar file manager with media previews, but also a chess application, a fractal viewer, and even a Linux kernel configuration tool.

A2 desktop with installer, calendar and clock. Credit: LoganJustice via Wikimedia (CC0)

Perhaps the most complete example of an operating environment built around a ZUI might be A2, a branch of the ETH-Zürich Oberon System. The Oberon System, based around the Oberon programming language descended from Modula-2 and Pascal, was already notable for its unique paneled text user interface, where text is clickable, including text you type; Native Oberon can be booted directly as an operating system by itself.

In 2002, A2 spun off initially as Active Object System, using an updated dialect called Active Oberon supporting improved scheduling, exception handling, and object-oriented programming with processes and threads able to run within an object’s context to make that object “active.” While A2 kept the Oberon System’s clickable text metaphor, windows and gadgets can also be zoomed in or out of on an infinitely scrolling desktop, which is best appreciated in action. It is still being developed, and older live CDs are still available. However, the Oberon System has never achieved general market awareness beyond its small niche, and any forks less so, limiting it to a practical curiosity for most users.

This isn’t to say that Raskin’s quest for a truly humane computer has completely come to naught. Unfortunately, in some respects, we’re truly backsliding, with opaque operating systems that can limit your application choices or your ability to alter or customize them, and despite very public changes in skinning and aesthetics, the key ways that we interact with our computers have not substantially changed since the wide deployment of the Xerox PARC-derived “WIMP” paradigm (windows, icons, menus and pointers)—ironically most visibly promoted by the 1984 post-Raskin Macintosh.

A good interface unavoidably requires work and study, two things that take too long in today’s fast-paced product cycle. Furthermore, Raskin’s emphasis on built-in programmability rings a bit quaint in our era, when many home users’ only computer may be a tablet. By his standards, there is little humane about today’s computers, and they may well be less humane than yesterday’s.

Nevertheless, while Raskin’s ideas may have few present-day implementations, that doesn’t mean the spirit in which they were proposed is dead, too. At the very least, greater consideration is given today to the traditional WIMP paradigm’s deficiencies, particularly with multiple applications and windows, and to how it can poorly serve some classes of users, such as those requiring assistive technology. That said, I hold guarded optimism about how much change we’ll see in mainstream systems, and Raskin’s editor-centric, application-less interface becomes more and more alien as the current app ecosystem grows more dominant.

But as cul-de-sacs go, you can pick far worse places to get lost in than his, and it might even make it out to the main street someday. Until then, at least, you can always still visit—in an upcoming article, we’ll show you how.

Selected bibliography

Folklore.org

CanonCat.net

Linzmayer, Owen W. (2004). Apple Confidential 2.0. No Starch Press, San Francisco, CA.

Raskin, Jef (2000). The Humane Interface: New Directions for Designing Interactive Systems. Addison-Wesley, Boston, MA.

Making the Macintosh: Technology and Culture in Silicon Valley. https://web.stanford.edu/dept/SUL/sites/mac/earlymac.html

Canon’s Cat Computer: The Real Macintosh. https://www.landsnail.com/apple/local/cat/canon.html

Prototype to the Canon Cat: the “Swyft.” https://forum.vcfed.org/index.php?threads/prototype-to-the-canon-cat-the-swyft.12225/

Apple //e and Cat. http://www.regnirps.com/Apple6502stuff/apple_iie_cat.htm

Jef Raskin’s cul-de-sac and the quest for the humane computer


Court rejects Verizon claim that selling location data without consent is legal

Instead of providing notice to customers and obtaining or verifying customer consent itself, Verizon “largely delegated those functions via contract,” the court said. This system and its shortcomings were revealed in 2018 when “the New York Times published an article reporting security breaches involving Verizon’s (and other major carriers’) location-based services program,” the court said.

Securus Technologies, a provider of communications services to correctional facilities, “was misusing the program to enable law enforcement officers to access location data without customers’ knowledge or consent, so long as the officers uploaded a warrant or some other legal authorization,” the ruling said. A Missouri sheriff “was able to access customer data with no legal process at all” because Securus did not review the documents that law enforcement uploaded.

Verizon claimed that Section 222 of the Communications Act covers only call-location data, as opposed to device location data. The court disagreed, pointing to the law’s text stating that customer proprietary network information includes data that is related to the location of a telecommunications service, and which is made available to the carrier “solely by virtue of the carrier-customer relationship.”

“Device-location data comfortably satisfies both conditions,” the court said.

Verizon chose to pay fine, giving up right to jury trial

As for Verizon’s claim that the FCC violated its right to a jury trial, the court said that “Verizon could have gotten such a trial” if it had “declined to pay the forfeiture and preserved its opportunity for a de novo jury trial if the government sought to collect.” Instead, Verizon chose to pay the fine “and seek immediate review in our Court.”

By contrast, the 5th Circuit decision in AT&T’s favor said the FCC “acted as prosecutor, jury, and judge,” violating the right to a jury trial. The 5th Circuit said it was guided by the Supreme Court’s June 2024 ruling in Securities and Exchange Commission v. Jarkesy, which held that “when the SEC seeks civil penalties against a defendant for securities fraud, the Seventh Amendment entitles the defendant to a jury trial.”

The 2nd Circuit ruling said there are key differences between US telecom law and the securities laws considered in Jarkesy. It’s because of those differences that Verizon had the option of declining to pay the penalty and preserving its right to a jury trial, the court said.

In the Jarkesy case, the problem “was that the SEC could ‘siphon’ its securities fraud claims away from Article III courts and compel payment without a jury trial,” the 2nd Circuit panel said. “The FCC’s forfeiture order, however, does not, by itself, compel payment. The government needs to initiate a collection action to do that. Against this backdrop, the agency’s proceedings before a § 504(a) trial create no Seventh Amendment injury.”
