Features

Tool preventing AI mimicry cracked; artists wonder what’s next

Aurich Lawson | Getty Images

For many artists, it’s a precarious time to post art online. AI image generators keep getting better at cheaply replicating a wider range of unique styles, and basically every popular platform is rushing to update user terms to seize permissions to scrape as much data as possible for AI training.

Defenses against AI training exist—like Glaze, a tool that adds a small amount of imperceptible-to-humans noise to images to stop image generators from copying artists’ styles. But they don’t provide a permanent solution at a time when tech companies appear determined to chase profits by building ever-more-sophisticated AI models that increasingly threaten to dilute artists’ brands and replace them in the market.
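Glaze’s actual cloaks are computed by optimizing that noise against an image model’s style features, so a real implementation is far more involved. Still, the “small, imperceptible, bounded” part of the idea can be shown in a few lines; in this minimal sketch, the epsilon value and the random (rather than optimized) noise are illustrative assumptions:

```python
# Illustration only: real Glaze *optimizes* its perturbation against an
# image model's style features, so random noise like this would not stop
# mimicry. The sketch just shows what a small, bounded, per-pixel change
# looks like; the epsilon of 2/255 is an arbitrary example value.
import numpy as np

def add_bounded_noise(image: np.ndarray, epsilon: float = 2 / 255) -> np.ndarray:
    """Perturb an image (values in [0, 1]) by at most epsilon per pixel."""
    noise = np.random.uniform(-epsilon, epsilon, size=image.shape)
    return np.clip(image + noise, 0.0, 1.0)

artwork = np.random.rand(512, 512, 3)      # stand-in for an artist's image
cloaked = add_bounded_noise(artwork)
print(np.abs(cloaked - artwork).max())     # ~0.0078: invisible to a human eye
```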

In one high-profile example just last month, the estate of Ansel Adams condemned Adobe for selling AI-generated images that imitated the famous photographer’s style, Smithsonian reported. Adobe quickly responded and removed the AI copycats. But it’s not just famous artists who risk being ripped off, and lesser-known artists may struggle to prove that AI models are referencing their works. In this largely lawless world, every image uploaded risks contributing to an artist’s downfall, potentially watering down demand for their own work each time they promote new pieces online.

Unsurprisingly, artists have increasingly sought protections to diminish or dodge these AI risks. As tech companies update their products’ terms—like when Meta suddenly announced last December that it was training AI on a billion Facebook and Instagram user photos—artists frantically survey the landscape for new defenses. That’s why The Glaze Project, one of the few sources of AI protections available today, recently reported a dramatic surge in requests for its free tools.

Designed to help prevent style mimicry and even poison AI models to discourage data scraping without an artist’s consent or compensation, The Glaze Project’s tools are now in higher demand than ever. University of Chicago professor Ben Zhao, who created the tools, told Ars that the backlog for approving a “skyrocketing” number of requests for access is “bad.” And as he recently posted on X (formerly Twitter), an “explosion in demand” in June is only likely to be sustained as AI threats continue to evolve. For the foreseeable future, that means artists searching for protections against AI will have to wait.

Even if Zhao’s team did nothing but approve requests for WebGlaze, its invite-only web-based version of Glaze, “we probably still won’t keep up,” Zhao said. He’s warned artists on X to expect delays.

Compounding artists’ struggles, just as demand for Glaze is spiking, the tool has come under attack from security researchers who claimed that bypassing its protections was not only possible but easy. For security researchers and some artists, this attack calls into question whether Glaze can truly protect artists in these embattled times. But for thousands of artists joining the Glaze queue, the long-term future looks so bleak that any promise of protection against mimicry seems worth the wait.

Attack cracking Glaze sparks debate

Millions have downloaded Glaze already, and many artists are waiting weeks or even months for access to WebGlaze, mostly submitting requests for invites on social media. The Glaze Project vets every request to verify that each user is human and ensure bad actors don’t abuse the tools, so the process can take a while.

The team is currently struggling to approve hundreds of requests submitted daily through direct messages on Instagram and Twitter in the order they are received, and artists requesting access must be patient through prolonged delays. Because these platforms’ inboxes aren’t designed to sort messages easily, any artist who follows up on a request gets bumped to the back of the line—as their message bounces to the top of the inbox and Zhao’s team, largely volunteers, continues approving requests from the bottom up.
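A toy model of that inbox dynamic (with invented artist names) shows why following up backfires:

```python
# Toy model: the inbox shows newest activity first, but the volunteers
# approve from the oldest end. Sending a follow-up refreshes a request's
# activity, which is exactly what moves it to the back of the queue.
inbox = ["artist_A", "artist_B", "artist_C"]  # oldest activity first

def follow_up(who: str) -> None:
    inbox.remove(who)    # the thread gets fresh activity...
    inbox.append(who)    # ...so it now sorts as the newest message

def approve_next() -> str:
    return inbox.pop(0)  # volunteers work from the oldest message up

follow_up("artist_A")
print(approve_next())    # artist_B is approved; artist_A now waits longest
```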

“This is obviously a problem,” Zhao wrote on X while discouraging artists from sending any follow-ups unless they’ve already gotten an invite. “We might have to change the way we do invites and rethink the future of WebGlaze to keep it sustainable enough to support a large and growing user base.”

Glaze interest is likely also spiking due to word of mouth. Reid Southen, a freelance concept artist for major movies, is advocating for all artists to use Glaze. Southen told Ars that WebGlaze is especially “nice” because it’s “available for free for people who don’t have the GPU power to run the program on their home machine.”


Surface Pro 11 and Laptop 7 review: An Apple Silicon moment for Windows

Microsoft’s Surface Pro 11, the first flagship Surface to ship exclusively using Arm processors.

Andrew Cunningham

Microsoft has been trying to make Windows on Arm processors a thing for so long that, at some point, I think I just started assuming it was never actually going to happen.

The first effort was Windows RT, which managed to run well enough on the piddly Arm hardware available at the time but came with a perplexing new interface and couldn’t run any apps designed for regular Intel- and AMD-based Windows PCs. Windows RT failed, partly because a version of Windows that couldn’t run Windows apps and didn’t use a familiar Windows interface was ignoring two big reasons why people keep using Windows.

Windows-on-Arm came back in the late 2010s, with better performance and a translation layer for 32-bit Intel apps in tow. This version of Windows, confined mostly to oddball Surface hardware and a handful of barely promoted models from the big PC OEMs, has quietly percolated for years. It has improved slowly and gradually, as have the Qualcomm processors that have powered these devices.

That brings us to this year’s flagship Microsoft Surface hardware: the 7th-generation Surface Laptop and the 11th-generation Surface Pro.

These devices are Microsoft’s first mainstream, flagship Surface devices to use Arm chips, whereas previous efforts have been side projects or non-default variants. Both hardware and software have improved enough that I finally feel I could recommend a Windows-on-Arm device to a lot of people without having to preface it with a bunch of exceptions.

Unfortunately, Microsoft has chosen to launch this impressive and capable Arm hardware and improved software alongside a bunch of generative AI features, including the Recall screen recorder, a feature that became so radioactively unpopular so quickly that Microsoft was forced to delay it to address major security problems (and perception problems stemming from the security problems).

The remaining AI features are so superfluous that I’ll ignore them in this review and cover them later on when we look closer at Windows 11’s 24H2 update. This is hardware that is good enough that it doesn’t need buzzy AI features to sell it. Windows on Arm continues to present difficulties, but the new Surface Pro and Surface Laptop—and many of the other Arm-based Copilot+ PCs that have launched in the last couple of weeks—are a whole lot better than Arm PCs were even a year or two ago.

Familiar on the outside

The Surface Laptop 7 (left) and Surface Pro 11 (right) are either similar or identical to their Intel-powered predecessors on the outside.

Andrew Cunningham

When Apple released the first couple of Apple Silicon Macs back in late 2020, the one thing the company pointedly did not change was the exterior design. Apple didn’t comment much on it at the time, but the subliminal message was that these were just Macs, they looked the same as other Macs, and there was nothing to worry about.

Microsoft’s new flagship Surface hardware, powered exclusively by Arm-based chips for the first time rather than a mix of Arm and Intel/AMD, takes a similar approach: inwardly overhauled, externally unremarkable. These are very similar to the last (and the current) Intel-powered Surface Pro and Surface Laptop designs, and in the case of the Surface Pro, they actually look identical.

Both PCs still include some of the defining elements of Surface hardware designs. Both have screens with 3:2 aspect ratios that make them taller than most typical laptop displays, which still use 16:10 or 16:9 aspect ratios. Those screens also support touch input via fingers or the Surface Pen, and they still use gently rounded corners (which Windows doesn’t formally support in software, so the corners of your windows will get cut off, not that it has ever been a problem for me).
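For the same diagonal size, a 3:2 panel really is noticeably taller. A quick calculation makes the point; the 13-inch diagonal here is an arbitrary example size, not a Surface spec:

```python
# At the same diagonal, a 3:2 panel is taller than 16:10 or 16:9 panels.
from math import hypot

def panel_height(diagonal_in: float, w: int, h: int) -> float:
    """Height of a w:h panel with the given diagonal, in inches."""
    return diagonal_in * h / hypot(w, h)

for w, h in [(3, 2), (16, 10), (16, 9)]:
    print(f"{w}:{h} -> {panel_height(13, w, h):.2f} inches tall")
# 3:2 -> 7.21, 16:10 -> 6.89, 16:9 -> 6.37
```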


30 years later, FreeDOS is still keeping the dream of the command prompt alive

Preparing to install the floppy disk edition of FreeDOS 1.3 in a virtual machine.

Andrew Cunningham

Two big things happened in the world of text-based disk operating systems in June 1994.

The first was that Microsoft released MS-DOS version 6.22, the last version of its long-running operating system that would be sold to consumers as a standalone product. MS-DOS would continue to evolve for a few years after this, but only as an increasingly invisible loading mechanism for Windows.

The second was that a developer named Jim Hall wrote a post announcing something called “PD-DOS.” Unhappy with Windows 3.x and unexcited by the project we would come to know as Windows 95, Hall wanted to break ground on a new “public domain” version of DOS that could keep the traditional command-line interface alive as most of the world left it behind for more user-friendly but resource-intensive graphical user interfaces.

PD-DOS would soon be renamed FreeDOS, and 30 years and many contributions later, it stands as the last MS-DOS-compatible operating system still under active development.

While it’s not really usable as a standalone modern operating system in the Internet age—among other things, DOS is not really innately aware of “the Internet” as a concept—FreeDOS still has an important place in today’s computing firmament. It’s there for people who need to run legacy applications on modern systems, whether it’s running inside of a virtual machine or directly on the hardware; it’s also the best way to get an actively maintained DOS offshoot running on legacy hardware going as far back as the original IBM PC and its Intel 8088 CPU.

To mark FreeDOS’ 20th anniversary in 2014, we talked with Hall and other FreeDOS maintainers about its continued relevance, the legacy of DOS, and the developers’ since-abandoned plans to add ambitious modern features like multitasking and built-in networking support (we also tried, earnestly but with mixed success, to do a modern day’s work using only FreeDOS). The world of MS-DOS-compatible operating systems moves slowly enough that most of this information is still relevant; FreeDOS was at version 1.1 back in 2014, and it’s on version 1.3 now.

For the 30th anniversary, we’ve checked in with Hall again about how the last decade or so has treated the FreeDOS project, why it’s still important, and how it continues to draw new users into the fold. We also talked, strange as it might seem, about what the future might hold for this inherently backward-looking operating system.

FreeDOS is still kicking, even as hardware evolves beyond it

Running AsEasyAs, a Lotus 1-2-3-compatible spreadsheet program, in FreeDOS.

Jim Hall

If the last decade hasn’t ushered in The Year of FreeDOS On The Desktop, Hall says that interest in and usage of the operating system has stayed fairly level since 2014. The difference is that, as time has gone on, more users are encountering FreeDOS as their first DOS-compatible operating system, not as an updated take on Microsoft and IBM’s dusty old ’80s- and ’90s-era software.

“Compared to about 10 years ago, I’d say the interest level in FreeDOS is about the same,” Hall told Ars in an email interview. “Our developer community has remained about the same over that time, I think. And judging by the emails that people send me to ask questions, or the new folks I see asking questions on our freedos-user or freedos-devel email lists, or the people talking about FreeDOS on the Facebook group and other forums, I’d say there are still about the same number of people who are participating in the FreeDOS community in some way.”

“I get a lot of questions around September and October from people who ask, basically, ‘I installed FreeDOS, but I don’t know how to use it. What do I do?’ And I think these people learned about FreeDOS in a university computer science course and wanted to learn more about it—or maybe they are already working somewhere and they read an article about it, never heard of this ‘DOS’ thing before, and wanted to try it out. Either way, I think more folks in the user community are learning about ‘DOS’ at the same time they are learning about FreeDOS.”


The world’s toughest race starts Saturday, and it’s delightfully hard to call this year

Is it Saturday yet?

Setting the stage for what could be a wild ride across France.

The peloton passing through a field of sunflowers during stage eight of the 110th Tour de France in 2023.

David Ramos/Getty Images

Most readers probably did not anticipate seeing a Tour de France preview on Ars Technica, but here we are. Cycling is a huge passion of mine and several other staffers, and this year, a ton of intrigue surrounds the race, which has a fantastic route. So we’re here to spread Tour fever.

The three-week race starts Saturday, paradoxically in the Italian region of Tuscany. Usually, there is a dominant rider, or at most two, and a clear sense of who is likely to win the demanding race. But this year, due to rider schedules, a terrible crash in early April, and new contenders, there is more uncertainty than usual. A solid case could be made for at least four riders to win this year’s Tour de France.

For people who aren’t fans of pro road cycling—which has to be at least 99 percent of the United States—there’s a great series on Netflix called Unchained to help get you up to speed. The second season, just released, covers last year’s Tour de France and introduces you to most of the protagonists in the forthcoming edition. If this article sparks your interest, I recommend checking it out.

Anyway, for those who are cycling curious, I want to set the stage for this year’s race by saying a little bit about the four main contenders, from most likely to least likely to win, and provide some of the backstory to what could very well be a dramatic race this year.

Tadej Pogačar

Tadej Pogačar of Slovenia and UAE Team Emirates won the Giro d’Italia in May.

Tim de Waele/Getty Images

  • Slovenia
  • 25 years old
  • UAE Team Emirates
  • Odds: -190

Pogačar burst onto the scene in 2019 at the very young age of 20 by finishing third in the Vuelta a España, one of the three grand tours of cycling. He then went on to win the 2020 and 2021 Tours de France, first by surprising fellow countryman Primož Roglič (more on him below) in 2020 and then utterly dominating in 2021. Given his youth, it seemed he would be the premier grand tour competitor for the next decade.

But then another slightly older rider, a teammate of Roglič’s named Jonas Vingegaard, emerged in 2022 and won the next two races. Last year, in fact, Vingegaard cracked Pogačar by 7 minutes and 29 seconds in the Tour, a huge winning margin, especially for two riders of relatively close talent. Having proven himself a better climber and time trialist than Pogačar, especially in the highest and hardest stages, Vingegaard established himself as the alpha male of grand tour cyclists.

So this year, Pogačar decided to change up his strategy. Instead of focusing on the Tour de France, Pogačar participated in the first grand tour of the season, the Giro d’Italia, held in May. He likely did so for a couple of reasons. First, he almost certainly received a generous appearance fee from the Italian organizers. Second, riding the Giro would give him a ready excuse if he failed to beat Vingegaard in France.

Why is this? Because there are just five weeks between the end of the Giro and the start of the Tour. So if a rider peaks for the Giro and exerts himself in winning the race, it is generally thought that he can’t arrive at the Tour in winning form. He will be a few percent off, not having ideal preparation.

Predictably, Pogačar smashed the lesser competition at the Giro and won the race by 9 minutes and 56 seconds. Because he was so far ahead, he was able to take the final week of the race a bit easier. The general thinking in the cycling community is that Pogačar is arriving at the Tour in excellent but not peak form. But given everything else that has happened so far this season, the bettors believe that will be enough for him to win. Maybe.
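For readers unused to American betting odds, a line like Pogačar’s -190 converts to an implied win probability as follows (ignoring the bookmaker’s margin):

```python
# Converting American odds to an implied win probability. Negative odds
# give the stake required to win $100, so -190 implies 190 / (190 + 100).
def implied_probability(odds: int) -> float:
    if odds < 0:
        return -odds / (-odds + 100)
    return 100 / (odds + 100)

print(f"{implied_probability(-190):.1%}")  # 65.5%: a favorite, not a lock
```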


T-Mobile users enraged as “Un-carrier” breaks promise to never raise prices

Illustration of T-Mobile customers protesting price hikes

Aurich Lawson

In 2017, Kathleen Odean thought she had found the last cell phone plan she would ever need. T-Mobile was offering a mobile service for people age 55 and over, with an “Un-contract” guarantee that it would never raise prices.

“I thought, wow, I can live out my days with this fixed plan,” Odean, a Rhode Island resident who is now 70 years old, told Ars last week. Odean and her husband switched from Verizon to get the T-Mobile deal, which cost $60 a month for two lines.

Despite its Un-contract promise, T-Mobile in May 2024 announced a price hike for customers like Odean who thought they had a lifetime price guarantee on plans such as T-Mobile One, Magenta, and Simple Choice. The $5-per-line price hike will raise her and her husband’s monthly bill from $60 to $70, Odean said.

As we’ve reported, T-Mobile’s January 2017 announcement of its “Un-contract” for T-Mobile One plans said that “T-Mobile One customers keep their price until THEY decide to change it. T-Mobile will never change the price you pay for your T-Mobile One plan. When you sign up for T-Mobile One, only YOU have the power to change the price you pay.”

T-Mobile contradicted that clear promise on a separate FAQ page, which said the only real guarantee was that T-Mobile would pay your final month’s bill if the company raised the price and you decided to cancel. Customers like Odean bitterly point to the press release that made the price guarantee without including the major caveat that essentially nullifies the promise.

“I gotta tell you, it really annoys me”

T-Mobile’s 2017 press release even blasted other carriers for allegedly being dishonest, saying that “customers are subjected to a steady barrage of ads for wireless deals—only to face bill shock and wonder what the hell happened when their Verizon or AT&T bill arrives.”

T-Mobile made the promise under the brash leadership of CEO John Legere, who called the company the “Un-carrier” and frequently insulted its larger rivals while pledging that T-Mobile would treat customers more fairly. Legere left T-Mobile in 2020 after the company completed a merger with Sprint in a deal that made T-Mobile one of three major nationwide carriers alongside AT&T and Verizon.

Then-CEO of T-Mobile John Legere at the company’s Un-Carrier X event in Los Angeles on Tuesday, Nov. 10, 2015.

Getty Images | Bloomberg

After being notified of the price hike, Odean filed complaints with the Federal Communications Commission and the Rhode Island attorney general’s office. “I can afford it, but I gotta tell you, it really annoys me because the promise was so absolutely clear… It’s right there in writing: ‘T-Mobile will never change the price you pay for your T-Mobile One plan.’ It couldn’t be more clear,” she said.

Now, T-Mobile is “acting like, oh, well, we gave ourselves a way out,” Odean said. But the caveat that lets T-Mobile raise prices whenever it wants, “as far as I can tell, was never mentioned to the customers… I don’t care what they say in the FAQ,” she said.


Taking a closer look at AI’s supposed energy apocalypse

Someone just asked what it would look like if their girlfriend was a Smurf. Better add another rack of servers!

Getty Images

Late last week, both Bloomberg and The Washington Post published stories focused on the ostensibly disastrous impact artificial intelligence is having on the power grid and on efforts to collectively reduce our use of fossil fuels. The high-profile pieces lean heavily on recent projections from Goldman Sachs and the International Energy Agency (IEA) to cast AI’s “insatiable” demand for energy as an almost apocalyptic threat to our power infrastructure. The Post piece even cites anonymous “some [people]” in reporting that “some worry whether there will be enough electricity to meet [the power demands] from any source.”

Digging into the best available numbers and projections, though, it’s hard to see AI’s current and near-future environmental impact in such a dire light. While generative AI models and tools can and will use a significant amount of energy, we shouldn’t conflate AI energy usage with the larger and largely pre-existing energy usage of “data centers” as a whole. And just like any technology, whether that AI energy use is worthwhile depends largely on your wider opinion of the value of generative AI in the first place.

Not all data centers

While the headline focus of both Bloomberg and The Washington Post’s recent pieces is on artificial intelligence, the actual numbers and projections cited in both pieces overwhelmingly focus on the energy used by Internet “data centers” as a whole. Long before generative AI became the current Silicon Valley buzzword, those data centers were already growing immensely in size and energy usage, powering everything from Amazon Web Services servers to online gaming services, Zoom video calls, and cloud storage and retrieval for billions of documents and photos, to name just a few of the more common uses.

The Post story acknowledges that these “nondescript warehouses packed with racks of servers that power the modern Internet have been around for decades.” But in the very next sentence, the Post asserts that, today, data center energy use “is soaring because of AI.” Bloomberg asks one source directly “why data centers were suddenly sucking up so much power” and gets back a blunt answer: “It’s AI… It’s 10 to 15 times the amount of electricity.”

The massive growth in data center power usage mostly predates the current mania for generative AI (red 2022 line added by Ars).

Unfortunately for Bloomberg, that quote is followed almost immediately by a chart that heavily undercuts the AI alarmism. That chart shows worldwide data center energy usage growing at a remarkably steady pace from about 100 TWh in 2012 to around 350 TWh in 2024. The vast majority of that energy usage growth came before 2022, when the launch of tools like Dall-E and ChatGPT largely set off the industry’s current mania for generative AI. If you squint at Bloomberg’s graph, you can almost see the growth in energy usage slowing down a bit since that momentous year for generative AI.

Determining precisely how much of that data center energy use is taken up specifically by generative AI is a difficult task, but Dutch researcher Alex de Vries found a clever way to get an estimate. In his study “The growing energy footprint of artificial intelligence,” de Vries starts with estimates that Nvidia’s specialized chips are responsible for about 95 percent of the market for generative AI calculations. He then uses Nvidia’s projected production of 1.5 million AI servers in 2027—and the projected power usage for those servers—to estimate that the AI sector as a whole could use anywhere from 85 to 134 TWh of electricity per year within just a few years.

To be sure, that is an immense amount of energy, representing about 0.5 percent of projected electricity demand for the entire world (and an even greater share of the local energy mix in some common data center locations). But measured against other common worldwide uses of electricity, it’s not a mind-boggling outlier. A 2018 study estimated that PC gaming as a whole accounted for 75 TWh of electricity use per year, to pick just one common human activity on the same general energy scale (and that’s without console or mobile gamers included).

Worldwide projections for AI energy use in 2027 are on the same scale as the energy used by PC gamers.

More to the point, de Vries’ AI energy estimates are only a small fraction of the 620 to 1,050 TWh that data centers as a whole are projected to use by 2026, according to the IEA’s recent report. The vast majority of all that data center power will still be going to more mundane Internet infrastructure that we all take for granted (and which is not nearly as sexy of a headline bogeyman as “AI”).
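A back-of-the-envelope sketch reproduces the scale of these figures. The per-server draw of 6.5 to 10.2 kW is my assumption, roughly an A100-class to H100-class AI server running flat out, chosen because it lands on de Vries’ published range; every other number is quoted above:

```python
# The 6.5-10.2 kW per-server draw is an assumption (roughly an A100-class
# to H100-class AI server at full load), picked to match de Vries'
# published 85-134 TWh range; the remaining figures are quoted in the text.
HOURS_PER_YEAR = 24 * 365
AI_SERVERS_2027 = 1.5e6                    # Nvidia's projected production

for kw_per_server in (6.5, 10.2):
    twh = AI_SERVERS_2027 * kw_per_server * HOURS_PER_YEAR / 1e9
    print(f"{kw_per_server:>4} kW/server -> {twh:.0f} TWh/year")
# 6.5 kW -> 85 TWh; 10.2 kW -> 134 TWh

PC_GAMING_TWH = 75                         # 2018 estimate cited above
IEA_DATA_CENTERS_2026 = (620, 1050)        # all data centers, projected TWh
print(f"vs PC gaming: {PC_GAMING_TWH} TWh/year, on the same scale")
print(f"AI share of data center total: {85 / 1050:.0%} to {134 / 620:.0%}")
```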


Decades later, John Romero looks back at the birth of the first-person shooter

Daikatana didn’t come up

Id Software co-founder talks to Ars about everything from Catacomb 3-D to “boomer shooters.”


John Romero remembers the moment he realized what the future of gaming would look like.

In late 1991, Romero and his colleagues at id Software had just released Catacomb 3-D, a crude-looking, EGA-colored first-person shooter that was nonetheless revolutionary compared to other first-person games of the time. “When we started making our 3D games, the only 3D games out there were nothing like ours,” Romero told Ars in a recent interview. “They were lockstep, going through a maze, do a 90-degree turn, that kind of thing.”

Despite Catacomb 3-D‘s technological advances in first-person perspective, though, Romero remembers the team at id followed its release by going to work on the next entry in the long-running Commander Keen series of 2D platform games. But as that process moved forward, Romero told Ars that something didn’t feel right.

Catacomb 3-D is less widely remembered than its successor, Wolfenstein 3D.

“Within two weeks, [I was up] at one in the morning and I’m just like, ‘Guys, we need to not make this game [Keen],’” he said. “‘This is not the future. The future is getting better at what we just did with Catacomb.’ … And everyone immediately was like, ‘Yeah, you know, you’re right. That is the new thing, and we haven’t seen it, and we can do it, so why aren’t we doing it?’”

The team started working on Wolfenstein 3D that very night, Romero said. And the rest is history.

Going for speed

What set Catacomb 3-D and its successors apart from other first-person gaming experiments of the time, Romero said, “was our speed—the speed of the game was critical to us having that massive differentiation. Everyone else was trying to do a world that was proper 3D—six degrees of freedom or representation that was really detailed. And for us, the way that we were going to go was a simple rendering at a high speed with good gameplay. Those were our pillars, and we stuck with them, and that’s what really differentiated them from everyone else.”

That focus on speed extended to id’s development process, which Romero said was unrecognizable compared to even low-budget indie games of today. The team didn’t bother writing out design documents laying out crucial ideas beforehand, for instance, because Romero said “the design doc was next to us; it was the creative director… The games weren’t that big back then, so it was easy for us to say, ‘this is what we’re making’ and ‘things are going to be like this.’ And then we all just work on our own thing.”

John Carmack (left) and John Romero (second from right) pose with their id Software colleagues in the early ’90s.

The early id designers didn’t even use basic development tools like version control systems, Romero said. Instead, development was highly compartmentalized between different developers; “the files that I’m going to work on, he doesn’t touch, and I don’t touch his files,” Romero remembered of programming games alongside John Carmack. “I only put the files on my transfer floppy disk that he needs, and it’s OK for him to copy everything off of there and overwrite what he has because it’s only my files, and vice versa. If for some reason the hard drive crashed, we could rebuild the source from anyone’s copies of what they’ve got.”


Internet Archive forced to remove 500,000 books after publishers’ court win


As a result of book publishers successfully suing the Internet Archive (IA) last year, the free online library that strives to keep growing online access to books recently shrank by about 500,000 titles.

IA reported in a blog post this month that publishers abruptly forcing these takedowns triggered a “devastating loss” for readers who depend on IA to access books that are otherwise impossible or difficult to access.

To restore access, IA is now appealing, hoping to reverse the prior court’s decision by convincing the US Court of Appeals for the Second Circuit that IA’s controlled digital lending of its physical books should be considered fair use under copyright law. An April court filing shows that IA intends to argue that the publishers have no evidence that the e-book market has been harmed by the open library’s lending, and that copyright law is better served by allowing IA’s lending than by preventing it.

“We use industry-standard technology to prevent our books from being downloaded and redistributed—the same technology used by corporate publishers,” Chris Freeland, IA’s director of library services, wrote in the blog. “But the publishers suing our library say we shouldn’t be allowed to lend the books we own. They have forced us to remove more than half a million books from our library, and that’s why we are appealing.”

IA will have an opportunity to defend its practices when oral arguments start in its appeal on June 28.

“Our position is straightforward; we just want to let our library patrons borrow and read the books we own, like any other library,” Freeland wrote, while arguing that the “potential repercussions of this lawsuit extend far beyond the Internet Archive” and publishers should just “let readers read.”

“This is a fight for the preservation of all libraries and the fundamental right to access information, a cornerstone of any democratic society,” Freeland wrote. “We believe in the right of authors to benefit from their work; and we believe that libraries must be permitted to fulfill their mission of providing access to knowledge, regardless of whether it takes physical or digital form. Doing so upholds the principle that knowledge should be equally and equitably accessible to everyone, regardless of where they live or where they learn.”

Internet Archive fans beg publishers to end takedowns

After publishers won an injunction stopping IA’s digital lending, which “limits what we can do with our digitized books,” IA’s help page said, the open library started shrinking. While “removed books are still available to patrons with print disabilities,” everyone else has been cut off, causing many books in IA’s collection to show up as “Borrow Unavailable.”

Ever since, IA has been “inundated” with inquiries from readers all over the world searching for the removed books, Freeland said. And “we get tagged in social media every day where people are like, ‘Why are there so many books gone from our library?’” Freeland told Ars.

In an open letter to publishers signed by nearly 19,000 supporters, IA fans begged publishers to reconsider forcing takedowns and quickly restore access to the lost books.

Among the “far-reaching implications” of the takedowns, IA fans counted the negative educational impact on academics, students, and educators—“particularly in underserved communities where access is limited”—who were suddenly cut off from “research materials and literature that support their learning and academic growth.”

They also argued that the takedowns dealt “a serious blow to lower-income families, people with disabilities, rural communities, and LGBTQ+ people, among many others,” who may not have access to a local library or feel “safe accessing the information they need in public.”

“Your removal of these books impedes academic progress and innovation, as well as imperiling the preservation of our cultural and historical knowledge,” the letter said.

“This isn’t happening in the abstract,” Freeland told Ars. “This is real. People no longer have access to a half a million books.”


From Infocom to 80 Days: An oral history of text games and interactive fiction

Zork running on an Amiga at the Computerspielemuseum in Berlin, Germany.

You are standing at the end of a road before a small brick building.

That simple sentence first appeared on a PDP-10 mainframe in the 1970s, and the words marked the beginning of what we now know as interactive fiction.

From the bare-bones text adventures of the 1980s to the heartfelt hypertext works of Twine creators, interactive fiction is an art form that continues to inspire a loyal audience. The community for interactive fiction, or IF, attracts readers and players alongside developers and creators. It champions an open source ethos and a punk-like individuality.

But whatever its production value or artistic merit, at heart, interactive fiction is simply words on a screen. In this time of AAA video games, prestige television, and contemporary novels and poetry, how does interactive fiction continue to endure?

To understand the history of IF, the best place to turn for insight is the authors themselves. Not just the authors of notable text games—although many of the people I interviewed for this article do have that claim to fame—but the authors of the communities and the tools that have kept the torch burning. Here’s what they had to say about IF and its legacy.

Examine roots: Adventure and Infocom

The interactive fiction story began in the 1970s. The first widely played game in the genre was Colossal Cave Adventure, also known simply as Adventure. The text game was made by Will Crowther in 1976, based on his experiences spelunking in Kentucky’s aptly named Mammoth Cave. Descriptions of the different spaces would appear on the terminal, then players would type in two-word commands—a verb followed by a noun—to solve puzzles and navigate the sprawling in-game caverns.
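That parser model is simple enough to sketch in a few lines of Python. This is a generic illustration of the verb-noun pattern, not Crowther’s actual Fortran code, and the vocabulary is invented:

```python
# A generic sketch of the two-word verb-noun parser style that Adventure
# popularized; vocabulary and responses are invented, not Crowther's.
things = {"building": "You are inside a small brick building.",
          "lamp": "A shiny brass lamp sits nearby."}

def parse(command: str) -> str:
    words = command.lower().split()
    if len(words) != 2:
        return "I understand only two-word commands, like ENTER BUILDING."
    verb, noun = words
    if noun not in things:
        return f"I see no {noun} here."
    if verb in ("look", "examine"):
        return things[noun]
    if verb in ("enter", "take", "go"):
        return f"You {verb} the {noun}."
    return f"I don't know how to {verb} a {noun}."

print(parse("enter building"))  # You enter the building.
print(parse("xyzzy"))           # I understand only two-word commands...
```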

During the 1970s, getting the chance to interact with a computer was a rare and special thing for most people.

“My father’s office had an open house in about 1978,” IF author and tool creator Andrew Plotkin recalled. “We all went in and looked at the computers—computers were very exciting in 1978—and he fired up Adventure on one of the terminals. And I, being eight years old, realized this was the best thing in the universe and immediately wanted to do that forever.”

“It is hard to overstate how potent the effect of this game was,” said Graham Nelson, creator of the Inform language and author of the landmark IF Curses, of his introduction to the field. “Partly that was because the behemoth-like machine controlling the story was itself beyond ordinary human experience.”

Perhaps that extraordinary factor is what sparked the curiosity of people like Plotkin and Nelson to play Adventure and the other text games that followed. The roots of interactive fiction are entangled with the roots of the computing industry. “I think it’s always been a focus on the written word as an engine for what we consider a game,” said software developer and tech entrepreneur Liza Daly. “Originally, that was born out of necessity of primitive computers of the ’70s and ’80s, but people discovered that there was a lot to mine there.”

Home computers were just beginning to gain traction as Stanford University student Don Woods released his own version of Adventure in 1977, based on Crowther’s original Fortran work. Without wider access to comparatively pint-sized machines like the Apple II and the VIC-20, Scott Adams might not have found an audience for his own text adventure games, released under his company Adventure International, in another homage to Crowther. As computers spread to more people around the world, interactive fiction was able to reach more and more readers.


Hello sunshine: We test McLaren’s drop-top hybrid Artura Spider

orange express

The addition of a retractable roof makes this Artura the one to pick.

The introduction of model year 2025 brings a retractable hard-top option for the McLaren Artura, plus a host of other upgrades.

McLaren

MONACO—The idea of an “entry-level” supercar might sound like a contradiction in terms, but every car company’s range has to start somewhere, and in McLaren’s case, that’s the Artura. When Ars first tested this mid-engined plug-in hybrid in 2022, it was only available as a coupe. But for those who prefer things al fresco, the British automaker has now given you that option with the addition of the Artura Spider.

The Artura represented a step forward for McLaren. There’s a brand-new carbon fiber chassis tub, an advanced electronic architecture (with a handful of domain controllers that replace the dozens of individual ECUs you might find in some of its other models), and a highly capable hybrid powertrain that combines a twin-turbo V6 gasoline engine with an axial flux electric motor.

More power, faster shifts

For model year 2025 and the launch of the $273,800 Spider version, the engineering team at McLaren has given the Artura a spruce-up, even though the car is only a couple of years old. Overall power output has increased by 19 hp (14 kW) thanks to new engine maps for the V6, which now has a bit more surge from 4,000 rpm all the way to the 8,500 rpm redline. Our test car was fitted with the new sports exhaust, which isn’t obnoxiously loud. It makes some interesting noises as you lift the throttle in the middle of the rev range, but like most turbo engines, it’s not particularly mellifluous.

  • The new engine map means the upper half of third gear will give you a real shove toward the horizon.

    McLaren

  • The Artura Spider’s buttresses are made from a lightweight and clear polymer, so they do their job aerodynamically without completely obscuring your view over your shoulder.

    McLaren

  • The Artura Spider is covered in vents and exhausts to channel air into and out of various parts of the car.

    McLaren

  • You could have your Artura Spider painted in a more somber color. But orange with carbon fiber looks pretty great to me.

  • If you look closely, you can see the transmission hiding behind the diffuser.

    Jonathan Gitlin

Combined with the 94 hp (70 kW) electric motor, that gives the Artura Spider a healthy 680 hp (507 kW), which helps compensate for the added 134 lbs (62 kg) due to the car’s retractable hard top. There are stiffer engine mounts and new throttle maps, and the dual-clutch transmission shifts 25 percent faster than what we saw in the car that launched two years ago. (These upgrades are carried over to the Artura coupe as well, and the good news for existing owners is that the engine remapping can be applied to their cars, too, with a visit to a McLaren dealer.)

Despite the hybrid system—which uses a 7.4 kWh traction battery—and the roof mechanism, the Artura Spider remains a remarkably light car by 2024 standards, with a curb weight of 3,439 lbs (1,559 kg) that makes it lighter than any comparable car on the market.

In fact, picking a comparable car is a little tricky. Ferrari will sell you a convertible hybrid in the shape of the 296 GTS, but you’ll need another $100,000 or more to get behind the wheel of one of those, which in truth is more of a competitor for the (not-hybrid) 750S, McLaren’s middle model. Any other mid-engined drop-top will be propelled by dino juice alone.

What modes do you want today?

It’s easy to drive around town and a lot of fun to drive on a twisty road.

McLaren

You can drive it using just the electric motor for up to 11 miles if you keep the powertrain in E-mode and start with a fully charged battery. In fact, when you start the car, it begins in this mode by default. Outside of E-mode, the Artura will use spare power from the engine to top up the battery as you drive, and it’s very easy to set a target state of charge if, for example, you want to save some battery power for later. Plugged into a Level 2 charger, the car should take about 2.5 hours to reach 80 percent.
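Those charging figures hang together arithmetically. The average rate below is derived from the quoted numbers and assumes a charge from empty; it is not a McLaren spec:

```python
# Back-of-the-envelope check on the quoted charging figures. The average
# rate is derived here (assuming a charge from empty), not a McLaren spec.
battery_kwh = 7.4
energy_needed = 0.8 * battery_kwh   # kWh to reach 80 percent
hours = 2.5
print(f"{energy_needed / hours:.1f} kW average")  # ~2.4 kW: easy for Level 2
```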

The car is light enough that 94 hp is more than adequate for the 20 mph or 30 km/h zones you’re sure to encounter whether you’re driving this supercar through a rural village or past camera-wielding car-spotters in the city. Electric mode is strictly that: the car won’t fire up the engine until you switch to Comfort (or Sport, or Track) with the control on the right side of the main instrument display.

On the left side is another control to switch the chassis settings between Comfort, Sport, and Track. For road driving, Comfort never felt wrong-footed, and I really would leave Track for the actual track. The same goes for the Track powertrain setting; for the open road, Sport is the best-sounding, and Comfort is well-judged for everyday use and will kill the V6 when it’s not needed. Sport and Track instead use the electric motor—mounted inside the case of the eight-speed transmission—to fill in torque where needed, similar to an F1 or LMDh race car.
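Conceptually, torque fill means the motor covers the shortfall between what the driver asks for and what the engine can deliver at that instant. A simplified sketch, in which the 225 Nm motor ceiling and the control logic are illustrative assumptions rather than McLaren’s calibration:

```python
# Conceptual sketch of torque fill: the motor covers the gap between the
# driver's request and what the engine can deliver at that instant (turbo
# lag, a gearshift). The 225 Nm motor ceiling and this control logic are
# illustrative assumptions, not McLaren's calibration.
def torque_split(requested_nm: float, engine_available_nm: float,
                 motor_max_nm: float = 225.0) -> tuple[float, float]:
    engine = min(requested_nm, engine_available_nm)
    motor = min(max(requested_nm - engine, 0.0), motor_max_nm)
    return engine, motor

print(torque_split(500.0, 380.0))  # (380.0, 120.0): the motor fills the hole
```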


Mod Easy: A retro e-bike with a sidecar perfect for Indiana Jones cosplay

Pure fun

It’s not the most practical option for passengers, but my son had a blast.

The Mod Easy Sidecar

As some Ars readers may recall, I reviewed the Maven Cargo e-bike earlier this year as a complete newb to e-bikes. For my second foray into the world of e-bikes, I took an entirely different path.

The stylish Maven was designed with utility in mind—it’s safe, user-friendly, and practical for accomplishing all the daily transportation needs of a busy family. The second bike, the $4,299 Mod Easy Sidecar 3, is on the other end of the spectrum. Just a cursory glance makes it clear: This bike is built for pure, head-turning fun.

The Mod Easy 3 is a retro-style Class 2 bike—complete with a sidecar that looks like it’s straight out of Indiana Jones and the Last Crusade. Nailing this look wasn’t the initial goal of Mod Bike founder Dor Korngold. In an interview with Ars, Korngold said the Mod Easy was the first bike he designed for himself. “It started with me wanting to have this classic cruiser,” he said, but he didn’t have a sketch or final design in mind at the outset. Instead, the design was based on what parts he had in his garage.

The first step was adding a wooden battery compartment to an old Electra frame he had painted. The battery compartment “looked vintage from the beginning,” he said, but the final look came together gradually as he added the sidecar and some of the other motorcycle-style features. Today, the Mod Easy is a sleek bike reminiscent of World War II-era motorcycles and comes in a chic matte finish.

An early version of the Mod Easy bike.

Dor Korngold

When I showed my 5-year-old son a picture of the bike and sidecar, he was instantly enamored and insisted I review it. How could I refuse? He thoroughly enjoyed riding with me on the Maven, but riding in the sidecar turned out to be some next-level fun. He will readily tell you he gives it a five-out-of-five-star rating. But in case you want a more thorough review, my thoughts are below. I’ll start with some general impressions and then discuss specific features of the bike and experience.

The Mod Easy Sidecar 3 at a glance

General impressions

  • The Mod Easy Sidecar 3.

  • Just the bike, which is sold on its own for $3,299.

    Beth Mole

  • The Mod Easy Sidecar 3.

    Beth Mole

Again, this is a stylish, fun bike. The bike alone is an effortless and smooth ride. Although it has the heft of an e-bike at 77 pounds (without the sidecar), it never felt unwieldy to me as a 5-foot-4-inch rider. The torque sensors are beautifully integrated into the riding experience, allowing the motor to feel like a gentle, natural assist to pedaling rather than an on-off boost. Of course, with my limited experience, I can’t comment on how these torque sensors compare to other torque sensors, but I have no complaints, and they’re an improvement over my experience with cadence sensors.

You may remember from my review of the Maven that the entrance to a bike path in my area has a switchback path with three tight turns on a hill. With the Maven’s cadence sensors, I struggled to go through the U-turns smoothly, especially going uphill, even after weeks of practice. With the Mod Easy’s torque sensors (and non-cargo length), I glided through them perfectly on the first try. Overall, the bike handles and corners nicely. The wide-set handlebars give the driving experience a relaxed, cruising feel, while the cushy saddle invites you to sink in and stay awhile. The sidecar, meanwhile, was a fun, head-turning feature, but it comes with some practical trade-offs to consider.

Below, I’ll go through key features, starting with the headlining one: the sidecar.


May contain nuts: Precautionary allergen labels lead to consumer confusion

can i eat this or not?

Some labels suggest allergen cross-contamination that might not exist.


TopMicrobialStock, Getty Images

When Ina Chung, a Colorado mother, first fed packaged foods to her infant, she was careful to read the labels. Her daughter was allergic to peanuts, dairy, and eggs, so products containing those ingredients were out. So were foods with labels that said they may contain the allergens.

Chung felt like this last category suggested a clear risk that wasn’t worth taking. “I had heard that the ingredient labels were regulated. And so I thought that that included those statements,” said Chung. “Which was not true.”

Precautionary allergen labels like those that say “processed in a facility that uses milk” or “may contain fish” are meant to address the potential for cross-contact. For instance, a granola bar that doesn’t list peanuts as an ingredient could still say they may be included. And in the United States, these warnings are not regulated; companies can use whatever precautionary phrasing they choose on any product. Some don’t bother with any labels, even in facilities where unintended allergens slip in; others list allergens that may pose little risk. Robert Earl, vice president of regulatory affairs at Food Allergy Research & Education, or FARE, a nonprofit advocacy, research, and education group, has even seen such labels that include all nine common food allergens. “I would bet my bottom dollar not all of those allergens are even in the facility,” he said.

So what are the roughly 20 million people with food allergies in the US supposed to do with these warnings? Should they eat the granola bar or not?

Recognizing this uncertainty, food safety experts, allergy advocates, policymakers, and food producers are discussing how to demystify precautionary allergen labels. One widely considered solution is to restrict warnings to cases where visual or analytical tests demonstrate that there is enough allergen to actually trigger a reaction. Experts say the costs to the food industry are minimal, and some food producers across the globe, including in Canada, Australia, Thailand, and the United States, already voluntarily take this approach. But in the US, where there are no clear guidelines to follow, consumers are still left wondering what each individual precautionary allergen label even means.

Pull a packaged food off an American store shelf and the ingredients label should say if the product intentionally contains one of nine recognized allergens. That’s because in 2004, Congress granted the Food and Drug Administration the power to regulate labeling of eight major food allergens—eggs, fish, milk, crustaceans, peanuts, tree nuts, soybeans, and wheat. In 2021, sesame was added to the list.

But the language often gets murkier further down the label, where companies may include precautionary allergen labels, also called advisory statements, to address the fact that allergens can unintentionally wind up in foods at many stages of production. Perhaps wheat grows near a field of rye destined for bread, for instance, or peanuts get lodged in processing equipment that later pumps out chocolate chip cookies. Candy manufacturers, in particular, struggle to keep milk out of dark chocolate.

The FDA offers no labeling guidance beyond declaring that “advisory statements should not be used as a substitute for adhering to current good manufacturing practices and must be truthful and not misleading.”

Companies can choose when to use these warnings, which vary widely. For example, a 2017 survey of 78 dark chocolate products, conducted by the FDA and the Illinois Institute of Technology, found that almost two-thirds contained an advisory statement for peanuts; of those, only about four actually contained the allergen. Meanwhile, of 18 bars that carried no advisory statement for peanuts specifically, three contained the allergen. (One product that was positive for peanuts did warn more generally of nuts, but the researchers noted that this term is ambiguous.) Another product that tested positive included a nut warning on one lot but not on another. Individual companies also select their own precautionary label phrasing.

For consumers, the inconsistency can be confusing, said Ruchi Gupta, a pediatrician and director of the Center for Food Allergy & Asthma Research at Northwestern University’s Feinberg School of Medicine in Chicago. In 2019, Gupta and colleagues surveyed around 3,000 US adults who have food allergies or care for someone who does, asking how different precautionary allergen label phrases affect their decision to buy a particular food. About 80 percent never purchase products with a “may contain” warning. Less than half avoid products with labels suggesting the food was manufactured in a facility that also processes an allergen, even though numerous studies show that the wording of a precautionary allergen label has no bearing on risk level. “People are making their own decisions on what sounds safe,” said Gupta.

When Chung learned that advisory labels were unregulated, she experimented with ignoring them when her then-toddler really wanted a particular food. When her daughter developed a couple of hives after eating a cereal labeled may contain peanuts, Chung went back to heeding warnings of peanut cross-contact but continued ignoring the rest.

“A lot of families just make up their own rules,” she said. “There’s no way to really know exactly what you’re getting.”
