Author name: Mike M.


Fair or fixed? Why Le Mans is all about “balance of performance” now.


Last year’s data plus plenty of simulation are meant to create a level playing field.

Dozens and dozens of racing cars lined up on the start line at Le Mans

LE MANS, FRANCE – JUNE 10: The #35 Alpine Endurance Team Alpine A424 of Paul-Loup Chatin, Ferdinand Habsburg-Lothringen, and Charles Milesi sits among the 2025 Le Mans entry for a group picture on the main straight at the Circuit de la Sarthe on June 10, 2025 in Le Mans, France. Credit: Ker Robertson/Getty Images


This coming weekend will see the annual 24 Hours of Le Mans take place in France. In total, 62 cars will compete, split into three different classes. At the front of the field are the Hypercars—wickedly fast prototypes that are all hybrids, with the exception of the V12 Aston Martin Valkyries. In the middle are the pro-am LMP2s, followed by 24 GT3 cars—modified versions of performance cars that include everything from the Ford Mustang to the McLaren 720S. It is racing nirvana. But with so many different makes and models of cars in the Hypercar class, some two-wheel drive, others all-wheel drive, how do the organizers ensure it’s a fair race?

Get ready for some acronyms

Sports car racing can be (needlessly) complicated at times. Take the Hypercar class at Le Mans. The 21 cars that will contest it are actually built to two separate rulebooks.

One, called LMH (for Le Mans Hypercar), was written by the organizers of Le Mans and the World Endurance Championship. These prototypes can be hybrids, with the electric motor on the front axle: Ferrari, Peugeot, and Toyota have all taken this route. But they don’t have to be; the Aston Martin Valkyrie already had to lose a lot of power to meet the rules, so it just relies on its big V12 to do all the work. Most of the cars are purpose-built for the race, but Aston Martin went the other route and converted a road car for racing.

The other is called LMDh (Le Mans Daytona hybrid) and hails from the US, in the rulebook written for the International Motor Sports Association’s GTP category. As the name suggests, these cars must be hybrids, and all must use the same specified motor, battery, and gearbox. LMDh cars also all need to start off using one of four approved carbon-fiber chassis (or spines), onto which automakers can style their own bodies and add their own engines. Alpine, BMW, Cadillac, and Porsche all have LMDh cars entered in this year’s Le Mans.

Convergence

In a parallel universe, the result would be two competing series, neither with many cars on the grid. But the people at IMSA get on pretty well with the organizers of Le Mans (the Automobile Club de l’Ouest or ACO) and the World Endurance Championship (the Fédération Internationale de l’Automobile, or FIA), and they decided to create a way to allow everyone to play together in the same sandbox.

“2021 [was] the first year with LMH, and at that time, the only big manufacturer involved was Toyota; Glickenhaus was there at the time, but there were not many manufacturers, let’s say, interested in that kind of category,” said Thierry Bouvet, competition director at the ACO.

“So together with IMSA, while the world was [isolating] during the pandemic, we basically wrote a set of technical regulations, LMDh which was, on paper, a little bit of a different car [with] more focus on avoiding cost escalation. After a couple of years of writing those regulations, we had an interesting process of convergence, we call it, to be able to have the LMH and LMDh racing together,” he said.

It’s not the first time that different cars have competed against each other at Le Mans. Before Hypercar, the top category was called LMP1h (Le Mans Prototype 1 hybrids), which burned brightly for a few short years but collapsed under the weight of F1-level budgets that proved too much for both Audi and Porsche, leaving just Toyota and some privateers. LMP1h used a complicated “Equivalence of Technology,” but now the approach, first perfected with the slower GT3 cars, is called Balance of Performance, or BoP.

LE MANS, FRANCE - JUNE 10: The Penske Porsche, Ferrari AF-Corse, Toyota Gazoo Racing and Jota Cadillac sit on the front row as the 2025 Le Mans entry sits for a group picture on the main straight at the Circuit de la Sarthe on June 10, 2025 in Le Mans, France.

The race starts at 10 am ET on Saturday, June 14. Credit: Ker Robertson/Getty Images

Obviously, none of the automakers behind the LMDh teams would have entered the race if they thought only LMH cars had a chance of winning overall.

“So it went through a couple of long and very interesting—in terms of technique, technically speaking—simulation working groups, where we involved all the manufacturers from both categories, and we believe we achieved… a nice working point in the middle, which allows both cars to be competitive, through the different restrictions, through BoP and so on. Now we feel that we’ve got a really fair and equitable working point,” Bouvet said. As evidence, he pointed to the fact that last year Toyota took the World Endurance Championship for constructors, Porsche’s drivers claimed the WEC drivers’ title, and Ferrari won Le Mans.

Imma hit you with the BoP gun

The rules limit both the amount of downforce and the amount of drag that the cars can generate from their bodywork, which have to be in the ratio of 4:1; this prevents any one manufacturer from having a massive advantage in terms of cornering grip or fuel efficiency. From there, the BoP gets more granular, setting maximum weight and power outputs (above and below 250 km/h), the maximum amount of energy allowed to be sent to the wheels between pit stops, as well as any extra time added to pit stops.

Weighing cars is easy, and timing them in pit stops is old hat, too. But the advance here is the torque sensors at each axle that feed back data to the race officials, letting them know exactly how much power each car is deploying to its wheels.

“We had to think of something which will work independently, whether it’s hybrid power or internal combustion engine power. Should we think about fuel only? That will only be concerning, obviously, the internal combustion engine and not do the job for the hybrid system. So, power at the wheel is a nice and elegant solution,” he said.
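That measurement is conceptually straightforward: power at the wheel is just torque multiplied by how fast the wheel is turning. As a rough illustration only (the code and every number in it are invented for this sketch, not taken from the FIA/ACO systems; the 520 kW cap simply borrows the published Valkyrie figure mentioned below), here is how per-axle torque readings and car speed could be turned into a deployed-power figure and checked against a BoP cap:

```python
# Hypothetical illustration only: deriving "power at the wheels" from
# torque-sensor readings, the quantity the BoP actually caps.
# All numbers here are invented, not official BoP data.

WHEEL_DIAMETER_M = 0.72     # assumed tire diameter
POWER_CAP_KW = 520          # example cap (the published Valkyrie figure)

def wheel_power_kw(axle_torques_nm, speed_kmh):
    """Total power at the wheels from per-axle torque (Nm) and car speed (km/h)."""
    speed_ms = speed_kmh / 3.6
    # Wheel angular velocity in rad/s, assuming no tire slip.
    omega = speed_ms / (WHEEL_DIAMETER_M / 2)
    # Power (W) = torque (Nm) x angular velocity (rad/s); report in kW.
    return sum(axle_torques_nm) * omega / 1000

# Example: front (hybrid) and rear (engine) axle torque at 250 km/h.
deployed = wheel_power_kw(axle_torques_nm=[700.0, 1900.0], speed_kmh=250)
print(f"Deployed: {deployed:.0f} kW, cap: {POWER_CAP_KW} kW, "
      f"{'within BoP' if deployed <= POWER_CAP_KW else 'over the limit'}")
```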

LE MANS, FRANCE - JUNE 8: The #007 Aston Martin Thor Team, Aston Martin Valkyrie of Harry Tincknell, Tom Gamble, and Ross Gunn in action during Test Day on June 8, 2025 in Le Mans, France.

The Aston Martin Valkyrie is the only road-going hypercar to be entered into the Hypercar category at Le Mans. Credit: James Moy Photography/Getty Images

For the World Endurance Championship, BoP is calculated on a rolling average of the last three races, with some OEMs getting a little more weight or a little less power if necessary. While the 24 Hours of Le Mans counts as a round of the WEC, it’s open to other entrants as well, and BoP works a bit differently. Instead, Bouvet and his team based this year’s BoP on data from last year’s 24-hour race, plus the simulations he mentioned. This is done to prevent teams from sandbagging in the races that lead up to their most important race of the year.

As the newest and least competitive car, the Valkyrie gets the biggest break, with a minimum weight of just 2,271 lbs (1,030 kg) and a maximum power of 697 hp (520 kW). The Toyota GR010—which won the race in 2021 and 2022—can also deploy 697 hp but at a minimum weight of 2,321 lbs (1,052 kg), more than any other car in the class.

No process is perfect, and there is little that racing fans like to complain about more than BoP, which some feel makes racing too artificial, or even fixed. You’re unlikely to hear complaints about it from competitors at Le Mans, though—criticizing BoP is not allowed in WEC, although both Porsche and Toyota have recently expressed their feelings about BoP within those strictures.

The first qualifying session for this weekend’s race took place earlier today, sorting out the 15 fastest Hypercars that will compete later this week to see who leads the pack to the start line on Saturday.

Photo of Jonathan M. Gitlin

Jonathan is the Automotive Editor at Ars Technica. He has a BSc and PhD in Pharmacology. In 2014 he decided to indulge his lifelong passion for the car by leaving the National Human Genome Research Institute and launching Ars Technica’s automotive coverage. He lives in Washington, DC.

Fair or fixed? Why Le Mans is all about “balance of performance” now. Read More »


A warlord brings chaos in Foundation S3 trailer

Foundation returns for a third season next month on Apple TV+.

Foundation, Apple TV+’s lavish adaptation (or re-mix, if you prefer) of Isaac Asimov’s seminal sci-fi series, returns for its third season next month, and the streaming platform has dropped an official trailer to give us a taste of what’s in store.

As previously reported, the first season ended with a major time jump of 138 years, and S2 focused on the Second Crisis: imminent war between Empire and the Foundation, along with an enemy seeking to destroy Empire from within. The Foundation, meanwhile, adopted the propaganda tactics of religion to recruit new acolytes to the cause. We also met a colony of “Mentalics” with psionic abilities. We’re getting another mega time jump for the Third Crisis.

Per the official premise:

Set 152 years after the events of S2, The Foundation has become increasingly established far beyond its humble beginnings while the Cleonic Dynasty’s Empire has dwindled. As both of these galactic powers forge an uneasy alliance, a threat to the entire galaxy appears in the fearsome form of a warlord known as “The Mule” whose sights are set on ruling the universe by use of physical and military force, as well as mind control. It’s anyone’s guess who will win, who will lose, who will live, and who will die as Hari Seldon, Gaal Dornick, the Cleons and Demerzel play a potentially deadly game of intergalactic chess.

Most of the main cast is returning: Lee Pace as Brother Day, Cassian Bilton as Brother Dawn, Terrence Mann as Brother Dusk, Jared Harris as Hari Seldon, Lou Llobell as Gaal, and Laura Birn as Eto Demerzel. Pilou Asbæk plays the Mule. New S3 cast members include Alexander Siddig as Dr. Ebling Mis, a Seldon fan and self-taught psychohistorian; Troy Kotsur as Preem Palver, leader of a planet of psychics; Cherry Jones as Foundation Ambassador Quent; Brandon P. Bell as Han Pritcher; Synnøve Karlsen as Bayta Mallow; Cody Fern as Toran Mallow; Tómas Lemarquis as Magnifico Giganticus; Yootha Wong-Loi-Sing as Song; and Leo Bill as Mayor Indbur.

A warlord brings chaos in Foundation S3 trailer Read More »


False claims that ivermectin treats cancer, COVID lead states to pass OTC laws

Doctors told the Times that they have already seen some cases where patients with treatable, early-stage cancers have delayed effective treatments to try ivermectin, only to see no effect and return to their doctor’s office with cancers that have advanced.

Risky business

Nevertheless, the malignant misinformation on social media has made its way into state legislatures. According to an investigation by NBC News published Monday, 16 states have proposed or passed legislation that would make ivermectin available over the counter. The intention is to make it much easier for people to get ivermectin and use it for any ailment they believe it can cure.

Idaho, Arkansas, and Tennessee have passed laws to make ivermectin available over the counter. On Monday, Louisiana’s state legislature passed a bill to do the same, and it now awaits signing by the governor. The other states that have considered or are considering such bills include: Alabama, Georgia, Kentucky, Maine, Minnesota, Mississippi, New Hampshire, North Carolina, North Dakota, Pennsylvania, South Carolina, and West Virginia.

State laws don’t mean the dewormer would be readily available, however; ivermectin is still regulated by the Food and Drug Administration, and it has not been approved for over-the-counter use yet. NBC News called 15 independent pharmacies in the three states that have laws on the books allowing ivermectin to be sold over the counter (Idaho, Arkansas, and Tennessee) and couldn’t find a single pharmacist who would sell it without a prescription. Pharmacists pointed to the federal regulations.

Likewise, CVS Health said its pharmacies are not currently selling ivermectin over the counter in any state. Walgreens declined to comment.

Some states, such as Alabama, have considered legislation that would protect pharmacists from any possible disciplinary action for dispensing ivermectin without a prescription. However, one pharmacist in Idaho, who spoke with NBC News, said that such protection would still not be enough. As a prescription-only drug, ivermectin is not packaged for retail sale. If it were, it would include over-the-counter directions and safety statements written specifically for consumers.

“If you dispense something that doesn’t have directions or safety precautions on it, who’s ultimately liable if that causes harm?” the pharmacist said. “I don’t know that I would want to assume that risk.”

It’s a risk people on social media don’t seem to be concerned with.

False claims that ivermectin treats cancer, COVID lead states to pass OTC laws Read More »


FCC threat to revoke EchoStar spectrum licenses draws widespread backlash

Incompas, a communications industry trade group, said that revoking the deadline extension “would undermine regulatory certainty and threaten to disrupt ongoing investments in advanced network infrastructure, including EchoStar’s important work to integrate Open RAN and satellite capabilities.”

EchoStar, SpaceX, and VTel Wireless each submitted one last filing on Friday. SpaceX urged the FCC “to ensure that valuable spectrum resources are not allowed to remain fallow but instead are made available to those who would put them to productive use to provide advanced services to consumers across the United States.”

VTel Wireless, which was outbid by Dish in auctions for spectrum licenses, said that “nothing prevented EchoStar from meeting its final buildout deadlines; it simply made the business decision not to do so, at least until it faced the loss of its licenses. Under the circumstances, the Commission should investigate EchoStar’s conduct in seeking an extension of its final buildout deadlines.”

EchoStar in financial trouble

EchoStar said that a reversal “would unlawfully discriminate against EchoStar by treating it materially differently, and indeed much worse, than similarly situated entities,” and “would be a sharp and discriminatory departure from the thousands of license extensions the Bureau granted in the last two administrations—often without conditions, without public notice, and with a mere stamp grant.”

EchoStar’s financial stability is threatened by the FCC proceeding, as the company last week announced it would skip debt-interest payments that were due on June 2. EchoStar said it made the decision “in light of the uncertainty raised by the Federal Communications Commission review.”

There is a 30-day grace period before a default. “EchoStar has elected not to make the Interest Payments to allow time for the FCC to provide the relief requested in our FCC filing prior to the expiration of the 30-day grace period,” the company said. The Wall Street Journal article on EchoStar’s potential bankruptcy filing said the firm “has skipped about $500 million in debt-interest payments in recent days, starting a countdown that would push the company into default before July if not cured within the bonds’ grace period.”

FCC threat to revoke EchoStar spectrum licenses draws widespread backlash Read More »


Ocean acidification crosses “planetary boundaries”

A critical measure of the ocean’s health suggests that the world’s marine systems are in greater peril than scientists had previously realized and that parts of the ocean have already reached dangerous tipping points.

A study, published Monday in the journal Global Change Biology, found that ocean acidification—the process in which the world’s oceans absorb excess carbon dioxide from the atmosphere, becoming more acidic—crossed a “planetary boundary” five years ago.

“A lot of people think it’s not so bad,” said Nina Bednaršek, one of the study’s authors and a senior researcher at Oregon State University. “But what we’re showing is that all of the changes that were projected, and even more so, are already happening—in all corners of the world, from the most pristine to the little corner you care about. We have not changed just one bay, we have changed the whole ocean on a global level.”

The new study, also authored by researchers at the UK’s Plymouth Marine Laboratory and the National Oceanic and Atmospheric Administration (NOAA), finds that by 2020 the world’s oceans were already very close to the “danger zone” for ocean acidity, and in some regions had already crossed into it.

Scientists had determined that ocean acidification enters this danger zone or crosses this planetary boundary when the amount of calcium carbonate—which allows marine organisms to develop shells—drops by more than 20 percent compared to pre-industrial levels. The new report puts the figure at about 17 percent.

“Ocean acidification isn’t just an environmental crisis, it’s a ticking time bomb for marine ecosystems and coastal economies,” said Steve Widdicombe, director of science at the Plymouth lab, in a press release. “As our seas increase in acidity, we’re witnessing the loss of critical habitats that countless marine species depend on and this, in turn, has major societal and economic implications.”

Scientists have determined that there are nine planetary boundaries that, once breached, risk humans’ abilities to live and thrive. One of these is climate change itself, which scientists have said is already beyond humanity’s “safe operating space” because of the continued emissions of heat-trapping gases. Another is ocean acidification, also caused by burning fossil fuels.

Ocean acidification crosses “planetary boundaries” Read More »


A history of the Internet, part 2: The high-tech gold rush begins


The Web Era arrives, the browser wars flare, and a bubble bursts.

Welcome to the second article in our three-part series on the history of the Internet. If you haven’t already, read part one here.

As a refresher, here’s the story so far:

The ARPANET was a project started by the Defense Department’s Advanced Research Projects Agency in 1969 to network different mainframe computers together across the country. Later, it evolved into the Internet, connecting multiple global networks together using a common TCP/IP protocol.

By the late 1980s, investments from the National Science Foundation (NSF) had established an “Internet backbone” supporting hundreds of thousands of users worldwide. These users were mostly professors, researchers, and graduate students.

In the meantime, commercial online services like CompuServe were growing rapidly. These systems connected personal computer users, using dial-up modems, to a mainframe running proprietary software. Once online, people could read news articles and message other users. In 1989, CompuServe added the ability to send email to anyone on the Internet.

In 1965, Ted Nelson submitted a paper to the Association for Computing Machinery. He wrote: “Let me introduce the word ‘hypertext’ to mean a body of written or pictorial material interconnected in such a complex way that it could not conveniently be presented or represented on paper.” The paper was part of a grand vision he called Xanadu, after the poem by Samuel Coleridge.

A decade later, in his book “Dream Machines/Computer Lib,” he described Xanadu thusly: “To give you a screen in your home from which you can see into the world’s hypertext libraries.” He admitted that the world didn’t have any hypertext libraries yet, but that wasn’t the point. One day, maybe soon, it would. And he was going to dedicate his life to making it happen.

As the Internet grew, it became more and more difficult to find things on it. There were lots of cool documents like the Hitchhiker’s Guide To The Internet, but to read them, you first had to know where they were.

The community of helpful programmers on the Internet leapt to the challenge. Alan Emtage at McGill University in Montreal wrote a tool called Archie. It searched a list of public file transfer protocol (FTP) servers. You still had to know the file name you were looking for, but Archie would let you download it no matter what server it was on.

An improved search engine was Gopher, written by a team headed by Mark McCahill at the University of Minnesota. It used a text-based menu system so that users didn’t have to remember file names or locations. Gopher servers could display a customized collection of links inside nested menus, and they integrated with other services like Archie and Veronica to help users search for more resources.

Gopher is a text-based Internet search and retrieval system. It’s still running in 2025! Jeremy Reimer

A Gopher server could provide many of the things we take for granted today: search engines, personal pages that could contain links, and downloadable files. But this wasn’t enough for a British computer scientist who was working at CERN, an intergovernmental institute that operated the world’s largest particle physics lab.

The World Wide Web

Hypertext had come a long way since Ted Nelson had coined the word in 1965. Bill Atkinson, a member of the original Macintosh development team, released HyperCard in 1987. It used the Mac’s graphical interface to let anyone develop “stacks,” collections of text, graphics, and sounds that could be connected together with clickable links. There was no networking, but stacks could be shared with other users by sending the files on a floppy disk.

The home screen of HyperCard 1.0 for Macintosh. Jeremy Reimer

Hypertext was so big that conferences were held just to discuss it in 1987 and 1988. Even Ted Nelson had finally found a sponsor for his personal dream: Autodesk founder John Walker had agreed to spin up a subsidiary to create a commercial version of Xanadu.

It was in this environment that CERN fellow Tim Berners-Lee drew up his own proposal in March 1989 for a new hypertext environment. His goal was to make it easier for researchers at CERN to collaborate and share information about new projects.

The proposal (which he called “Mesh”) had several objectives. It would provide a system for connecting information about people, projects, documents, and hardware being developed at CERN. It would be decentralized and distributed over many computers. Not all the computers at CERN were the same—there were Digital Equipment minis running VMS, some Macintoshes, and an increasing number of Unix workstations. Each of them should be able to view the information in the same way.

As Berners-Lee described it, “There are few products which take Ted Nelson’s idea of a wide ‘docuverse’ literally by allowing links between nodes in different databases. In order to do this, some standardization would be necessary.”

The original proposal document for the web, written in Microsoft Word for Macintosh 4.0, downloaded from Tim Berners-Lee’s website. Credit: Jeremy Reimer

The document ended by describing the project as “practical” and estimating that it might take two people six to 12 months to complete. Berners-Lee’s manager called it “vague, but exciting.” Robert Cailliau, who had independently proposed a hypertext system for CERN, joined Berners-Lee to start designing the project.

The computer Berners-Lee used was a NeXT cube, from the company Steve Jobs started after he was kicked out of Apple. NeXT workstations were expensive, but they came with a software development environment that was years ahead of its time. If you could afford one, it was like a coding accelerator. John Carmack would later write DOOM on a NeXT.

The NeXT workstation that Tim Berners-Lee used to create the World Wide Web. Please do not power down the World Wide Web. Credit: Coolcaesar (CC BY-SA 3.0)

Berners-Lee called his application “WorldWideWeb.” The software consisted of a server, which delivered pages of text over a new protocol called “Hypertext Transfer Protocol,” or HTTP, and a browser that rendered the text. The browser translated markup code like “h1” to indicate a larger header font or “a” to indicate a link. There was also a graphical webpage editor, but it didn’t work very well and was abandoned.
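To get a sense of how simple that protocol is, here is a rough sketch in Python of fetching a page of text the way a browser does. It is a modern approximation, not Berners-Lee’s code: it uses the slightly later HTTP/1.0 request format rather than the bare one-line GET of the original 1990 protocol, and example.com is just a stand-in host.

```python
import socket

# A rough illustration: fetch a page over plain HTTP by opening a TCP
# connection, sending a GET request, and reading back the marked-up text
# that a browser would then render.
def fetch(host="example.com", path="/"):
    with socket.create_connection((host, 80)) as sock:
        request = f"GET {path} HTTP/1.0\r\nHost: {host}\r\n\r\n"
        sock.sendall(request.encode("ascii"))
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode(errors="replace")

if __name__ == "__main__":
    # Prints the status line, headers, and the start of the HTML markup.
    print(fetch()[:400])
```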

The very first website was published, running on the development NeXT cube, on December 20, 1990. Anyone who had a NeXT machine and access to the Internet could view the site in all its glory.

The original WorldWideWeb browser running on NeXTstep 3, browsing the world’s first webpage. Jeremy Reimer

Because NeXT only sold 50,000 computers in total, that intersection did not represent a lot of people. Eight months later, Berners-Lee posted a reply to a question about interesting projects on the alt.hypertext Usenet newsgroup. He described the World Wide Web project and included links to all the software and documentation.

That one post changed the world forever.

Mosaic

On December 9, 1991, President George H.W. Bush signed into law the High Performance Computing Act, also known as the Gore Bill. The bill paid for an upgrade of the NSFNET backbone, as well as a separate funding initiative for the National Center for Supercomputing Applications (NCSA).

NCSA, based out of the University of Illinois, became a dream location for computing research. “NCSA was heaven,” recalled Alex Totic, who was a student there. “They had all the toys, from Thinking Machines to Crays to Macs to beautiful networks. It was awesome.” As is often the case in academia, the professors came up with research ideas but assigned most of the actual work to their grad students.

One of those students was Marc Andreessen, who joined NCSA as a part-time programmer for $6.85 an hour. Andreessen was fascinated by the World Wide Web, especially browsers. A new browser for Unix computers, ViolaWWW, was making the rounds at NCSA. No longer confined to the NeXT workstation, the web had caught the attention of the Unix community. But that community was still too small for Andreessen.

“To use the Net, you had to understand Unix,” he said in an interview with Forbes. “And the current users had no interest in making it easier. In fact, there was a definite element of not wanting to make it easier, of actually wanting to keep the riffraff out.”

Andreessen enlisted the help of his colleague, programmer Eric Bina, and started developing a new web browser in December 1992. In a little over a month, they released version 0.5 of “NCSA X Mosaic”—so called because it was designed to work with Unix’s X Window System. Ports for the Macintosh and Windows followed shortly thereafter.

Being available on the most popular graphical computers changed the trajectory of the web. In just 18 months, millions of copies of Mosaic were downloaded, and the rate was accelerating. The riffraff was here to stay.

Netscape

The instant popularity of Mosaic caused the management at NCSA to take a deeper interest in the project. Jon Mittelhauser, who co-wrote the Windows version, recalled that the small team “suddenly found ourselves in meetings with forty people planning our next features, as opposed to the five of us making plans at 2 am over pizzas and Cokes.”

Andreessen was told to step aside and let more experienced managers take over. Instead, he left NCSA and moved to California, looking for his next opportunity. “I thought I had missed the whole thing,” Andreessen said. “The overwhelming mood in the Valley when I arrived was that the PC was done, and by the way, the Valley was probably done because there was nothing else to do.”

But his reputation had preceded him. Jim Clark, the founder of Silicon Graphics, was also looking to start something new. A friend had shown him a demo of Mosaic, and Clark reached out to meet with Andreessen.

At a meeting, Andreessen pitched the idea of building a “Mosaic killer.” He showed Clark a graph that showed web users doubling every five months. Excited by the possibilities, the two men founded Mosaic Communications Corporation on April 4, 1994. Andreessen quickly recruited programmers from his former team, and they got to work. They codenamed their new browser “Mozilla” since it was going to be a monster that would devour Mosaic. Beta versions were titled “Mosaic Netscape,” but the University of Illinois threatened to sue the new company. To avoid litigation, the name of the company and browser were changed to Netscape, and the programmers audited their code to ensure none of it had been copied from NCSA.

Netscape became the model for all Internet startups to follow. Programmers were given unlimited free sodas and encouraged to basically never leave the office. “Netscape Time” accelerated software development schedules, and because updates could be delivered over the Internet, old principles of quality assurance went out the window. And the business model? It was simply to “get big fast,” and profits could be figured out later.

Work proceeded quickly, and the 1.0 version of Netscape Navigator and the Netsite web server were released on December 15, 1994, for Windows, Macintosh, and Unix systems running the X Window System. The browser was priced at $39 for commercial users, but there was no charge for “academic and non-profit use, as well as for free evaluation purposes.”

Version 0.9 was called “Mosaic Netscape,” and the logo and company were still Mosaic. Jeremy Reimer

Netscape quickly became the standard. Within six months, it captured over 70 percent of the market share for web browsers. On August 9, 1995, only 16 months after the founding of the company, Netscape filed for an Initial Public Offering. A last-minute decision doubled the offering price to $28 per share, and on the first day of trading, the stock soared to $75 and closed at $58.25. The Web Era had officially arrived.

The web battles proprietary solutions

The excitement over a new way to transmit text and images to the public over phone lines wasn’t confined to the World Wide Web. Commercial online systems like CompuServe were also evolving to meet the graphical age. These companies released attractive new front-ends for their services that ran on DOS, Windows, and Macintosh computers. There were also new services that were graphics-only, like Prodigy, a joint venture between IBM and Sears, and an upstart that had sprung from the ashes of a Commodore 64 service called Quantum Link. This was America Online, or AOL.

Even Microsoft was getting into the act. Bill Gates believed that the “Information Superhighway” was the future of computing, and he wanted to make sure that all roads went through his company’s toll booth. The highly anticipated Windows 95 was scheduled to ship with a bundled dial-up online service called the Microsoft Network, or MSN.

At first, it wasn’t clear which of these online services would emerge as the winner. But people assumed that at least one of them would beat the complicated, nerdy Internet. CompuServe was the oldest, but AOL was nimbler and found success by sending out millions of free “starter” disks (and later, CDs) to potential customers. Microsoft was sure that bundling MSN with the upcoming Windows 95 would ensure victory.

Most of these services decided to hedge their bets by adding a sort of “side access” to the World Wide Web. After all, if they didn’t, their competitors would. At the same time, smaller companies (many of them former bulletin board services) started becoming Internet service providers. These smaller “ISPs” could charge less money than the big services because they didn’t have to create any content themselves. Thousands of new websites were appearing on the Internet every day, much faster than new sections could be added to AOL or CompuServe.

The tipping point happened very quickly. Before Windows 95 had even shipped, Bill Gates wrote his famous “Internet Tidal Wave” memo, where he assigned the Internet the “highest level of importance.” MSN was quickly changed to become more of a standard ISP and moved all of its content to the web. Microsoft rushed to release its own web browser, Internet Explorer, and bundled it with the Windows 95 Plus Pack.

The hype and momentum were entirely with the web now. It was the most exciting, most transformative technology of its time. The decade-long battle to control the Internet by forcing a shift to a new OSI standards model was forgotten. The web was all anyone cared about, and the web ran on TCP/IP.

The browser wars

Netscape had never expected to make a lot of money from its browser, as it was assumed that most people would continue to download new “evaluation” versions for free. Executives were pleasantly surprised when businesses started sending Netscape huge checks. The company went from $17 million in revenue in 1995 to $346 million the following year, and the press started calling Marc Andreessen “the new Bill Gates.”

The old Bill Gates wasn’t having any of that. Following his 1995 memo, Microsoft worked hard to improve Internet Explorer and made it available for free, including to business users. Netscape tried to fight back. It added groundbreaking new features like JavaScript, which was inspired by LISP but with a syntax similar to Java, the hot new programming language from Sun Microsystems. But it was hard to compete with free, and Netscape’s market share started to fall. By 1996, both browsers had reached version 3.0 and were roughly equal in terms of features. The battle continued, but when the Apache Group released its free web server, Netscape’s other source of revenue dried up as well. The writing was on the wall.

There was no better way to declare your allegiance to a web browser in 1996 than adding “Best Viewed In” above one of these icons. Credit: Jeremy Reimer

The dot-com boom

In 1989, the NSF lifted the restrictions on providing commercial access to the Internet, and by 1991, it had removed all barriers to commercial trade on the network. With the sudden ascent of the web, thanks to Mosaic, Netscape, and Internet Explorer, new companies jumped into this high-tech gold rush. But at first, it wasn’t clear what the best business strategy was. Users expected everything on the web to be free, so how could you make money?

Many early web companies started as hobby projects. In 1994, Jerry Yang and David Filo were electrical engineering PhD students at Stanford University. After Mosaic started popping off, they began collecting and trading links to new websites. Thus, “Jerry’s Guide to the World Wide Web” was born, running on Yang’s Sun workstation. Renamed Yahoo! (Yet Another Hierarchical Officious Oracle), the site exploded in popularity. Netscape put multiple links to Yahoo on its main navigation bar, which further accelerated growth. “We weren’t really sure if you could make a business out of it, though,” Yang told Fortune. Nevertheless, venture capital companies came calling. Sequoia, which had made millions investing in Apple, put in $1 million for 25 percent of Yahoo.

Yahoo.com as it would have appeared in 1995. Credit: Jeremy Reimer

Another hobby site, AuctionWeb, was started in 1995 by Pierre Omidyar. Running on his own home server using the regular $30 per month service from his ISP, the site let people buy and sell items of almost any kind. When traffic started growing, his ISP told him it was increasing his Internet fees to $250 per month, as befitting a commercial enterprise. Omidyar decided he would try to make it a real business, even though he didn’t have a merchant account for credit cards or even a way to enforce the new 5 percent or 2.5 percent royalty charges. That didn’t matter, as the checks started rolling in. He found a business partner, changed the name to eBay, and the rest was history.

AuctionWeb (later eBay) as it would have appeared in 1995. Credit: Jeremy Reimer

In 1993, Jeff Bezos, a senior vice president at a hedge fund company, was tasked with investigating business opportunities on the Internet. He decided to create a proof of concept for what he described as an “everything store.” He chose books as an ideal commodity to sell online, since a book in one store was identical to one in another, and a website could offer access to obscure titles that might not get stocked in physical bookstores.

He left the hedge fund company, gathered investors and software development talent, and moved to Seattle. There, he started Amazon. At first, the site wasn’t much more than an online version of an existing bookseller catalog called Books In Print. But over time, Bezos added inventory data from the two major book distributors, Ingram and Baker & Taylor. The promise of access to every book in the world was exciting for people, and the company grew quickly.

Amazon.com as it would have appeared in 1995. Credit: Jeremy Reimer

The explosive growth of these startups fueled a self-perpetuating cycle. As publications like Wired experimented with online versions of their magazines, they invented and sold banner ads to fund their websites. The best customers for these ads were other web startups. These companies wanted more traffic, and they knew ads on sites like Yahoo were the best way to get it. Yahoo salespeople could then turn around and point to their exponential ad sales curves, which caused Yahoo stock to rise. This encouraged people to fund more web startups, which would all need to advertise on Yahoo. These new startups also needed to buy servers from companies like Sun Microsystems, causing those stocks to rise as well.

The crash

In the latter half of the 1990s, it looked like everything was going great. The economy was booming, thanks in part to the rise of the World Wide Web and the huge boost it gave to computer hardware and software companies. The NASDAQ index of tech-focused stocks painted a clear picture of the boom.

The NASDAQ composite index in the 1990s. Credit: Jeremy Reimer

Federal Reserve chairman Alan Greenspan called this phenomenon “irrational exuberance” but didn’t seem to be in a hurry to stop it. The fact that most new web startups didn’t have a realistic business model didn’t seem to bother investors. Sure, Webvan might have been paying more to deliver groceries than it earned from customers, but look at that growth curve!

The exuberance couldn’t last forever. The NASDAQ peaked at just over 5,000 in March 2000 and started to go down. In one month, it lost 34 percent of its value, and by September 2001, it had fallen below 1,500. Web companies laid off employees or went out of business completely. The party was over.

Andreessen said that the tech crash scarred him. “The overwhelming message to our generation in the early nineties was ‘You’re dirty, you’re all about grunge—you guys are fucking losers!’ Then the tech boom hit, and it was ‘We are going to do amazing things!’ And then the roof caved in, and the wisdom was that the Internet was a mirage. I 100 percent believed that because the rejection was so personal—both what everybody thought of me and what I thought of myself.”

But while some companies quietly celebrated the end of the whole Internet thing, others would rise from the ashes of the dot-com collapse. That’s the subject of our third and final article.

Photo of Jeremy Reimer

I’m a writer and web developer. I specialize in the obscure and beautiful, like the Amiga and newLISP.

A history of the Internet, part 2: The high-tech gold rush begins Read More »


Protesters summon, burn Waymo robotaxis in Los Angeles after ICE raids

The robotaxi company Waymo has suspended service in some parts of Los Angeles after several of its vehicles were summoned and then vandalized by protesters angry over ongoing raids by US Immigration and Customs Enforcement. Five of Waymo’s autonomous Jaguar I-Pace electric vehicles were summoned downtown to the site of anti-ICE protests, where they had their tires slashed and were covered in spray-painted messages. Three were set on fire.

The Los Angeles Police Department warned people to avoid the area due to risks from toxic gases given off by burning EVs. And Waymo told Ars that it is “in touch with law enforcement” regarding the matter.

The protesters in Los Angeles were outraged after ICE, using brutal tactics, began detaining people in raids across the city. Thousands of Angelenos took to the streets over the weekend to confront the masked federal enforcers and, in some cases, forced them away.

In response, the Trump administration mobilized more than 300 National Guard soldiers without consulting the California governor or being asked by the state to do so.

California Governor Gavin Newsom has promised to sue the administration. “Donald Trump has created the conditions you see on your TV tonight. He’s exacerbated the conditions. He’s, you know, lit the proverbial match. He’s putting fuel on this fire, ever since he announced he was taking over the National Guard—an illegal act, an immoral act, an unconstitutional act,” Newsom said in an interview.

Waymo began offering rides in Los Angeles last November, and by January, the company said it had driven almost 2 million miles in the city. But there is some animosity toward robotaxis and food delivery robots, which are now being used by the Los Angeles Police Department as sources of surveillance footage. In April, the LAPD published footage obtained from a Waymo that it used to investigate a hit-and-run.

Protesters summon, burn Waymo robotaxis in Los Angeles after ICE raids Read More »


Warner Bros. Discovery makes still more changes, will split streaming, TV business

Warner Bros. Discovery will split its business into two publicly traded companies, with one focused on its streaming and studios business and the other on its television network businesses, including CNN and Discovery.

The US media giant said the move would unlock value for shareholders as well as create opportunities for both businesses, breaking up a group created just three years ago from the merger of Warner Media and Discovery.

Warner Bros. Discovery last year revealed its intent to split its business in two, a plan first reported by the Financial Times in July last year. The company intends to complete the split by the middle of next year.

The move comes on the heels of a similar move by rival Comcast, which last year announced plans to spin off its television networks, including CNBC and MSNBC, into a separate company.

US media giants are seeking to split their faster growing streaming businesses from their legacy television networks, which are facing the prospect of long-term decline as viewers turn away from traditional television.

Warner Bros. Discovery shares were more than 10 percent higher pre-market.

David Zaslav, chief executive of Warner Bros. Discovery, will head the streaming and studios arm, while chief financial officer Gunnar Wiedenfels will serve as president and chief executive of global networks. Both will continue in their present roles until the separation.

Zaslav said on Monday the split would result in a “sharper focus” and enhanced “strategic flexibility” that would leave each company better placed to compete in “today’s evolving media landscape.”

Warner Bros. Discovery Chair Samuel A. Di Piazza Jr. said the move would “enhance shareholder value.”

The streaming and studios arm will consist of Warner Bros. Television, Warner Bros. Motion Picture Group, DC Studios, HBO, and HBO Max, as well as their film and television libraries.

Global networks will include entertainment, sports, and news television brands around the world, including CNN, TNT Sports in the US, and Discovery.

© 2025 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.

Warner Bros. Discovery makes still more changes, will split streaming, TV business Read More »


A long-shot plan to mine the Moon comes a little closer to reality

The road ahead

Meyerson said the company’s current plan is to fly a prospecting mission in 2027, with a payload of less than 100 kg, likely on a commercial lander that is part of NASA’s Commercial Lunar Payload Services program. Two years later, the company aims to fly a pilot plant. Meyerson said the size of this plant will depend on the launch capability available (i.e., if Starship is flying to the Moon, they’ll go big; if not, smaller).

Following this, Interlune is targeting 2032 for the launch of a solar-powered operating plant, which would include five mobile harvesters. The operation would also be able to return mined material to Earth. The total mass for this equipment would be about 40 metric tons, which could fly on a single Starship or two New Glenn Mk 2 landers. This would, needless to say, be highly ambitious and capital-intensive. Meyerson said that Interlune, which raised $15 million last year, is planning a second fundraising round that should begin soon.

There are some outside factors that may be beneficial for Interlune. One is that China has a clear and demonstrated interest in sending humans to the Moon and has already sent rovers to explore for helium-3 resources. Moreover, with the exit of Jared Isaacman as a nominee to lead NASA, the Trump administration is likely to put someone in the position who is more focused on lunar activities. One candidate, a retired Air Force General named Steve Kwast, is a huge proponent of mining helium-3.

Interlune has a compelling story in that there are almost no other lunar businesses focused on purely commercial activities, ones that would derive value from mining the lunar surface. In that sense, the company could be a linchpin of a lunar economy. But it has a long way to go, and a lot of lunar regolith to plow through, before it starts delivering for customers.

A long-shot plan to mine the Moon comes a little closer to reality Read More »


New adventures await the crew in Strange New Worlds S3 trailer

Star Trek: Strange New Worlds returns for a third season next month.

Apart from a short teaser in April, we haven’t seen much of Star Trek: Strange New Worlds‘ upcoming third season, debuting next month. But Paramount+ has finally released the official trailer.

(Spoilers for S2 below.)

As previously reported, the S2 finale found the Enterprise under vicious attack by the Gorn, who were in the midst of invading one of the Federation’s colony worlds. Several crew members were kidnapped, along with other survivors of the attack. Captain Pike (Anson Mount) faced a momentous decision: follow orders to retreat or disobey them to rescue his crew. Footage shown last October at New York City Comic-Con picked up where the finale left off, giving us the kind of harrowing high-stakes pitched space battle against a ferocious enemy that has long been a hallmark of the franchise. (Of course, Pike opted to rescue his crew.)

Per the official synopsis:

In Season 3, when we reconnect with the crew of the U.S.S. Enterprise, still under the command of Captain Pike, they face the conclusion of Season 2’s harrowing encounter with the Gorn. But new life and civilizations await, including a villain that will test our characters’ grit and resolve. An exciting twist on classic Star Trek, Season 3 takes characters both new and beloved to new heights, and dives into thrilling adventures of faith, duty, romance, comedy, and mystery, with varying genres never before seen on any other Star Trek.

In addition to the returning main and recurring cast members, Cillian O’Sullivan joins the recurring cast as Dr. Roger Korby, a legacy character (originally played by Michael Strong). Korby was a renowned archaeologist in the field of medical archaeology and Nurse Chapel’s long-missing fiancé. His reappearance in S3 is bound to cause problems for SNW‘s Nurse Chapel (Jess Bush), who is romantically involved with Spock (Ethan Peck). Rhys Darby and Patton Oswalt will also guest star.

New adventures await the crew in Strange New Worlds S3 trailer Read More »


What to expect from Apple’s Worldwide Developers Conference next week


i wwdc what you did there

We expect to see new designs, new branding, and more at Apple’s WWDC 2025.

Apple’s Worldwide Developers Conference kicks off on Monday with the company’s standard keynote presentation—a combination of PR about how great Apple and its existing products are and a first look at the next-generation versions of iOS, iPadOS, macOS, and the company’s other operating systems.

Reporting before the keynote rarely captures everything that Apple has planned at its presentations, but the reliable information we’ve seen so far is that Apple will keep the focus on its software this year rather than using the keynote to demo splashy new hardware like the Vision Pro and Apple Silicon Mac Pro, which the company introduced at WWDC a couple years back.

If you haven’t been keeping track, here are a few of the things that are most likely to happen when the pre-recorded announcement videos start rolling next week.

Redesign time

Reliable reports from Bloomberg’s Mark Gurman have been saying for months that Apple’s operating systems are getting a design overhaul at WWDC.

The company apparently plans to use the design of the Vision Pro’s visionOS software as a jumping-off point for the new designs, introducing more transparency and UI elements that appear to be floating on the surface of your screen. Apple’s overarching goal, according to Gurman, is to “simplify the way users navigate and control their devices” by “updating the style of icons, menus, apps, windows and system buttons.”

Apple’s airy, floaty visionOS will apparently serve as the inspiration for its next-generation software design. Credit: Apple

Any good software redesign needs to walk a tightrope between freshening up an old look and solving old problems without changing people’s devices so much that they become unrecognizable and unfamiliar. The number of people who have complained to me about the iOS 18-era redesign of the Photos app suggests that Apple doesn’t always strike the right balance. But a new look can also generate excitement and encourage upgrades more readily than some of the low-profile or under-the-hood improvements that these updates normally focus on.

The redesigned UI should be released simultaneously for iOS, iPadOS, and macOS. The Mac last received a significant facelift back in 2020 with macOS 11 Big Sur, though this was overshadowed at the time by the much more significant shift from Intel’s chips to Apple Silicon. The current iOS and iPadOS design has its roots in 2013’s iOS 7, though with over a decade’s worth of gradual evolution on top.

An OS by any other name

With the new design will apparently come a new naming scheme, shifting from the current version numbers to new numbers based on the year. So we allegedly won’t be seeing iOS 19, macOS 16, watchOS 12, or visionOS 3—instead, we’ll get iOS 26, macOS 26, watchOS 26, and visionOS 26.

The new numbers might be a little confusing at first, especially for the period of overlap where Apple is actively supporting (say) macOS 14, macOS 15, and macOS 26. But in the long run, the consistency should make it easier to tell roughly how old your software is and will also make it easier to tell whether your device is running current software without having to remember the number for each of your individual devices.

It also unifies the approach to any new operating system variants Apple might announce. tvOS started at version 9 and iPadOS at version 13, for example, because they were linked to the then-current iOS release, while visionOS and watchOS both started over from 1.0, and the macOS version number only dates back to the year Apple arbitrarily decided to end the 20-year-old “Mac OS X” branding and jump up to 11.

Note that those numbers will use the upcoming year rather than the current year—iOS 26 will be Apple’s latest and greatest OS for about three months in 2025, assuming the normal September-ish launch, but it will be the main OS for nine months in 2026. Apple usually also waits until later in the fall or winter to start forcing people onto the new OS, issuing at least a handful of security-only updates for the outgoing OS for people who don’t want to be guinea pigs for a possibly buggy new release.

Seriously, don’t get your hopes up about hardware

Apple showed off Vision Pro at WWDC in 2023, but we’re not expecting to see much hardware this year. Credit: Samuel Axon

Gurman has reported that Apple had “no major new devices ready to ship” this year.

Apple generally concentrates its hardware launches in the spring and fall, with quieter and lower-profile launches in the spring and bigger launches in the fall, anchored by the tentpole that is the iPhone. But WWDC has occasionally been a launching point for new Macs (because Macs are the only systems that run Xcode, Apple’s development environment) and sometimes for brand-new platforms (because getting developers on board early is one way to increase a new platform’s chances of success). Still, the best available information suggests that neither of those things is happening this time around.

There are possibilities, though. Apple has apparently been at work behind the scenes on expanding its smart home footprint, and the eternally neglected Mac Pro is still using an M2 Ultra when an M3 Ultra already exists. But especially with a new redesign to play up, we’d expect Apple to keep the spotlight on its software this time around.

The fate of Intel Macs

It’s been five years since Apple started moving from Intel’s chips to its own custom silicon in Macs and two years since Apple sold its last Intel Macs. And since the very start of the transition, Apple has resisted providing a firm answer to the question of when Intel Macs will stop getting new macOS updates.

Our analysis of years of support data suggests two likely possibilities: that Apple releases one more new version of macOS for Intel Macs before shifting to a couple years of security-only updates or that Apple pulls the plug and shifts to security-only updates this year.

Rumors suggest that current betas still run on the last couple rounds of Intel Macs, dropping support for some older or slower models introduced between 2018 and 2020. If that’s true, there’s a pretty good chance it’s the last new macOS version to officially support Intel CPUs. Regardless, we’ll know more when the first betas drop after the keynote.

Even if the new version of macOS supports some Intel Macs, expect the list of features that require Apple Silicon to keep getting longer.

iPad multitasking? Again?

The perennial complaint about high-end iPads is that the hardware is a lot more capable than the software allows it to be. And every couple of years, Apple takes another crack at making the iPad a viable laptop replacement by improving the state of multitasking on the platform. This will allegedly be another one of those years.

We don’t know much about what form these multitasking improvements will take—whether they’re a further refinement of existing features like Stage Manager or something entirely new. The changes have been described as “more like macOS,” but that could mean pretty much anything.

Playing games

People play plenty of games on Apple’s devices, but they still aren’t really a “destination” for gaming in the same way that a dedicated console or Windows PC is. The company is apparently hoping to change that with a new unified app for games. Like Valve’s Steam, the app will reportedly serve as a storefront, launcher, and achievement tracker, and will also facilitate communication between friends playing the same game.

Apple took a similar stab at this idea in the early days of the iPhone with Game Center, which still exists as a service in the background on modern Apple devices but was discontinued as a standalone app quite a few years ago.

Apple has been trying for a few years now to make its operating systems more hospitable to gaming, especially in macOS. The company has added a low-latency Game Mode to macOS and comprehensive support for modern wireless gamepads from Microsoft, Sony, and Nintendo. The company’s Game Porting Toolkit stops short of being a consumer-friendly way to run Windows games on macOS, but it does give developers of Windows games an easier on-ramp for testing and porting their games to Apple’s platforms. We’ll see whether a unified app can help any of these other gaming features gel into something that feels cohesive.

Going home

A smart speaker about the size of a mason jar.

Might we see a more prominent, marketable name for what Apple currently calls the “HomePod Software”? Credit: Jeff Dunn

One of Apple’s long-simmering behind-the-scenes hardware projects is apparently a new kind of smart home device that weds the HomePod’s current capabilities with a vaguely Apple TV-like touchscreen interface. In theory, this device would compete with the likes of Amazon’s Echo Show devices.

Part of those plans involves a “new” operating system to replace what is known to the public as “HomePod Software” (and internally as audioOS). This so-called “homeOS” has been rumored for a while, and circumstantial evidence points to possible pre-WWDC trademark activity around the name. Like the current HomePod software—and just about every other OS Apple maintains—homeOS would likely be a specialized offshoot of iOS. But even if it doesn’t arrive with new hardware right away, new branding could signal that Apple is getting ready to expand its smart home ambitions.

What about AI?

Finally, it wouldn’t be a mid-2020s tech keynote without some kind of pronouncement about AI. Last year’s WWDC was the big public unveiling of Apple Intelligence, and (nearly) every one of Apple’s product announcements since then has made a point of highlighting the hardware’s AI capabilities.

We’d definitely expect Apple to devote some time to Apple Intelligence, but the company may be more hesitant to announce big new features in advance, following a news cycle where even normally sympathetic Apple boosters like Daring Fireball’s John Gruber excoriated the company for promising AI features that it was nowhere near ready to launch—or even to demo to the public. The executives handling Apple’s AI efforts were reshuffled following that news cycle; whether it was due to Gruber’s piece or the underlying problems outlined in the article is anyone’s guess.

Apple will probably try to find a middle road, torn between not wanting to overpromise and underdeliver and not wanting to seem “behind” on the tech industry’s biggest craze. There’s a decent chance that the new “more personalized” version of Siri will finally make a public appearance. But I’d guess that Apple will focus more on iterating existing Apple Intelligence features like summaries and Writing Tools than on big swings.


Andrew is a Senior Technology Reporter at Ars Technica, with a focus on consumer tech including computer hardware and in-depth reviews of operating systems like Windows and macOS. Andrew lives in Philadelphia and co-hosts a weekly book podcast called Overdue.

What to expect from Apple’s Worldwide Developers Conference next week Read More »

the-nine-armed-octopus-and-the-oddities-of-the-cephalopod-nervous-system

The nine-armed octopus and the oddities of the cephalopod nervous system


A mix of autonomous and top-down control manages the octopus’s limbs.

With their quick-change camouflage and high level of intelligence, octopuses fascinate the public and scientific experts alike. Their abilities to recognize faces, solve puzzles, and learn behaviors from other octopuses make these animals a captivating study.

To perform these processes and others, like crawling or exploring, octopuses rely on their complex nervous system, one that has become a focus for neuroscientists. With about 500 million neurons—around the same number as a dog—octopuses’ nervous systems are the most complex of any invertebrate. But unlike that of vertebrates, the octopus’s nervous system is also decentralized, with around 350 million neurons, or 66 percent of the total, located in its eight arms.

“This means each arm is capable of independently processing sensory input, initiating movement, and even executing complex behaviors—without direct instructions from the brain,” explains Galit Pelled, a professor of Mechanical Engineering, Radiology, and Neuroscience at Michigan State University who studies octopus neuroscience. “In essence, the arms have their own ‘mini-brains.’”

A decentralized nervous system is one factor that helps octopuses adapt to changes, such as injury or predation, as seen in the case of an Octopus vulgaris, or common octopus, that was observed with nine arms by researchers at the ECOBAR lab at the Institute of Marine Research in Spain between 2021 and 2022.

By studying outliers like this cephalopod, researchers can gain insight into how the animal’s detailed scaffolding of nerves changes and regrows over time, uncovering more about how octopuses have evolved over millennia in our oceans.

Brains, brains, and more brains

Because each arm of an octopus contains its own bundle of neurons, the limbs can operate semi-independently from the central brain, enabling faster responses since signals don’t always need to travel back and forth between the brain and the arms. In fact, Pelled and her team recently discovered that “neural signals recorded in the octopus arm can predict movement type within 100 milliseconds of stimulation, without central brain involvement.” She notes that “that level of localized autonomy is unprecedented in vertebrate systems.”

Though each limb moves on its own, the movements of the octopus’s body are smooth and coordinated, an elegance that allows the animal to exhibit a remarkably broad range of behaviors and adapt on the fly to changes in its surroundings.

“That means the octopus can react quickly to its environment, especially when exploring, hunting, or defending itself,” Pelled says. “For example, one arm can grab food while another is feeling around a rock, without needing permission from the brain. This setup also makes the octopus more resilient. If one arm is injured, the others still work just fine. And because so much decision-making happens at the arms, the central brain is freed up to focus on the bigger picture—like navigating or learning new tasks.”

As if each limb weren’t already buzzing with neural activity, things get even more intricate when researchers zoom in further—to the nerves within each individual sucker, a ring of muscular tissue that octopuses use to sense and taste their surroundings.

“There is a sucker ganglion, or nerve center, located in the stalk of every sucker. For some species of octopuses, that’s over a thousand ganglia,” says Cassady Olson, a graduate student at the University of Chicago who works with Cliff Ragsdale, a leading expert in octopus neuroscience.

Given that each sucker has its own nerve center—connected by a long axial nerve cord running down the limb—and each arm has hundreds of suckers, things get complicated very quickly. Researchers have historically struggled to study this peripheral nervous system, as it’s called, within the octopus’s body.

“The large size of the brain makes it both really exciting to study and really challenging,” says Z. Yan Wang, an assistant professor of biology and psychology at the University of Washington. “Many of the tools available for neuroscience have to be adjusted or customized specifically for octopuses and other cephalopods because of their unique body plans.”

While each limb acts independently, signals are transmitted back to the octopus’s central nervous system. The octopus’s brain sits between its eyes at the front of its mantle, or head, couched between its two optic lobes, large bean-shaped neural organs that help octopuses see the world around them. These optic lobes are just two of the over 30 lobes experts study within the animal’s centralized brain, as each lobe helps the octopus process its environment.

This elaborate neural architecture is critical given the octopus’s dual role in the ecosystem as both predator and prey. Without natural defenses like a hard shell, octopuses have evolved a highly adaptable nervous system that allows them to rapidly process information and adjust as needed, helping their chances of survival.

Some similarities remain

While the octopus’s decentralized nervous system makes it a unique evolutionary example, it does have some structures similar or analogous to those in the human nervous system.

“The octopus has a central brain mass located between its eyes, and an axial nerve cord running down each arm (similar to a spinal cord),” says Wang. “The octopus has many sensory systems that we are familiar with, such as vision, touch (somatosensation), chemosensation, and gravity sensing.”

Neuroscientists have homed in on these similarities to understand how such structures may have evolved across different branches of the tree of life. Because the most recent common ancestor of humans and octopuses lived around 750 million years ago, experts believe that many of these similarities, from camera-like eyes to maps of neural activity, evolved separately in a process known as convergent evolution.

While these similarities shed light on evolution’s independent paths, they also offer valuable insights for fields like soft robotics and regenerative medicine.

Occasionally, unique individuals—like an octopus with an unexpected number of limbs—can provide even deeper clues into how this remarkable nervous system functions and adapts.

Nine arms, no problem

In 2021, researchers from the Institute of Marine Research in Spain used an underwater camera to follow a male Octopus vulgaris, or common octopus. On its left side, three arms were intact, while the others were reduced to uneven, stumpy lengths, sharply bitten off at varying points. Although the researchers didn’t witness the injury itself, they observed that the front right arm—known as R1—was regenerating unusually, splitting into two separate limbs and giving the octopus a total of nine arms.

“In this individual, we believe this condition was a result of abnormal regeneration [a genetic mutation] after an encounter with a predator,” explains Sam Soule, one of the researchers and the first author on the corresponding paper recently published in Animals.

The researchers named the octopus Salvador because its bifurcated arm coiled up on itself like the two upturned ends of Salvador Dalí’s moustache. For two years, the team studied the cephalopod’s behavior and found that it used its bifurcated arm less for “riskier” movements such as exploring or grabbing food, which would force the animal to stretch the arm out and expose it to further injury.

“One of the conclusions of our research is that the octopus likely retains a long-term memory of the original injury, as it tends to use the bifurcated arms for less risky tasks compared to the others,” elaborates Jorge Hernández Urcera, a lead author of the study. “This idea of lasting memory brought to mind Dalí’s famous painting The Persistence of Memory, which ultimately became the title of the paper we published on monitoring this particular octopus.”

While the octopus acted more protective of its extra limb, its nervous system had adapted to the appendage: after some time recovering from its injuries, the octopus was observed using its ninth arm to probe its environment.

“That nine-armed octopus is a perfect example of just how adaptable these animals are,” Pelled adds. “Most animals would struggle with an unusual body part, but not the octopus. In this case, the octopus had a bifurcated (split) arm and still used it effectively, just like any other arm. That tells us the nervous system didn’t treat it as a mistake—it figured out how to make it work.”

Kenna Hughes-Castleberry is the science communicator at JILA (a joint physics research institute between the National Institute of Standards and Technology and the University of Colorado Boulder) and a freelance science journalist. Her main writing focuses are quantum physics, quantum technology, deep technology, social media, and the diversity of people in these fields, particularly women and people from minority ethnic and racial groups. Follow her on LinkedIn or visit her website.

The nine-armed octopus and the oddities of the cephalopod nervous system Read More »