Author name: Mike M.


Ocean acidification crosses “planetary boundaries”

A critical measure of the ocean’s health suggests that the world’s marine systems are in greater peril than scientists had previously realized and that parts of the ocean have already reached dangerous tipping points.

A study, published Monday in the journal Global Change Biology, found that ocean acidification—the process in which the world’s oceans absorb excess carbon dioxide from the atmosphere, becoming more acidic—crossed a “planetary boundary” five years ago.

“A lot of people think it’s not so bad,” said Nina Bednaršek, one of the study’s authors and a senior researcher at Oregon State University. “But what we’re showing is that all of the changes that were projected, and even more so, are already happening—in all corners of the world, from the most pristine to the little corner you care about. We have not changed just one bay, we have changed the whole ocean on a global level.”

The new study, also authored by researchers at the UK’s Plymouth Marine Laboratory and the National Oceanic and Atmospheric Administration (NOAA), finds that by 2020 the world’s oceans were already very close to the “danger zone” for ocean acidity, and in some regions had already crossed into it.

Scientists had determined that ocean acidification enters this danger zone, or crosses this planetary boundary, when the amount of calcium carbonate—which allows marine organisms to develop shells—falls more than 20 percent below pre-industrial levels. The new report puts the global decline at about 17 percent.

“Ocean acidification isn’t just an environmental crisis, it’s a ticking time bomb for marine ecosystems and coastal economies,” said Steve Widdicombe, director of science at the Plymouth lab, in a press release. “As our seas increase in acidity, we’re witnessing the loss of critical habitats that countless marine species depend on and this, in turn, has major societal and economic implications.”

Scientists have determined that there are nine planetary boundaries that, once breached, put humans’ ability to live and thrive at risk. One of these is climate change itself, which scientists have said is already beyond humanity’s “safe operating space” because of the continued emissions of heat-trapping gases. Another is ocean acidification, also caused by burning fossil fuels.


A history of the Internet, part 2: The high-tech gold rush begins


The Web Era arrives, the browser wars flare, and a bubble bursts.

Welcome to the second article in our three-part series on the history of the Internet. If you haven’t already, read part one here.

As a refresher, here’s the story so far:

The ARPANET was a project started by the Defense Department’s Advanced Research Projects Agency in 1969 to network different mainframe computers together across the country. Later, it evolved into the Internet, connecting multiple global networks together using a common TCP/IP protocol.

By the late 1980s, investments from the National Science Foundation (NSF) had established an “Internet backbone” supporting hundreds of thousands of users worldwide. These users were mostly professors, researchers, and graduate students.

In the meantime, commercial online services like CompuServe were growing rapidly. These systems connected personal computer users, using dial-up modems, to a mainframe running proprietary software. Once online, people could read news articles and message other users. In 1989, CompuServe added the ability to send email to anyone on the Internet.

In 1965, Ted Nelson submitted a paper to the Association for Computing Machinery. He wrote: “Let me introduce the word ‘hypertext’ to mean a body of written or pictorial material interconnected in such a complex way that it could not conveniently be presented or represented on paper.” The paper was part of a grand vision he called Xanadu, after the poem by Samuel Taylor Coleridge.

A decade later, in his book “Dream Machines/Computer Lib,” he described Xanadu thusly: “To give you a screen in your home from which you can see into the world’s hypertext libraries.” He admitted that the world didn’t have any hypertext libraries yet, but that wasn’t the point. One day, maybe soon, it would. And he was going to dedicate his life to making it happen.

As the Internet grew, it became more and more difficult to find things on it. There were lots of cool documents like the Hitchhiker’s Guide To The Internet, but to read them, you first had to know where they were.

The community of helpful programmers on the Internet leapt to the challenge. Alan Emtage at McGill University in Montreal wrote a tool called Archie. It searched a list of public file transfer protocol (FTP) servers. You still had to know the file name you were looking for, but Archie would let you download it no matter what server it was on.

An easier way to find and retrieve documents was Gopher, written by a team headed by Mark McCahill at the University of Minnesota. It used a text-based menu system so that users didn’t have to remember file names or locations. Gopher servers could display a customized collection of links inside nested menus, and they integrated with other services like Archie and Veronica to help users search for more resources.

Gopher is a text-based Internet search and retrieval system. It’s still running in 2025! Credit: Jeremy Reimer

A Gopher server could provide many of the things we take for granted today: search engines, personal pages that could contain links, and downloadable files. But this wasn’t enough for a British computer scientist who was working at CERN, an intergovernmental institute that operated the world’s largest particle physics lab.

The World Wide Web

Hypertext had come a long way since Ted Nelson had coined the word in 1965. Bill Atkinson, a member of the original Macintosh development team, released HyperCard in 1987. It used the Mac’s graphical interface to let anyone develop “stacks,” collections of text, graphics, and sounds that could be connected together with clickable links. There was no networking, but stacks could be shared with other users by sending the files on a floppy disk.

The home screen of HyperCard 1.0 for Macintosh. Credit: Jeremy Reimer

Hypertext was so big that conferences were held just to discuss it in 1987 and 1988. Even Ted Nelson had finally found a sponsor for his personal dream: Autodesk founder John Walker had agreed to spin up a subsidiary to create a commercial version of Xanadu.

It was in this environment that CERN fellow Tim Berners-Lee drew up his own proposal in March 1989 for a new hypertext environment. His goal was to make it easier for researchers at CERN to collaborate and share information about new projects.

The proposal (which he called “Mesh”) had several objectives. It would provide a system for connecting information about people, projects, documents, and hardware being developed at CERN. It would be decentralized and distributed over many computers. Not all the computers at CERN were the same—there were Digital Equipment minis running VMS, some Macintoshes, and an increasing number of Unix workstations. Each of them should be able to view the information in the same way.

As Berners-Lee described it, “There are few products which take Ted Nelson’s idea of a wide ‘docuverse’ literally by allowing links between nodes in different databases. In order to do this, some standardization would be necessary.”

The original proposal document for the web, written in Microsoft Word for Macintosh 4.0, downloaded from Tim Berners-Lee’s website. Credit: Jeremy Reimer

The document ended by describing the project as “practical” and estimating that it might take two people six to 12 months to complete. Berners-Lee’s manager called it “vague, but exciting.” Robert Cailliau, who had independently proposed a hypertext system for CERN, joined Berners-Lee to start designing the project.

The computer Berners-Lee used was a NeXT cube, from the company Steve Jobs started after he was kicked out of Apple. NeXT workstations were expensive, but they came with a software development environment that was years ahead of its time. If you could afford one, it was like a coding accelerator. John Carmack would later write DOOM on a NeXT.

The NeXT workstation that Tim Berners-Lee used to create the World Wide Web. Please do not power down the World Wide Web. Credit: Coolcaesar (CC BY-SA 3.0)

Berners-Lee called his application “WorldWideWeb.” The software consisted of a server, which delivered pages of text over a new protocol called the “Hypertext Transfer Protocol,” or HTTP, and a browser that rendered the text. The browser translated markup code like “h1” to indicate a larger header font or “a” to indicate a link. There was also a graphical webpage editor, but it didn’t work very well and was abandoned.

The very first website was published, running on the development NeXT cube, on December 20, 1990. Anyone who had a NeXT machine and access to the Internet could view the site in all its glory.

The original WorldWideWeb browser running on NeXTstep 3, browsing the world’s first webpage. Credit: Jeremy Reimer

Because NeXT only sold 50,000 computers in total, that intersection did not represent a lot of people. Eight months later, Berners-Lee posted a reply to a question about interesting projects on the alt.hypertext Usenet newsgroup. He described the World Wide Web project and included links to all the software and documentation.

That one post changed the world forever.

Mosaic

On December 9, 1991, President George H.W. Bush signed into law the High Performance Computing Act, also known as the Gore Bill. The bill paid for an upgrade of the NSFNET backbone, as well as a separate funding initiative for the National Center for Supercomputing Applications (NCSA).

NCSA, based out of the University of Illinois, became a dream location for computing research. “NCSA was heaven,” recalled Alex Totic, who was a student there. “They had all the toys, from Thinking Machines to Crays to Macs to beautiful networks. It was awesome.” As is often the case in academia, the professors came up with research ideas but assigned most of the actual work to their grad students.

One of those students was Marc Andreessen, who joined NCSA as a part-time programmer for $6.85 an hour. Andreessen was fascinated by the World Wide Web, especially browsers. A new browser for Unix computers, ViolaWWW, was making the rounds at NCSA. No longer confined to the NeXT workstation, the web had caught the attention of the Unix community. But that community was still too small for Andreessen.

“To use the Net, you had to understand Unix,” he said in an interview with Forbes. “And the current users had no interest in making it easier. In fact, there was a definite element of not wanting to make it easier, of actually wanting to keep the riffraff out.”

Andreessen enlisted the help of his colleague, programmer Eric Bina, and started developing a new web browser in December 1992. In a little over a month, they released version 0.5 of “NCSA X Mosaic”—so called because it was designed to work with Unix’s X Window System. Ports for the Macintosh and Windows followed shortly thereafter.

Being available on the most popular graphical computers changed the trajectory of the web. In just 18 months, millions of copies of Mosaic were downloaded, and the rate was accelerating. The riffraff was here to stay.

Netscape

The instant popularity of Mosaic caused the management at NCSA to take a deeper interest in the project. Jon Mittelhauser, who co-wrote the Windows version, recalled that the small team “suddenly found ourselves in meetings with forty people planning our next features, as opposed to the five of us making plans at 2 am over pizzas and Cokes.”

Andreessen was told to step aside and let more experienced managers take over. Instead, he left NCSA and moved to California, looking for his next opportunity. “I thought I had missed the whole thing,” Andreessen said. “The overwhelming mood in the Valley when I arrived was that the PC was done, and by the way, the Valley was probably done because there was nothing else to do.”

But his reputation had preceded him. Jim Clark, the founder of Silicon Graphics, was also looking to start something new. A friend had shown him a demo of Mosaic, and Clark reached out to meet with Andreessen.

At a meeting, Andreessen pitched the idea of building a “Mosaic killer.” He showed Clark a graph that showed web users doubling every five months. Excited by the possibilities, the two men founded Mosaic Communications Corporation on April 4, 1994. Andreessen quickly recruited programmers from his former team, and they got to work. They codenamed their new browser “Mozilla” since it was going to be a monster that would devour Mosaic. Beta versions were titled “Mosaic Netscape,” but the University of Illinois threatened to sue the new company. To avoid litigation, the name of the company and browser were changed to Netscape, and the programmers audited their code to ensure none of it had been copied from NCSA.

Netscape became the model for all Internet startups to follow. Programmers were given unlimited free sodas and encouraged to basically never leave the office. “Netscape Time” accelerated software development schedules, and because updates could be delivered over the Internet, old principles of quality assurance went out the window. And the business model? It was simply to “get big fast,” and profits could be figured out later.

Work proceeded quickly, and the 1.0 version of Netscape Navigator and the Netsite web server were released on December 15, 1994, for Windows, Macintosh, and Unix systems running the X Window System. The browser was priced at $39 for commercial users, but there was no charge for “academic and non-profit use, as well as for free evaluation purposes.”

Version 0.9 was called “Mosaic Netscape,” and the logo and company name were still Mosaic. Credit: Jeremy Reimer

Netscape quickly became the standard. Within six months, it captured over 70 percent of the market share for web browsers. On August 9, 1995, only 16 months after the founding of the company, Netscape held its initial public offering. A last-minute decision doubled the offering price to $28 per share, and on the first day of trading, the stock soared to $75 and closed at $58.25. The Web Era had officially arrived.

The web battles proprietary solutions

The excitement over a new way to transmit text and images to the public over phone lines wasn’t confined to the World Wide Web. Commercial online systems like CompuServe were also evolving to meet the graphical age. These companies released attractive new front-ends for their services that ran on DOS, Windows, and Macintosh computers. There were also new services that were graphical from the start, like Prodigy, a joint venture between IBM and Sears, and an upstart that had sprung from the ashes of a Commodore 64 service called Quantum Link. This was America Online, or AOL.

Even Microsoft was getting into the act. Bill Gates believed that the “Information Superhighway” was the future of computing, and he wanted to make sure that all roads went through his company’s toll booth. The highly anticipated Windows 95 was scheduled to ship with a bundled dial-up online service called the Microsoft Network, or MSN.

At first, it wasn’t clear which of these online services would emerge as the winner. But people assumed that at least one of them would beat the complicated, nerdy Internet. CompuServe was the oldest, but AOL was nimbler and found success by sending out millions of free “starter” disks (and later, CDs) to potential customers. Microsoft was sure that bundling MSN with the upcoming Windows 95 would ensure victory.

Most of these services decided to hedge their bets by adding a sort of “side access” to the World Wide Web. After all, if they didn’t, their competitors would. At the same time, smaller companies (many of them former bulletin board services) started becoming Internet service providers. These smaller “ISPs” could charge less money than the big services because they didn’t have to create any content themselves. Thousands of new websites were appearing on the Internet every day, much faster than new sections could be added to AOL or CompuServe.

The tipping point happened very quickly. Before Windows 95 had even shipped, Bill Gates wrote his famous “Internet Tidal Wave” memo, where he assigned the Internet the “highest level of importance.” MSN was quickly changed to become more of a standard ISP and moved all of its content to the web. Microsoft rushed to release its own web browser, Internet Explorer, and bundled it with the Windows 95 Plus Pack.

The hype and momentum were entirely with the web now. It was the most exciting, most transformative technology of its time. The decade-long battle to control the Internet by forcing a shift to a new OSI standards model was forgotten. The web was all anyone cared about, and the web ran on TCP/IP.

The browser wars

Netscape had never expected to make a lot of money from its browser, as it was assumed that most people would continue to download new “evaluation” versions for free. Executives were pleasantly surprised when businesses started sending Netscape huge checks. The company went from $17 million in revenue in 1995 to $346 million the following year, and the press started calling Marc Andreessen “the new Bill Gates.”

The old Bill Gates wasn’t having any of that. Following his 1995 memo, Microsoft worked hard to improve Internet Explorer and made it available for free, including to business users. Netscape tried to fight back. It added groundbreaking new features like JavaScript, which was inspired by LISP but with a syntax similar to Java, the hot new programming language from Sun Microsystems. But it was hard to compete with free, and Netscape’s market share started to fall. By 1996, both browsers had reached version 3.0 and were roughly equal in terms of features. The battle continued, but when the free, open source Apache web server took off, Netscape’s other source of revenue dried up as well. The writing was on the wall.

There was no better way to declare your allegiance to a web browser in 1996 than adding “Best Viewed In” above one of these icons. Credit: Jeremy Reimer

The dot-com boom

In 1989, the NSF lifted the restrictions on providing commercial access to the Internet, and by 1991, it had removed all barriers to commercial trade on the network. With the sudden ascent of the web, thanks to Mosaic, Netscape, and Internet Explorer, new companies jumped into this high-tech gold rush. But at first, it wasn’t clear what the best business strategy was. Users expected everything on the web to be free, so how could you make money?

Many early web companies started as hobby projects. In 1994, Jerry Yang and David Filo were electrical engineering PhD students at Stanford University. After Mosaic started popping off, they began collecting and trading links to new websites. Thus, “Jerry’s Guide to the World Wide Web” was born, running on Yang’s Sun workstation. Renamed Yahoo! (Yet Another Hierarchical Officious Oracle), the site exploded in popularity. Netscape put multiple links to Yahoo on its main navigation bar, which further accelerated growth. “We weren’t really sure if you could make a business out of it, though,” Yang told Fortune. Nevertheless, venture capital companies came calling. Sequoia, which had made millions investing in Apple, put in $1 million for 25 percent of Yahoo.

Yahoo.com as it would have appeared in 1995. Credit: Jeremy Reimer

Another hobby site, AuctionWeb, was started in 1995 by Pierre Omidyar. Running on his own home server using the regular $30 per month service from his ISP, the site let people buy and sell items of almost any kind. When traffic started growing, his ISP told him it was increasing his Internet fees to $250 per month, as befitting a commercial enterprise. Omidyar decided he would try to make it a real business, even though he didn’t have a merchant account for credit cards or even a way to enforce the new 5 percent or 2.5 percent royalty charges. That didn’t matter, as the checks started rolling in. He found a business partner, changed the name to eBay, and the rest was history.

AuctionWeb (later eBay) as it would have appeared in 1995. Credit: Jeremy Reimer

In 1993, Jeff Bezos, a senior vice president at a hedge fund company, was tasked with investigating business opportunities on the Internet. He decided to create a proof of concept for what he described as an “everything store.” He chose books as an ideal commodity to sell online, since a book in one store was identical to one in another, and a website could offer access to obscure titles that might not get stocked in physical bookstores.

He left the hedge fund company, gathered investors and software development talent, and moved to Seattle. There, he started Amazon. At first, the site wasn’t much more than an online version of an existing bookseller catalog called Books In Print. But over time, Bezos added inventory data from the two major book distributors, Ingram and Baker & Taylor. The promise of access to every book in the world was exciting for people, and the company grew quickly.

Amazon.com as it would have appeared in 1995. Credit: Jeremy Reimer

The explosive growth of these startups fueled a self-perpetuating cycle. As publications like Wired experimented with online versions of their magazines, they invented and sold banner ads to fund their websites. The best customers for these ads were other web startups. These companies wanted more traffic, and they knew ads on sites like Yahoo were the best way to get it. Yahoo salespeople could then turn around and point to their exponential ad sales curves, which caused Yahoo stock to rise. This encouraged people to fund more web startups, which would all need to advertise on Yahoo. These new startups also needed to buy servers from companies like Sun Microsystems, causing those stocks to rise as well.

The crash

In the latter half of the 1990s, it looked like everything was going great. The economy was booming, thanks in part to the rise of the World Wide Web and the huge boost it gave to computer hardware and software companies. The NASDAQ index of tech-focused stocks painted a clear picture of the boom.

The NASDAQ composite index in the 1990s. Credit: Jeremy Reimer

Federal Reserve chairman Alan Greenspan called this phenomenon “irrational exuberance” but didn’t seem to be in a hurry to stop it. The fact that most new web startups didn’t have a realistic business model didn’t seem to bother investors. Sure, Webvan might have been paying more to deliver groceries than it earned from customers, but look at that growth curve!

The exuberance couldn’t last forever. The NASDAQ peaked at 8,843.87 in February 2000 and started to go down. In one month, it lost 34 percent of its value, and by August 2001, it was down to 3,253.38. Web companies laid off employees or went out of business completely. The party was over.

Andreessen said that the tech crash scarred him. “The overwhelming message to our generation in the early nineties was ‘You’re dirty, you’re all about grunge—you guys are fucking losers!’ Then the tech boom hit, and it was ‘We are going to do amazing things!’ And then the roof caved in, and the wisdom was that the Internet was a mirage. I 100 percent believed that because the rejection was so personal—both what everybody thought of me and what I thought of myself.”

But while some companies quietly celebrated the end of the whole Internet thing, others would rise from the ashes of the dot-com collapse. That’s the subject of our third and final article.


I’m a writer and web developer. I specialize in the obscure and beautiful, like the Amiga and newLISP.


Protesters summon, burn Waymo robotaxis in Los Angeles after ICE raids

The robotaxi company Waymo has suspended service in some parts of Los Angeles after some of its vehicles were summoned and then vandalized by protesters angry with ongoing raids by US Immigration and Customs Enforcement. Five of Waymo’s autonomous Jaguar I-Pace electric vehicles were summoned downtown to the site of anti-ICE protests, at which point they were vandalized with slashed tires and spray-painted messages. Three were set on fire.

The Los Angeles Police Department warned people to avoid the area due to risks from toxic gases given off by burning EVs. And Waymo told Ars that it is “in touch with law enforcement” regarding the matter.

The protesters in Los Angeles were outraged after ICE, using brutal tactics, began detaining people in raids across the city. Thousands of Angelenos took to the streets over the weekend to confront the masked federal enforcers and, in some cases, forced them away.

In response, the Trump administration mobilized more than 300 National Guard soldiers without consulting the California governor or receiving any request from the state to do so.

California Governor Gavin Newsom has promised to sue the administration. “Donald Trump has created the conditions you see on your TV tonight. He’s exacerbated the conditions. He’s, you know, lit the proverbial match. He’s putting fuel on this fire, ever since he announced he was taking over the National Guard—an illegal act, an immoral act, an unconstitutional act,” Newsom said in an interview.

Waymo began offering rides in Los Angeles last November, and by January, the company said it had driven almost 2 million miles in the city. But there is some animosity toward robotaxis and food delivery robots, which are now being used by the Los Angeles Police Department as sources of surveillance footage. In April, the LAPD published footage obtained from a Waymo that it used to investigate a hit-and-run.


Warner Bros. Discovery makes still more changes, will split streaming, TV business

Warner Bros. Discovery will split its business into two publicly traded companies, with one focused on its streaming and studios business and the other on its television network businesses, including CNN and Discovery.

The US media giant said the move would unlock value for shareholders as well as create opportunities for both businesses, breaking up a group created just three years ago from the merger of Warner Media and Discovery.

Warner Bros. Discovery signaled its intent to split its business in two last year, a plan first reported by the Financial Times in July 2024. The company intends to complete the split by the middle of next year.

The move comes on the heels of a similar move by rival Comcast, which last year announced plans to spin off its television networks, including CNBC and MSNBC, into a separate company.

US media giants are seeking to split their faster-growing streaming businesses from their legacy television networks, which are facing the prospect of long-term decline as viewers turn away from traditional television.

Warner Bros. Discovery shares were more than 10 percent higher pre-market.

David Zaslav, chief executive of Warner Bros. Discovery, will head the streaming and studios arm, while chief financial officer Gunnar Wiedenfels will serve as president and chief executive of global networks. Both will continue in their present roles until the separation.

Zaslav said on Monday the split would result in a “sharper focus” and enhanced “strategic flexibility” that would leave each company better placed to compete in “today’s evolving media landscape.”

Warner Bros. Discovery Chair Samuel A. Di Piazza Jr. said the move would “enhance shareholder value.”

The streaming and studios arm will consist of Warner Bros. Television, Warner Bros. Motion Picture Group, DC Studios, HBO, and HBO Max, as well as their film and television libraries.

Global networks will include entertainment, sports, and news television brands around the world, including CNN, TNT Sports in the US, and Discovery.

© 2025 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.


A long-shot plan to mine the Moon comes a little closer to reality

The road ahead

Meyerson said the company’s current plan is to fly a prospecting mission in 2027, with a payload of less than 100 kg, likely on a commercial lander that is part of NASA’s Commercial Lunar Payload Services program. Two years later, the company aims to fly a pilot plant. Meyerson said the size of this plant will depend on the launch capability available (i.e., they’ll go big if Starship is flying to the Moon, and smaller if not).

Following this, Interlune is targeting 2032 for the launch of a solar-powered operating plant, which would include five mobile harvesters. The operation would also be able to return material mined to Earth. The total mass for this equipment would be about 40 metric tons, which could fly on a single Starship or two New Glenn Mk 2 landers. This would, understandably, be highly ambitious and capital-intensive. After raising $15 million last year, Meyerson said Interlune is planning a second fundraising round that should begin soon.

There are some outside factors that may be beneficial for Interlune. One is that China has a clear and demonstrated interest in sending humans to the Moon and has already sent rovers to explore for helium-3 resources. Moreover, with the exit of Jared Isaacman as a nominee to lead NASA, the Trump administration is likely to put someone in the position who is more focused on lunar activities. One candidate, a retired Air Force General named Steve Kwast, is a huge proponent of mining helium-3.

Interlune has a compelling story in that there are almost no other lunar businesses focused on purely commercial activities, those that will derive value from mining the lunar surface. In that sense, they could be a lynchpin of a lunar economy. But they’ve got a long way to go, and a lot of lunar regolith to plow through, before they start delivering for customers.


New adventures await the crew in Strange New Worlds S3 trailer

Star Trek: Strange New Worlds returns for a third season next month.

Apart from a short teaser in April, we haven’t seen much of Star Trek: Strange New Worlds’ upcoming third season, debuting next month. But Paramount+ has finally released the official trailer.

(Spoilers for S2 below.)

As previously reported, the S2 finale found the Enterprise under vicious attack by the Gorn, who were in the midst of invading one of the Federation’s colony worlds. Several crew members were kidnapped, along with other survivors of the attack. Captain Pike (Anson Mount) faced a momentous decision: follow orders to retreat or disobey them to rescue his crew. Footage shown last October at New York City Comic-Con picked up where the finale left off, giving us the kind of harrowing high-stakes pitched space battle against a ferocious enemy that has long been a hallmark of the franchise. (Of course, Pike opted to rescue his crew.)

Per the official synopsis:

In Season 3, when we reconnect with the crew of the U.S.S. Enterprise, still under the command of Captain Pike, they face the conclusion of Season 2’s harrowing encounter with the Gorn. But new life and civilizations await, including a villain that will test our characters’ grit and resolve. An exciting twist on classic Star Trek, Season 3 takes characters both new and beloved to new heights, and dives into thrilling adventures of faith, duty, romance, comedy, and mystery, with varying genres never before seen on any other Star Trek.

In addition to the returning main and recurring cast members, Cillian O’Sullivan joins the recurring cast as Dr. Roger Korby, a legacy character (originally played by Michael Strong). Korby was a renowned figure in the field of medical archaeology and Nurse Chapel’s long-missing fiancé. His reappearance in S3 is bound to cause problems for SNW’s Nurse Chapel (Jess Bush), who is romantically involved with Spock (Ethan Peck). Rhys Darby and Patton Oswalt will also guest star.


What to expect from Apple’s Worldwide Developers Conference next week


i wwdc what you did there

We expect to see new designs, new branding, and more at Apple’s WWDC 2025.

Apple’s Worldwide Developers Conference kicks off on Monday with the company’s standard keynote presentation—a combination of PR about how great Apple and its existing products are and a first look at the next-generation versions of iOS, iPadOS, macOS, and the company’s other operating systems.

Reporting before the keynote rarely captures everything that Apple has planned for its presentations, but the reliable information we’ve seen so far suggests that Apple will keep the focus on its software this year rather than using the keynote to demo splashy new hardware like the Vision Pro and Apple Silicon Mac Pro, which the company introduced at WWDC a couple of years back.

If you haven’t been keeping track, here are a few of the things that are most likely to happen when the pre-recorded announcement videos start rolling next week.

Redesign time

Reliable reports from Bloomberg’s Mark Gurman have been saying for months that Apple’s operating systems are getting a design overhaul at WWDC.

The company apparently plans to use the design of the Vision Pro’s visionOS software as a jumping-off point for the new designs, introducing more transparency and UI elements that appear to be floating on the surface of your screen. Apple’s overarching goal, according to Gurman, is to “simplify the way users navigate and control their devices” by “updating the style of icons, menus, apps, windows and system buttons.”

Apple’s airy, floaty visionOS will apparently serve as the inspiration for its next-generation software design. Credit: Apple

Any good software redesign needs to walk a tightrope between freshening up an old look and solving old problems without changing people’s devices so much that they become unrecognizable and unfamiliar. The number of people who have complained to me about the iOS 18-era redesign of the Photos app suggests that Apple doesn’t always strike the right balance. But a new look can also generate excitement and encourage upgrades more readily than some of the low-profile or under-the-hood improvements that these updates normally focus on.

The redesigned UI should be released simultaneously for iOS, iPadOS, and macOS. The Mac last received a significant facelift back in 2020 with macOS 11 Big Sur, though this was overshadowed at the time by the much more significant shift from Intel’s chips to Apple Silicon. The current iOS and iPadOS design has its roots in 2013’s iOS 7, though with over a decade’s worth of gradual evolution on top.

An OS by any other name

With the new design will apparently come a new naming scheme, shifting from the current version numbers to new numbers based on the year. So we allegedly won’t be seeing iOS 19, macOS 16, watchOS 12, or visionOS 3—instead, we’ll get iOS 26, macOS 26, watchOS 26, and visionOS 26.

The new numbers might be a little confusing at first, especially for the period of overlap where Apple is actively supporting (say) macOS 14, macOS 15, and macOS 26. But in the long run, the consistency should make it easier to tell roughly how old your software is and will also make it easier to tell whether your device is running current software without having to remember the number for each of your individual devices.

It also unifies the approach to any new operating system variants Apple might announce—tvOS started at version 9 and iPadOS started at version 13, for example, because they were linked to the then-current iOS release. But visionOS and watchOS both started over from 1.0, and the current macOS numbering dates to the year Apple arbitrarily decided to end the 20-year-old “Mac OS X” branding and jump up to 11.

Note that those numbers will use the upcoming year rather than the current year—iOS 26 will be Apple’s latest and greatest OS for about three months in 2025, assuming the normal September-ish launch, but it will be the main OS for nine months in 2026. Apple usually also waits until later in the fall or winter to start forcing people onto the new OS, issuing at least a handful of security-only updates for the outgoing OS for people who don’t want to be guinea pigs for a possibly buggy new release.

Seriously, don’t get your hopes up about hardware

Apple showed off Vision Pro at WWDC in 2023, but we’re not expecting to see much hardware this year. Credit: Samuel Axon

Gurman has reported that Apple had “no major new devices ready to ship” this year.

Apple generally concentrates its hardware launches in the spring and fall, with quieter, lower-profile launches in the spring and bigger launches in the fall, anchored by the tentpole that is the iPhone. But WWDC has occasionally been a launching point for new Macs (because Macs are the only systems that run Xcode, Apple’s development environment) and even for brand-new platforms (because getting developers on board early is one way to increase a new platform’s chances of success). Still, the best available information suggests that neither of those things is happening this time around.

There are possibilities, though. Apple has apparently been at work behind the scenes on expanding its smart home footprint, and the eternally neglected Mac Pro is still using an M2 Ultra when an M3 Ultra already exists. But especially with a new redesign to play up, we’d expect Apple to keep the spotlight on its software this time around.

The fate of Intel Macs

It’s been five years since Apple started moving from Intel’s chips to its own custom silicon in Macs and two years since Apple sold its last Intel Macs. And since the very start of the transition, Apple has resisted providing a firm answer to the question of when Intel Macs will stop getting new macOS updates.

Our analysis of years of support data suggests two likely possibilities: that Apple releases one more new version of macOS for Intel Macs before shifting to a couple years of security-only updates or that Apple pulls the plug and shifts to security-only updates this year.

Rumors suggest that current betas still run on the last couple rounds of Intel Macs, dropping support for some older or slower models introduced between 2018 and 2020. If that’s true, there’s a pretty good chance it’s the last new macOS version to officially support Intel CPUs. Regardless, we’ll know more when the first betas drop after the keynote.

Even if the new version of macOS supports some Intel Macs, expect the list of features that require Apple Silicon to keep getting longer.

iPad multitasking? Again?

The perennial complaint about high-end iPads is that the hardware is a lot more capable than the software allows it to be. And every couple of years, Apple takes another crack at making the iPad a viable laptop replacement by improving the state of multitasking on the platform. This will allegedly be another one of those years.

We don’t know much about what form these multitasking improvements will take—whether they’re a further refinement of existing features like Stage Manager or something entirely new. The changes have been described as “more like macOS,” but that could mean pretty much anything.

Playing games

People play plenty of games on Apple’s devices, but they still aren’t really a “destination” for gaming in the same way that a dedicated console or Windows PC is. The company is apparently hoping to change that with a new unified app for games. Like Valve’s Steam, the app will reportedly serve as a storefront, launcher, and achievement tracker, and will also facilitate communication between friends playing the same game.

Apple took a similar stab at this idea in the early days of the iPhone with Game Center, which still exists as a service in the background on modern Apple devices but was discontinued as a standalone app quite a few years ago.

Apple has been trying for a few years now to make its operating systems more hospitable to gaming, especially in macOS. The company has added a low-latency Game Mode to macOS and comprehensive support for modern wireless gamepads from Microsoft, Sony, and Nintendo. The company’s Game Porting Toolkit stops short of being a consumer-friendly way to run Windows games on macOS, but it does give developers of Windows games an easier on-ramp for testing and porting their games to Apple’s platforms. We’ll see whether a unified app can help any of these other gaming features gel into something that feels cohesive.

Going home


Might we see a more prominent, marketable name for what Apple currently calls the “HomePod Software”? Credit: Jeff Dunn

One of Apple’s long-simmering behind-the-scenes hardware projects is apparently a new kind of smart home device that weds the HomePod’s current capabilities with a vaguely Apple TV-like touchscreen interface. In theory, this device would compete with the likes of Amazon’s Echo Show devices.

Part of those plans involve a “new” operating system to replace what is known to the public as “HomePod Software” (and internally as audioOS). This so-called “homeOS” has been rumored for a bit, and some circumstantial evidence points to some possible pre-WWDC trademark activity around that name. Like the current HomePod software—and just about every other OS Apple maintains—homeOS would likely be a specialized offshoot of iOS. But even if it doesn’t come with new hardware right away, new branding could suggest that Apple is getting ready to expand its smart home ambitions.

What about AI?

Finally, it wouldn’t be a mid-2020s tech keynote without some kind of pronouncements about AI. Last year’s WWDC was the big public unveiling of Apple Intelligence, and (nearly) every one of Apple’s product announcements since then has made a point of highlighting the hardware’s AI capabilities.

We’d definitely expect Apple to devote some time to Apple Intelligence, but the company may be more hesitant to announce big new features in advance, following a news cycle where even normally sympathetic Apple boosters like Daring Fireball’s John Gruber excoriated the company for promising AI features that it was nowhere near ready to launch—or even to demo to the public. The executives handling Apple’s AI efforts were reshuffled following that news cycle; whether it was due to Gruber’s piece or the underlying problems outlined in the article is anyone’s guess.

Apple will probably try to find a middle road, torn between not wanting to overpromise and underdeliver and not wanting to seem “behind” on the tech industry’s biggest craze. There’s a decent chance that the new “more personalized” version of Siri will finally make a public appearance. But I’d guess that Apple will focus more on iterations of existing Apple Intelligence features like summaries or Writing Tools rather than big swings.


Andrew is a Senior Technology Reporter at Ars Technica, with a focus on consumer tech including computer hardware and in-depth reviews of operating systems like Windows and macOS. Andrew lives in Philadelphia and co-hosts a weekly book podcast called Overdue.


The nine-armed octopus and the oddities of the cephalopod nervous system


A mix of autonomous and top-down control manages the octopus’s limbs.

With their quick-change camouflage and high level of intelligence, it’s not surprising that the public and scientific experts alike are fascinated by octopuses. Their abilities to recognize faces, solve puzzles, and learn behaviors from other octopuses make these animals a captivating study.

To perform these processes and others, like crawling or exploring, octopuses rely on their complex nervous system, one that has become a focus for neuroscientists. With about 500 million neurons—around the same number as dogs—octopuses’ nervous systems are the most complex of any invertebrate. But, unlike that of vertebrates, the octopus’s nervous system is also decentralized, with around 350 million neurons, or about two-thirds of the total, located in its eight arms.

“This means each arm is capable of independently processing sensory input, initiating movement, and even executing complex behaviors—without direct instructions from the brain,” explains Galit Pelled, a professor of Mechanical Engineering, Radiology, and Neuroscience at Michigan State University who studies octopus neuroscience. “In essence, the arms have their own ‘mini-brains.’”

A decentralized nervous system is one factor that helps octopuses adapt to changes, such as injury or predation, as seen in the case of an Octopus vulgaris, or common octopus, that was observed with nine arms by researchers at the ECOBAR lab at the Institute of Marine Research in Spain between 2021 and 2022.

By studying outliers like this cephalopod, researchers can gain insight into how the animal’s detailed scaffolding of nerves changes and regrows over time, uncovering more about how octopuses have evolved over millennia in our oceans.

Brains, brains, and more brains

Because each arm of an octopus contains its own bundle of neurons, the limbs can operate semi-independently from the central brain, enabling faster responses since signals don’t always need to travel back and forth between the brain and the arms. In fact, Pelled and her team recently discovered that “neural signals recorded in the octopus arm can predict movement type within 100 milliseconds of stimulation, without central brain involvement.” She notes that “that level of localized autonomy is unprecedented in vertebrate systems.”

Though each limb moves on its own, the movements of the octopus’s body are smooth and conducted with a coordinated elegance that allows the animal to exhibit an exceptionally broad range of behaviors, adapting on the fly to changes in its surroundings.

“That means the octopus can react quickly to its environment, especially when exploring, hunting, or defending itself,” Pelled says. “For example, one arm can grab food while another is feeling around a rock, without needing permission from the brain. This setup also makes the octopus more resilient. If one arm is injured, the others still work just fine. And because so much decision-making happens at the arms, the central brain is freed up to focus on the bigger picture—like navigating or learning new tasks.”

As if each limb weren’t already buzzing with neural activity, things get even more intricate when researchers zoom in further—to the nerves within each individual sucker, a ring of muscular tissue, which octopuses use to sense and taste their surroundings.

“There is a sucker ganglion, or nerve center, located in the stalk of every sucker. For some species of octopuses, that’s over a thousand ganglia,” says Cassady Olson, a graduate student at the University of Chicago who works with Cliff Ragsdale, a leading expert in octopus neuroscience.

Given that each sucker has its own nerve center—connected by a long axial nerve cord running down the limb—and each arm has hundreds of suckers, things get complicated very quickly. Researchers have historically struggled to study this peripheral nervous system, as it’s called, within the octopus’s body.

“The large size of the brain makes it both really exciting to study and really challenging,” says Z. Yan Wang, an assistant professor of biology and psychology at the University of Washington. “Many of the tools available for neuroscience have to be adjusted or customized specifically for octopuses and other cephalopods because of their unique body plans.”

While each limb acts independently, signals are transmitted back to the octopus’s central nervous system. The octopus’s brain sits between its eyes at the front of its mantle, or head, couched between its two optic lobes, large bean-shaped neural organs that help octopuses see the world around them. These optic lobes are just two of the over 30 lobes experts study within the animal’s centralized brain, as each lobe helps the octopus process its environment.

This elaborate neural architecture is critical given the octopus’s dual role in the ecosystem as both predator and prey. Without natural defenses like a hard shell, octopuses have evolved a highly adaptable nervous system that allows them to rapidly process information and adjust as needed, helping their chances of survival.

Some similarities remain

While the octopus’s decentralized nervous system makes it a unique evolutionary example, it does have some structures similar or analogous to those in the human nervous system.

“The octopus has a central brain mass located between its eyes, and an axial nerve cord running down each arm (similar to a spinal cord),” says Wang. “The octopus has many sensory systems that we are familiar with, such as vision, touch (somatosensation), chemosensation, and gravity sensing.”

Neuroscientists have homed in on these similarities to understand how these structures may have evolved across the different branches in the tree of life. As the most recent common ancestor for humans and octopuses lived around 750 million years ago, experts believe that many similarities, from similar camera-like eyes to maps of neural activities, evolved separately in a process known as convergent evolution.

While these similarities shed light on evolution’s independent paths, they also offer valuable insights for fields like soft robotics and regenerative medicine.

Occasionally, unique individuals—like an octopus with an unexpected number of limbs—can provide even deeper clues into how this remarkable nervous system functions and adapts.

Nine arms, no problem

In 2021, researchers from the Institute of Marine Research in Spain used an underwater camera to follow a male Octopus vulgaris, or common octopus. On its left side, three arms were intact, while the others were reduced to uneven, stumpy lengths, sharply bitten off at varying points. Although the researchers didn’t witness the injury itself, they observed that the front right arm—known as R1—was regenerating unusually, splitting into two separate limbs and giving the octopus a total of nine arms.

“In this individual, we believe this condition was a result of abnormal regeneration [a genetic mutation] after an encounter with a predator,” explains Sam Soule, one of the researchers and the first author on the corresponding paper recently published in Animals.

The researchers named the octopus Salvador due to its bifurcated arm coiling up on itself like the two upturned ends of Salvador Dalí’s moustache. For two years, the team studied the cephalopod’s behavior and found that it used its bifurcated arm less when doing “riskier” movements such as exploring or grabbing food, which would force the animal to stretch its arm out and expose it to further injury.

“One of the conclusions of our research is that the octopus likely retains a long-term memory of the original injury, as it tends to use the bifurcated arms for less risky tasks compared to the others,” elaborates Jorge Hernández Urcera, a lead author of the study. “This idea of lasting memory brought to mind Dalí’s famous painting The Persistence of Memory, which ultimately became the title of the paper we published on monitoring this particular octopus.”

While the octopus was more protective of its extra limb, its nervous system adapted to using the extra appendage: after some time recovering from its injuries, the octopus was observed using its ninth arm to probe its environment.

“That nine-armed octopus is a perfect example of just how adaptable these animals are,” Pelled adds. “Most animals would struggle with an unusual body part, but not the octopus. In this case, the octopus had a bifurcated (split) arm and still used it effectively, just like any other arm. That tells us the nervous system didn’t treat it as a mistake—it figured out how to make it work.”

Kenna Hughes-Castleberry is the science communicator at JILA (a joint physics research institute between the National Institute of Standards and Technology and the University of Colorado Boulder) and a freelance science journalist. Her main writing focuses are quantum physics, quantum technology, deep technology, social media, and the diversity of people in these fields, particularly women and people from minority ethnic and racial groups. Follow her on LinkedIn or visit her website.


Ted Cruz bill: States that regulate AI will be cut out of $42B broadband fund

BEAD changes: No fiber preference, no low-cost mandate

The BEAD program is separately undergoing an overhaul because Republicans don’t like how it was administered by Democrats. The Biden administration spent about three years developing rules and procedures for BEAD and then evaluating plans submitted by each US state and territory, but the Trump administration has delayed grants while it rewrites the rules.

While Biden’s Commerce Department decided to prioritize the building of fiber networks, Republicans have pushed for a “tech-neutral approach” that would benefit cable companies, fixed wireless providers, and Elon Musk’s Starlink satellite service.

Secretary of Commerce Howard Lutnick previewed changes in March, and today he announced more details of the overhaul that will eliminate the fiber preference and various requirements imposed on states. One notable but unsurprising change is that the Trump administration won’t let states require grant recipients to offer low-cost Internet plans at specific rates to people with low incomes.

The National Telecommunications and Information Administration (NTIA) “will refuse to accept any low-cost service option proposed in a [state or territory’s] Final Proposal that attempts to impose a specific rate level (i.e., dollar amount),” the Trump administration said. Instead, ISPs receiving subsidies will be able to continue offering “their existing, market driven low-cost plans to meet the statutory low-cost requirement.”

The Benton Institute for Broadband & Society criticized the overhaul, saying that the Trump administration is investing in the cheapest broadband infrastructure instead of the best. “Fiber-based broadband networks will last longer, provide better, more reliable service, and scale to meet communities’ ever-growing connectivity needs,” the advocacy group said. “NTIA’s new guidance is shortsighted and will undermine economic development in rural America for decades to come.”

The Trump administration’s overhaul drew praise from cable lobby group NCTA-The Internet & Television Association, whose members will find it easier to obtain subsidies. “We welcome changes to the BEAD program that will make the program more efficient and eliminate onerous requirements, which add unnecessary costs that impede broadband deployment efforts,” NCTA said. “These updates are welcome improvements that will make it easier for providers to build faster, especially in hard-to-reach communities, without being bogged down by red tape.”


Millions of low-cost Android devices turn home networks into crime platforms

Millions of low-cost devices for media streaming, in-vehicle entertainment, and video projection are infected with malware that turns consumer networks into platforms for distributing malware, concealing nefarious communications, and performing other illicit activities, the FBI has warned.

The malware infecting these devices, known as BadBox, is based on Triada, a malware strain discovered in 2016 by Kaspersky Lab, which called it “one of the most advanced mobile Trojans” the security firm’s analysts had ever encountered. It employed an impressive kit of tools, including rooting exploits that bypassed security protections built into Android and functions for modifying the Android OS’s all-powerful Zygote process. Google eventually updated Android to block the methods Triada used to infect devices.

The threat remains

A year later, Triada returned, only this time, devices came pre-infected before they reached consumers’ hands. In 2019, Google confirmed that the supply-chain attack affected thousands of devices and that the company had once again taken measures to thwart it.

In 2023, security firm Human Security reported on BadBox, a Triada-derived backdoor it found preinstalled on thousands of devices manufactured in China. The malware, which Human Security estimated was installed on 74,000 devices around the world, facilitated a range of illicit activities, including advertising fraud, residential proxy services, the creation of fake Gmail and WhatsApp accounts, and infecting other Internet-connected devices.

Millions of low-cost Android devices turn home networks into crime platforms Read More »

openai-is-retaining-all-chatgpt-logs-“indefinitely”-here’s-who’s-affected.

OpenAI is retaining all ChatGPT logs “indefinitely.” Here’s who’s affected.

In the copyright fight, Magistrate Judge Ona Wang granted the order within one day of the NYT's request. She agreed with the news plaintiffs that ChatGPT users, spooked by the lawsuit, might be setting their chats to delete when using the chatbot to skirt NYT paywalls. Because OpenAI wasn't sharing deleted chat logs, the news plaintiffs had no way of proving that, she suggested.

Now, OpenAI is not only asking Wang to reconsider but has “also appealed this order with the District Court Judge,” the Thursday statement said.

“We strongly believe this is an overreach by the New York Times,” Lightcap said. “We’re continuing to appeal this order so we can keep putting your trust and privacy first.”

Who can access deleted chats?

To address user concerns, OpenAI has published an FAQ explaining why their data is being retained and how it could be exposed.

For example, the statement noted that the order doesn’t impact OpenAI API business customers under Zero Data Retention agreements because their data is never stored.

And for users whose data is affected, OpenAI noted that their deleted chats could be accessed, but they won’t “automatically” be shared with The New York Times. Instead, the retained data will be “stored separately in a secure system” and “protected under legal hold, meaning it can’t be accessed or used for purposes other than meeting legal obligations,” OpenAI explained.

Of course, with the court battle ongoing, the FAQ did not have all the answers.

Nobody knows how long OpenAI may be required to retain the deleted chats. Likely seeking to reassure users—some of whom appeared to be considering switching to a rival service until the order is lifted—OpenAI noted that “only a small, audited OpenAI legal and security team would be able to access this data as necessary to comply with our legal obligations.”

OpenAI is retaining all ChatGPT logs “indefinitely.” Here’s who’s affected. Read More »

google’s-nightmare:-how-a-search-spinoff-could-remake-the-web

Google’s nightmare: How a search spinoff could remake the web


Google has shaped the Internet as we know it, and unleashing its index could change everything.

Google may be forced to license its search technology when the final antitrust ruling comes down. Credit: Aurich Lawson

Google wasn’t around for the advent of the World Wide Web, but it successfully remade the web on its own terms. Today, any website that wants to be findable has to play by Google’s rules, and after years of search dominance, the company has lost a major antitrust case that could reshape both it and the web.

The closing arguments in the case just wrapped up last week, and Google could be facing serious consequences when the ruling comes down in August. Losing Chrome would certainly change things for Google, but the Department of Justice is pursuing other remedies that could have even more lasting impacts. During his testimony, Google CEO Sundar Pichai seemed genuinely alarmed at the prospect of being forced to license Google’s search index and algorithm, the so-called data remedies in the case. He claimed this would be no better than a spinoff of Google Search. The company’s statements have sometimes derisively referred to this process as “white labeling” Google Search.

But does a white label Google Search sound so bad? Google has built an unrivaled index of the web, but the way it shows results has become increasingly frustrating. A handful of smaller players in search have tried to offer alternatives to Google’s search tools. They all have different approaches to retrieving information for you, but they agree that spinning off Google Search could change the web again. Whether or not those changes are positive depends on who you ask.

The Internet is big and noisy

As Google’s search results have changed over the years, more people have been open to other options. Some have simply moved to AI chatbots to answer their questions, hallucinations be damned. But for most people, it’s still about the 10 blue links (for now).

Because of the scale of the Internet, there are only three general web search indexes: Google, Bing, and Brave. Every search product (including AI tools) relies on one or more of these indexes to probe the web. But what does that mean?

“Generally, a search index is a service that, when given a query, is able to find relevant documents published on the Internet,” said Brave’s search head Josep Pujol.

A search index is essentially a big database, and that’s not the same as search results. According to JP Schmetz, Brave’s chief of ads, it’s entirely possible to have the best and most complete search index in the world and still show poor results for a given query. Sound like anyone you know?
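To make that concrete, here is a minimal, illustrative sketch of an inverted index, the core data structure behind any search index. The documents and matching logic are toy examples for illustration only, not how Google, Bing, or Brave actually build theirs:

```python
# A minimal sketch of the idea behind a search index: an inverted index that
# maps each term to the documents containing it. Real indexes add crawling,
# ranking signals, and massive distributed storage on top of this idea.
from collections import defaultdict

documents = {
    1: "brave search has its own independent index of the web",
    2: "kagi pulls results from multiple indexes including bing and brave",
    3: "an index is the essential raw material for a general search engine",
}

# Build the index: term -> set of document IDs containing that term.
inverted_index = defaultdict(set)
for doc_id, text in documents.items():
    for term in text.lower().split():
        inverted_index[term].add(doc_id)

def search(query):
    """Return IDs of documents containing every term in the query."""
    terms = query.lower().split()
    if not terms:
        return set()
    result = inverted_index.get(terms[0], set()).copy()
    for term in terms[1:]:
        result &= inverted_index.get(term, set())
    return result

print(search("brave index"))  # {1}
```

As the Brave team notes, having this lookup structure is only half the problem; deciding which of the matching documents to show first is where search quality is won or lost.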

Google’s technological lead has allowed it to crawl more websites than anyone else. It has all the important parts of the web, plus niche sites, abandoned blogs, sketchy copies of legitimate websites, copies of those copies, and AI-rephrased copies of the copied copies—basically everything. And the result of this Herculean digital inventory is a search experience that feels increasingly discombobulated.

“Google is running large-scale experiments in ways that no rival can because we’re effectively blinded,” said Kamyl Bazbaz, head of public affairs at DuckDuckGo, which uses the Bing index. “Google’s scale advantage fuels a powerful feedback loop of different network effects that ensure a perpetual scale and quality deficit for rivals that locks in Google’s advantage.”

The size of the index may not be the only factor that matters, though. Brave, which is perhaps best known for its browser, also has a search engine. Brave Search is the default in its browser, but you can also just go to the URL in your current browser. Unlike most other search engines, Brave doesn’t need to go to anyone else for results. Pujol suggested that Brave doesn’t need the scale of Google’s index to find what you need. And admittedly, Brave’s search results don’t feel meaningfully worse than Google’s—they may even be better when you consider the way that Google tries to keep you from clicking.

Brave’s index spans around 25 billion pages, but it leaves plenty of the web uncrawled. “We could be indexing five to 10 times more pages, but we choose not to because not all the web has signal. Most web pages are basically noise,” said Pujol.

The freemium search engine Kagi isn’t worried about having the most comprehensive index. Kagi is a meta search engine. It pulls in data from multiple indexes, like Bing and Brave, but it has a custom index of what founder and CEO Vladimir Prelovac calls the “non-commercial web.”

When you search with Kagi, some of the results (it tells you the proportion) come from its custom index of personal blogs, hobbyist sites, and other content that is poorly represented on other search engines. It’s reminiscent of the days when huge brands weren’t always clustered at the top of Google—but even these results are being pushed out of reach in favor of AI, ads, Knowledge Graph content, and other Google widgets. That’s a big part of why Kagi exists, according to Prelovac.
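Conceptually, a meta search engine fans a query out to several backends and merges the ranked lists into one. The sketch below is a rough illustration with made-up backend stubs and a simple round-robin merge, not Kagi's actual implementation:

```python
# A rough sketch of meta search: query several index backends, then interleave
# their ranked results while de-duplicating by URL. Backends and weights here
# are hypothetical stand-ins, not any real search engine's API.
def merge_results(query, backends, limit=10):
    """Interleave ranked results from each backend, dropping duplicate URLs."""
    columns = [backend(query) for backend in backends]  # one ranked list per backend
    seen, merged = set(), []
    for rank in range(max((len(c) for c in columns), default=0)):
        for column in columns:
            if rank < len(column) and column[rank]["url"] not in seen:
                seen.add(column[rank]["url"])
                merged.append(column[rank])
    return merged[:limit]

# Example usage with stub backends standing in for real index APIs.
bing_stub = lambda q: [{"url": "https://example.com/a", "title": "A"}]
brave_stub = lambda q: [{"url": "https://example.com/b", "title": "B"},
                        {"url": "https://example.com/a", "title": "A"}]
print(merge_results("test query", [bing_stub, brave_stub]))
```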

A Google spinoff could change everything

We’ve all noticed the changes in Google’s approach to search, and most would agree that they have made finding reliable and accurate information harder. Regardless, Google’s incredibly deep and broad index of the Internet is in demand.

Even with Bing and Brave available, companies are going to extremes to syndicate Google Search results. A cottage industry has emerged to scrape Google searches as a stand-in for an official index. These companies are violating Google’s terms, yet they appear in Google Search results themselves. Google could surely do something about this if it wanted to.

The DOJ calls Google’s mountain of data the “essential raw material” for building a general search engine, and it believes forcing the firm to license that material is key to breaking its monopoly. The sketchy syndication firms will evaporate if the DOJ’s data remedies are implemented, which would give competitors an official way to utilize Google’s index. And utilize it they will.

Google CEO Sundar Pichai decried the court’s efforts to force a “de facto divestiture” of Google’s search tech. Credit: Ryan Whitwam

According to Prelovac, this could lead to an explosion in search choices. “The whole purpose of the Sherman Act is to proliferate a healthy, competitive marketplace. Once you have access to a search index, then you can have thousands of search startups,” said Prelovac.

The Kagi founder suggested that licensing Google Search could allow entities of all sizes to have genuinely useful custom search tools. Cities could use the data to create deep, hyper-local search, and people who love cats could make a cat-specific search engine, in both cases pulling what they want from the most complete database of online content. And, of course, general search products like Kagi would be able to license Google’s tech for a “nominal fee,” as the DOJ puts it.

Prelovac didn’t hesitate when asked if Kagi, which offers a limited number of free searches before asking users to subscribe, would integrate Google’s index. “Yes, that is something we would do,” he said. “And that’s what I believe should happen.”

There may be some drawbacks to unleashing Google’s search services. Judge Amit Mehta has expressed concern that blocking Google’s search placement deals could reduce browser choice, and there is a similar issue with the data remedies. If Google is forced to license search as an API, its few competitors in web indexing could struggle to remain afloat. In a roundabout way, giving away Google’s search tech could actually increase its influence.

The Brave team worries about how open access to Google’s search technology could impact diversity on the web. “If implemented naively, it’s a big problem,” said Brave’s ad chief JP Schmetz. “If the court forces Google to provide search at a marginal cost, it will not be possible for Bing or Brave to survive until the remedy ends.”

The landscape of AI-based search could also change. We know from testimony given during the remedy trial by OpenAI’s Nick Turley that the ChatGPT maker tried and failed to get access to Google Search to ground its AI models—it currently uses Bing. If Google were suddenly an option, you can be sure OpenAI and others would rush to connect Google’s web data to their large language models (LLMs).

The attempt to reduce Google’s power could actually grant it new monopolies in AI, according to Brave Chief Business Officer Brian Brown. “All of a sudden, you would have a single monolithic voice of truth across all the LLMs, across all the web,” Brown said.

What if you weren’t the product?

If white labeling Google does expand choice, even at the expense of other indexes, it will give more kinds of search products a chance in the market—maybe even some that shun Google’s focus on advertising. You don’t see much of that right now.

For most people, web search is and always has been a free service supported by ads. Google, Brave, DuckDuckGo, and Bing offer all the search queries you want for free because they want eyeballs. It’s been said often, but it’s true: If you’re not paying for it, you’re the product. This is an arrangement that bothers Kagi’s founder.

“For something as important as information consumption, there should not be an intermediary between me and the information, especially one that is trying to sell me something,” said Prelovac.

Kagi search results acknowledge the negative impact of today’s advertising regime. Kagi users see a warning next to results with a high number of ads and trackers. According to Prelovac, that is by far the strongest indication that a result is of low quality. That icon also lets you adjust the prevalence of such sites in your personal results. You can demote a site or completely hide it, which is a valuable option in the age of clickbait.
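Mechanically, per-site controls like these boil down to rescoring a ranked list. The sketch below uses invented domain names and weights purely for illustration; it is not Kagi's real scoring:

```python
# A toy sketch of per-site ranking preferences (boost, demote, hide).
# Domains, weights, and field names are made up for illustration only.
SITE_PREFERENCES = {
    "clickbait.example": "hide",
    "ads-heavy.example": "demote",
    "favorite-blog.example": "boost",
}

ADJUSTMENTS = {"boost": 2.0, "demote": 0.5, "hide": 0.0}

def apply_preferences(results):
    """Rescore results by user preference and drop hidden domains."""
    rescored = []
    for result in results:
        factor = ADJUSTMENTS.get(SITE_PREFERENCES.get(result["domain"]), 1.0)
        if factor == 0.0:
            continue  # hidden domains are removed from results entirely
        rescored.append({**result, "score": result["score"] * factor})
    return sorted(rescored, key=lambda r: r["score"], reverse=True)
```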

Kagi search gives you a lot of control. Credit: Ryan Whitwam

Kagi’s paid approach to search changes its relationship with your data. “We literally don’t need user data,” Prelovac said. “But it’s not only that we don’t need it. It’s a liability.”

Prelovac admitted that getting people to pay for search is “really hard.” Nevertheless, he believes ad-supported search is a dead end. So Kagi is planning for a future in five or 10 years when more people have realized they’re still “paying” for ad-based search with lost productivity time and personal data, he said.

We know how Google handles user data (it collects a lot of it), but what does that mean for smaller search engines like Brave and DuckDuckGo that rely on ads?

“I’m sure they mean well,” said Prelovac.

Brave said that it shields user data from advertisers, relying on first-party tracking to attribute clicks to Brave without touching the user. “They cannot retarget people later; none of that is happening,” said Brave’s JP Schmetz.

DuckDuckGo is a bit of an odd duck—it relies on Bing’s general search index, but it adds a layer of privacy tools on top. It’s free and ad-supported like Google and Brave, but the company says it takes user privacy seriously.

“Viewing ads is privacy protected by DuckDuckGo, and most ad clicks are managed by Microsoft’s ad network,” DuckDuckGo’s Kamyl Bazbaz said. He explained that DuckDuckGo has worked with Microsoft to ensure its network does not track users or create any profiles based on clicks. He added that the company has a similar privacy arrangement with TripAdvisor for travel-related ads.

It’s AI all the way down

We can’t talk about the future of search without acknowledging the artificially intelligent elephant in the room. As Google continues its shift to AI-based search, it’s tempting to think of the potential search spin-off as a way to escape that trend. However, you may find few refuges in the coming years. There’s a real possibility that search is evolving beyond the 10 blue links and toward an AI assistant model.

All non-Google search engines have AI integrations, with the most prominent being Microsoft Bing, which has a partnership with OpenAI. But smaller players have AI search features, too. The folks working on these products agree with Microsoft and Google on one important point: They see AI as inevitable.

Today’s Google alternatives all have their own take on AI Overviews, which generates responses to queries based on search results. They’re generally not as in-your-face as Google AI, though. While Google and Microsoft are intensely focused on increasing the usage of AI search, other search operators aren’t pushing for that future. They are along for the ride, though.
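At a high level, these answer features follow the same retrieve-then-summarize pattern: fetch ranked results, then hand them to a language model with instructions to cite its sources. The sketch below is a generic illustration in which search_index() and generate() are placeholder functions, not any vendor's actual pipeline:

```python
# A generic retrieve-then-summarize sketch of an "AI Overview"-style answer.
# search_index() and generate() are hypothetical placeholders, not a real API.
def answer_with_citations(query, search_index, generate, top_k=5):
    """Fetch top search results, then ask a model to summarize them with [n] citations."""
    results = search_index(query)[:top_k]
    context = "\n".join(
        f"[{i + 1}] {r['title']}: {r['snippet']}" for i, r in enumerate(results)
    )
    prompt = (
        f"Answer the question using only the sources below, citing them as [n].\n"
        f"Question: {query}\nSources:\n{context}"
    )
    return generate(prompt), [r["url"] for r in results]
```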

AI Overviews are integrated with Google’s search results, and most other players have their own version. Credit: Google

“We’re finding that some people prefer to start in chat mode and then jump into more traditional search results when needed, while others prefer the opposite,” Bazbaz said. “So we thought the best thing to do was offer both. We made it easy to move between them, and we included an off switch for those who’d like to avoid AI altogether.”

The team at Brave views AI as a core means of accessing search and one that will continue to grow. Brave generates AI answers for many searches and prominently cites sources. You can also disable Brave’s AI if you prefer. But according to search chief Josep Pujol, the move to AI search is inevitable for a pretty simple reason: It’s convenient, and people will always choose convenience. So AI is changing the web as we know it, for better or worse, because LLMs can save a smidge of time, especially for more detailed “long-tail” queries. These AI features may give you false information while they do it, but that’s not always apparent.

This is very similar to the language Google uses when discussing agentic search, although it expresses it in a more nuanced way. By understanding the task behind a query, Google hopes to provide AI answers that save people time, even if the model needs a few ticks to fan out and run multiple searches to generate a more comprehensive report on a topic. That’s probably still faster than running multiple searches and manually reviewing the results, and it could leave traditional search as an increasingly niche service, even in a world with more choices.

“Will the 10 blue links continue to exist in 10 years?” Pujol asked. “Actually, one question would be, does it even exist now? In 10 years, [search] will have evolved into more of an AI conversation behavior or even agentic. That is probably the case. What, for sure, will continue to exist is the need to search. Search is a verb, an action that you do, and whether you will do it directly or whether it will be done through an agent, it’s a search engine.”

Kagi’s Prelovac sees AI becoming the default way we access information in the long term, but his search engine doesn’t force you to use it. On Kagi, you can expand the AI box for your searches and ask follow-ups, and the AI will open automatically if you use a question mark in your search. But that’s just the start.

“You watch Star Trek, nobody’s clicking on links there—I do believe in that vision in science fiction movies,” Prelovac said. “I don’t think my daughter will be clicking links in 10 years. The only question is if the current technology will be the one that gets us there. LLMs have inherent flaws. I would even tend to say it’s likely not going to get us to Star Trek.”

If we think of AI mainly as a way to search for information, the future becomes murky. With generative AI in the driver’s seat, questions of authority and accuracy may be left to language models that often behave in unpredictable and difficult-to-understand ways. Whether we’re headed for an AI boom or bust—for continued Google dominance or a new era of choice—we’re facing fundamental changes to how we access information.

Maybe if we get those thousands of search startups, there will be a few that specialize in 10 blue links. We can only hope.


Ryan Whitwam is a senior technology reporter at Ars Technica, covering the ways Google, AI, and mobile technology continue to change the world. Over his 20-year career, he’s written for Android Police, ExtremeTech, Wirecutter, NY Times, and more. He has reviewed more phones than most people will ever own. You can follow him on Bluesky, where you will see photos of his dozens of mechanical keyboards.

Google’s nightmare: How a search spinoff could remake the web Read More »