

A history of the Internet, part 2: The high-tech gold rush begins


The Web Era arrives, the browser wars flare, and a bubble bursts.

Welcome to the second article in our three-part series on the history of the Internet. If you haven’t already, read part one here.

As a refresher, here’s the story so far:

The ARPANET was a project started by the Defense Department’s Advanced Research Projects Agency in 1969 to network different mainframe computers together across the country. Later, it evolved into the Internet, connecting multiple global networks together using the common TCP/IP protocol suite.

By the late 1980s, investments from the National Science Foundation (NSF) had established an “Internet backbone” supporting hundreds of thousands of users worldwide. These users were mostly professors, researchers, and graduate students.

In the meantime, commercial online services like CompuServe were growing rapidly. These systems connected personal computer users, using dial-up modems, to a mainframe running proprietary software. Once online, people could read news articles and message other users. In 1989, CompuServe added the ability to send email to anyone on the Internet.

In 1965, Ted Nelson submitted a paper to the Association for Computing Machinery. He wrote: “Let me introduce the word ‘hypertext’ to mean a body of written or pictorial material interconnected in such a complex way that it could not conveniently be presented or represented on paper.” The paper was part of a grand vision he called Xanadu, after the poem “Kubla Khan” by Samuel Taylor Coleridge.

A decade later, in his book “Computer Lib/Dream Machines,” he described Xanadu this way: “To give you a screen in your home from which you can see into the world’s hypertext libraries.” He admitted that the world didn’t have any hypertext libraries yet, but that wasn’t the point. One day, maybe soon, it would. And he was going to dedicate his life to making it happen.

As the Internet grew, it became more and more difficult to find things on it. There were lots of cool documents like the Hitchhiker’s Guide To The Internet, but to read them, you first had to know where they were.

The community of helpful programmers on the Internet leapt to the challenge. Alan Emtage at McGill University in Montreal wrote a tool called Archie. It searched a list of public file transfer protocol (FTP) servers. You still had to know the name of the file you were looking for, but Archie could tell you which server held it so you could download it.

An improved tool was Gopher, written by a team headed by Mark McCahill at the University of Minnesota. It used a text-based menu system so that users didn’t have to remember file names or locations. Gopher servers could display a customized collection of links inside nested menus, and they integrated with other services like Archie and Veronica to help users search for more resources.

Gopher is a text-based Internet search and retrieval system. It’s still running in 2025! Credit: Jeremy Reimer

A Gopher server could provide many of the things we take for granted today: search engines, personal pages that could contain links, and downloadable files. But this wasn’t enough for a British computer scientist who was working at CERN, an intergovernmental institute that operated the world’s largest particle physics lab.

The World Wide Web

Hypertext had come a long way since Ted Nelson had coined the word in 1965. Bill Atkinson, a member of the original Macintosh development team, released HyperCard in 1987. It used the Mac’s graphical interface to let anyone develop “stacks,” collections of text, graphics, and sounds that could be connected together with clickable links. There was no networking, but stacks could be shared with other users by sending the files on a floppy disk.

The home screen of HyperCard 1.0 for Macintosh. Credit: Jeremy Reimer

Hypertext was so big that conferences were held just to discuss it in 1987 and 1988. Even Ted Nelson had finally found a sponsor for his personal dream: Autodesk founder John Walker had agreed to spin up a subsidiary to create a commercial version of Xanadu.

It was in this environment that CERN fellow Tim Berners-Lee drew up his own proposal in March 1989 for a new hypertext environment. His goal was to make it easier for researchers at CERN to collaborate and share information about new projects.

The proposal (which he called “Mesh”) had several objectives. It would provide a system for connecting information about people, projects, documents, and hardware being developed at CERN. It would be decentralized and distributed over many computers. Not all the computers at CERN were the same—there were Digital Equipment minis running VMS, some Macintoshes, and an increasing number of Unix workstations. Each of them should be able to view the information in the same way.

As Berners-Lee described it, “There are few products which take Ted Nelson’s idea of a wide ‘docuverse’ literally by allowing links between nodes in different databases. In order to do this, some standardization would be necessary.”

The original proposal document for the web, written in Microsoft Word for Macintosh 4.0, downloaded from Tim Berners-Lee’s website. Credit: Jeremy Reimer

The document ended by describing the project as “practical” and estimating that it might take two people six to 12 months to complete. Berners-Lee’s manager called it “vague, but exciting.” Robert Cailliau, who had independently proposed a hypertext system for CERN, joined Berners-Lee to start designing the project.

The computer Berners-Lee used was a NeXT cube, from the company Steve Jobs started after he was kicked out of Apple. NeXT workstations were expensive, but they came with a software development environment that was years ahead of its time. If you could afford one, it was like a coding accelerator. John Carmack would later write DOOM on a NeXT.

The NeXT workstation that Tim Berners-Lee used to create the World Wide Web. Please do not power down the World Wide Web. Credit: Coolcaesar (CC BY-SA 3.0)

Berners-Lee called his application “WorldWideWeb.” The software consisted of a server, which delivered pages of text over a new protocol called the “Hypertext Transfer Protocol,” or HTTP, and a browser that rendered the text. The browser translated markup code like “h1” to indicate a larger header font or “a” to indicate a link. There was also a graphical webpage editor, but it didn’t work very well and was abandoned.
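
The mechanics of that exchange have barely changed since. Here is a minimal Python sketch of the request/response pattern HTTP established (an illustration, not Berners-Lee’s code; “example.com” stands in for any host, and the original 1990 server actually spoke an even simpler dialect, HTTP/0.9, with no headers at all):

```python
import socket

HOST = "example.com"  # stand-in host; any web server answers the same way

with socket.create_connection((HOST, 80)) as sock:
    # An HTTP request is ordinary text terminated by a blank line.
    sock.sendall(f"GET / HTTP/1.0\r\nHost: {HOST}\r\n\r\n".encode("ascii"))

    # Read until the server closes the connection.
    chunks = []
    while data := sock.recv(4096):
        chunks.append(data)

# The body that comes back is marked-up text ("h1" for a header,
# "a" for a link), just as the first browser rendered it.
print(b"".join(chunks).decode("utf-8", errors="replace"))
```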

The very first website was published, running on the development NeXT cube, on December 20, 1990. Anyone who had a NeXT machine and access to the Internet could view the site in all its glory.

The original WorldWideWeb browser running on NeXTstep 3, browsing the world’s first webpage. Credit: Jeremy Reimer

Because NeXT only sold 50,000 computers in total, that intersection did not represent a lot of people. Eight months later, Berners-Lee posted a reply to a question about interesting projects on the alt.hypertext Usenet newsgroup. He described the World Wide Web project and included links to all the software and documentation.

That one post changed the world forever.

Mosaic

On December 9, 1991, President George H.W. Bush signed into law the High Performance Computing Act, also known as the Gore Bill. The bill paid for an upgrade of the NSFNET backbone, as well as a separate funding initiative for the National Center for Supercomputing Applications (NCSA).

NCSA, based out of the University of Illinois, became a dream location for computing research. “NCSA was heaven,” recalled Alex Totic, who was a student there. “They had all the toys, from Thinking Machines to Crays to Macs to beautiful networks. It was awesome.” As is often the case in academia, the professors came up with research ideas but assigned most of the actual work to their grad students.

One of those students was Marc Andreessen, who joined NCSA as a part-time programmer for $6.85 an hour. Andreessen was fascinated by the World Wide Web, especially browsers. A new browser for Unix computers, ViolaWWW, was making the rounds at NCSA. No longer confined to the NeXT workstation, the web had caught the attention of the Unix community. But that community was still too small for Andreessen.

“To use the Net, you had to understand Unix,” he said in an interview with Forbes. “And the current users had no interest in making it easier. In fact, there was a definite element of not wanting to make it easier, of actually wanting to keep the riffraff out.”

Andreessen enlisted the help of his colleague, programmer Eric Bina, and started developing a new web browser in December 1992. In a little over a month, they released version 0.5 of “NCSA X Mosaic”—so called because it was designed to work with Unix’s X Window System. Ports for the Macintosh and Windows followed shortly thereafter.

Being available on the most popular graphical computers changed the trajectory of the web. In just 18 months, millions of copies of Mosaic were downloaded, and the rate was accelerating. The riffraff was here to stay.

Netscape

The instant popularity of Mosaic caused the management at NCSA to take a deeper interest in the project. Jon Mittelhauser, who co-wrote the Windows version, recalled that the small team “suddenly found ourselves in meetings with forty people planning our next features, as opposed to the five of us making plans at 2 am over pizzas and Cokes.”

Andreessen was told to step aside and let more experienced managers take over. Instead, he left NCSA and moved to California, looking for his next opportunity. “I thought I had missed the whole thing,” Andreessen said. “The overwhelming mood in the Valley when I arrived was that the PC was done, and by the way, the Valley was probably done because there was nothing else to do.”

But his reputation had preceded him. Jim Clark, the founder of Silicon Graphics, was also looking to start something new. A friend had shown him a demo of Mosaic, and Clark reached out to meet with Andreessen.

At a meeting, Andreessen pitched the idea of building a “Mosaic killer.” He showed Clark a graph that showed web users doubling every five months. Excited by the possibilities, the two men founded Mosaic Communications Corporation on April 4, 1994. Andreessen quickly recruited programmers from his former team, and they got to work. They codenamed their new browser “Mozilla” since it was going to be a monster that would devour Mosaic. Beta versions were titled “Mosaic Netscape,” but the University of Illinois threatened to sue the new company. To avoid litigation, the name of the company and browser were changed to Netscape, and the programmers audited their code to ensure none of it had been copied from NCSA.

Netscape became the model for all Internet startups to follow. Programmers were given unlimited free sodas and encouraged to basically never leave the office. “Netscape Time” accelerated software development schedules, and because updates could be delivered over the Internet, old principles of quality assurance went out the window. And the business model? It was simply to “get big fast,” and profits could be figured out later.

Work proceeded quickly, and the 1.0 version of Netscape Navigator and the Netsite web server were released on December 15, 1994, for Windows, Macintosh, and Unix systems running the X Window System. The browser was priced at $39 for commercial users, but there was no charge for “academic and non-profit use, as well as for free evaluation purposes.”

Version 0.9 was called “Mosaic Netscape,” and the logo and company were still Mosaic. Credit: Jeremy Reimer

Netscape quickly became the standard. Within six months, it captured over 70 percent of the market share for web browsers. On August 9, 1995, only 16 months after the founding of the company, Netscape filed for an Initial Public Offering. A last-minute decision doubled the offering price to $28 per share, and on the first day of trading, the stock soared to $75 and closed at $58.25. The Web Era had officially arrived.

The web battles proprietary solutions

The excitement over a new way to transmit text and images to the public over phone lines wasn’t confined to the World Wide Web. Commercial online systems like CompuServe were also evolving to meet the graphical age. These companies released attractive new front-ends for their services that ran on DOS, Windows, and Macintosh computers. There were also new services that were graphics-only, like Prodigy, a joint venture between IBM and Sears, and an upstart that had sprung from the ashes of a Commodore 64 service called Quantum Link. This was America Online, or AOL.

Even Microsoft was getting into the act. Bill Gates believed that the “Information Superhighway” was the future of computing, and he wanted to make sure that all roads went through his company’s toll booth. The highly anticipated Windows 95 was scheduled to ship with a bundled dial-up online service called the Microsoft Network, or MSN.

At first, it wasn’t clear which of these online services would emerge as the winner. But people assumed that at least one of them would beat the complicated, nerdy Internet. CompuServe was the oldest, but AOL was nimbler and found success by sending out millions of free “starter” disks (and later, CDs) to potential customers. Microsoft was sure that bundling MSN with the upcoming Windows 95 would ensure victory.

Most of these services decided to hedge their bets by adding a sort of “side access” to the World Wide Web. After all, if they didn’t, their competitors would. At the same time, smaller companies (many of them former bulletin board services) started becoming Internet service providers. These smaller “ISPs” could charge less money than the big services because they didn’t have to create any content themselves. Thousands of new websites were appearing on the Internet every day, much faster than new sections could be added to AOL or CompuServe.

The tipping point happened very quickly. Before Windows 95 had even shipped, Bill Gates wrote his famous “Internet Tidal Wave” memo, in which he assigned the Internet the “highest level of importance.” MSN was quickly changed to become more of a standard ISP and moved all of its content to the web. Microsoft rushed to release its own web browser, Internet Explorer, and bundled it with the Windows 95 Plus! pack.

The hype and momentum were entirely with the web now. It was the most exciting, most transformative technology of its time. The decade-long battle to control the Internet by forcing a shift to a new OSI standards model was forgotten. The web was all anyone cared about, and the web ran on TCP/IP.

The browser wars

Netscape had never expected to make a lot of money from its browser, as it was assumed that most people would continue to download new “evaluation” versions for free. Executives were pleasantly surprised when businesses started sending Netscape huge checks. The company went from $17 million in revenue in 1995 to $346 million the following year, and the press started calling Marc Andreessen “the new Bill Gates.”

The old Bill Gates wasn’t having any of that. Following his 1995 memo, Microsoft worked hard to improve Internet Explorer and made it available for free, including to business users. Netscape tried to fight back. It added groundbreaking new features like JavaScript, which was inspired by LISP but given a syntax similar to Java, the hot new programming language from Sun Microsystems. But it was hard to compete with free, and Netscape’s market share started to fall. By 1996, both browsers had reached version 3.0 and were roughly equal in terms of features. The battle continued, but when the Apache Group released its free web server, Netscape’s other source of revenue dried up as well. The writing was on the wall.

There was no better way to declare your allegiance to a web browser in 1996 than adding “Best Viewed In” above one of these icons. Credit: Jeremy Reimer

The dot-com boom

In 1989, the NSF lifted the restrictions on providing commercial access to the Internet, and by 1991, it had removed all barriers to commercial trade on the network. With the sudden ascent of the web, thanks to Mosaic, Netscape, and Internet Explorer, new companies jumped into this high-tech gold rush. But at first, it wasn’t clear what the best business strategy was. Users expected everything on the web to be free, so how could you make money?

Many early web companies started as hobby projects. In 1994, Jerry Yang and David Filo were electrical engineering PhD students at Stanford University. After Mosaic started popping off, they began collecting and trading links to new websites. Thus, “Jerry’s Guide to the World Wide Web” was born, running on Yang’s Sun workstation. Renamed Yahoo! (Yet Another Hierarchical Officious Oracle), the site exploded in popularity. Netscape put multiple links to Yahoo on its main navigation bar, which further accelerated growth. “We weren’t really sure if you could make a business out of it, though,” Yang told Fortune. Nevertheless, venture capital companies came calling. Sequoia, which had made millions investing in Apple, put in $1 million for 25 percent of Yahoo.

Yahoo.com as it would have appeared in 1995. Credit: Jeremy Reimer

Another hobby site, AuctionWeb, was started in 1995 by Pierre Omidyar. Running on his own home server using the regular $30 per month service from his ISP, the site let people buy and sell items of almost any kind. When traffic started growing, his ISP told him it was increasing his Internet fees to $250 per month, as befitting a commercial enterprise. Omidyar decided he would try to make it a real business, even though he didn’t have a merchant account for credit cards or even a way to enforce the site’s new royalty charges of 5 or 2.5 percent, depending on the sale price. That didn’t matter, as the checks started rolling in. He found a business partner, changed the name to eBay, and the rest was history.

AuctionWeb (later eBay) as it would have appeared in 1995. Credit: Jeremy Reimer

In 1993, Jeff Bezos, a senior vice president at a hedge fund company, was tasked with investigating business opportunities on the Internet. He decided to create a proof of concept for what he described as an “everything store.” He chose books as an ideal commodity to sell online, since a book in one store was identical to one in another, and a website could offer access to obscure titles that might not get stocked in physical bookstores.

He left the hedge fund company, gathered investors and software development talent, and moved to Seattle. There, he started Amazon. At first, the site wasn’t much more than an online version of an existing bookseller catalog called Books In Print. But over time, Bezos added inventory data from the two major book distributors, Ingram and Baker & Taylor. The promise of access to every book in the world was exciting for people, and the company grew quickly.

Amazon.com as it would have appeared in 1995. Credit: Jeremy Reimer

The explosive growth of these startups fueled a self-perpetuating cycle. As publications like Wired experimented with online versions of their magazines, they invented and sold banner ads to fund their websites. The best customers for these ads were other web startups. These companies wanted more traffic, and they knew ads on sites like Yahoo were the best way to get it. Yahoo salespeople could then turn around and point to their exponential ad sales curves, which caused Yahoo stock to rise. This encouraged people to fund more web startups, which would all need to advertise on Yahoo. These new startups also needed to buy servers from companies like Sun Microsystems, causing those stocks to rise as well.

The crash

In the latter half of the 1990s, it looked like everything was going great. The economy was booming, thanks in part to the rise of the World Wide Web and the huge boost it gave to computer hardware and software companies. The NASDAQ index of tech-focused stocks painted a clear picture of the boom.

The NASDAQ composite index in the 1990s. Credit: Jeremy Reimer

Federal Reserve chairman Alan Greenspan called this phenomenon “irrational exuberance” but didn’t seem to be in a hurry to stop it. The fact that most new web startups didn’t have a realistic business model didn’t seem to bother investors. Sure, Webvan might have been paying more to deliver groceries than it earned from customers, but look at that growth curve!

The exuberance couldn’t last forever. The NASDAQ peaked at 5,048.62 in March 2000 and started to go down. In one month, it lost 34 percent of its value, and by August 2001, it was down to 1,805.43. Web companies laid off employees or went out of business completely. The party was over.

Andreessen said that the tech crash scarred him. “The overwhelming message to our generation in the early nineties was ‘You’re dirty, you’re all about grunge—you guys are fucking losers!’ Then the tech boom hit, and it was ‘We are going to do amazing things!’ And then the roof caved in, and the wisdom was that the Internet was a mirage. I 100 percent believed that because the rejection was so personal—both what everybody thought of me and what I thought of myself.”

But while some companies quietly celebrated the end of the whole Internet thing, others would rise from the ashes of the dot-com collapse. That’s the subject of our third and final article.


I’m a writer and web developer. I specialize in the obscure and beautiful, like the Amiga and newLISP.



Apple details the end of Intel Mac support and a phaseout for Rosetta 2

The support list for macOS Tahoe still includes Intel Macs, but it has been whittled down to just four models, all released in 2019 or 2020. We speculated that this meant that the end was near for Intel Macs, and now we can confirm just how near it is: macOS Tahoe will be the last new macOS release to support any Intel Macs. All new releases starting with macOS 27 will require an Apple Silicon Mac.

Apple will provide additional security updates for Tahoe until fall 2028, two years after it is replaced with macOS 27. That’s a typical schedule for older macOS versions, which all get one year of major point updates that include security fixes and new features, followed by two years of security-only updates to keep them patched but without adding significant new features.

Apple is also planning changes to Rosetta 2, the Intel-to-Arm app translation technology created to ease the transition between the Intel and Apple Silicon eras. Rosetta will continue to work as a general-purpose app translation tool in both macOS 26 and macOS 27.

But after that, Rosetta will be pared back and will only be available to a limited subset of apps—specifically, older games that rely on Intel-specific libraries but are no longer being actively maintained by their developers. Devs who want their apps to continue running on macOS after that will need to transition to either Apple Silicon-native apps or universal apps that run on either architecture.



Apple drops support for just three iPhone and iPad models from iOS and iPadOS 26

Every year, Apple releases new versions of iOS and iPadOS, and most years those updates also end support for a handful of devices that are too old or too slow or otherwise incapable of running the new software.

Though this year’s macOS 26 Tahoe release was unkind to Intel Macs, the iOS 26 and iPadOS 26 releases are more generous, dropping support for just two iPhone models and a single iPad. The iOS 26 update won’t run on 2018’s iPhone XR or XS, and iPadOS 26 won’t run on 2019’s 7th-generation iPad. Any other device that can currently run iOS or iPadOS 18 will be able to upgrade to the new versions and pick up the new Liquid Glass look, among other features.

Apple never provides explicit reasoning for why it drops the devices it drops, though the decisions can usually be explained by some combination of age and technical capability. The 7th-gen iPad, for example, was still using the 2016-vintage Apple A10 Fusion chip despite not being introduced until 2019.

The iPhone XR and XS, on the other hand, use an Apple A12 chip, the same one used by several still-supported iPads.

Apple usually provides security-only patches for retired iDevices for a year or two after they stop running the newest OS, though the company never publishes timelines for these updates, and the iPhone and iPad haven’t been treated as reliably as Macs have. Still, if you do find yourself relying on one of those older devices, you can probably wring a bit more usefulness out of it before you start missing out on critical security patches.



Warner Bros. Discovery makes still more changes, will split streaming, TV business

Warner Bros. Discovery will split its business into two publicly traded companies, with one focused on its streaming and studios business and the other on its television network businesses, including CNN and Discovery.

The US media giant said the move would unlock value for shareholders as well as create opportunities for both businesses, breaking up a group created just three years ago from the merger of Warner Media and Discovery.

Warner Bros. Discovery revealed its intent to split its business in two last year, a plan first reported by the Financial Times in July. The company intends to complete the split by the middle of next year.

The move comes on the heels of a similar move by rival Comcast, which last year announced plans to spin off its television networks, including CNBC and MSNBC, into a separate company.

US media giants are seeking to split their faster-growing streaming businesses from their legacy television networks, which are facing the prospect of long-term decline as viewers turn away from traditional television.

Warner Bros. Discovery shares were up more than 10 percent in pre-market trading.

David Zaslav, chief executive of Warner Bros. Discovery, will head the streaming and studios arm, while chief financial officer Gunnar Wiedenfels will serve as president and chief executive of global networks. Both will continue in their present roles until the separation.

Zaslav said on Monday the split would result in a “sharper focus” and enhanced “strategic flexibility,” that would leave each company better placed to compete in “today’s evolving media landscape.”

Warner Bros. Discovery Chair Samuel A. Di Piazza Jr. said the move would “enhance shareholder value.”

The streaming and studios arm will consist of Warner Bros. Television, Warner Bros. Motion Picture Group, DC Studios, HBO, and HBO Max, as well as their film and television libraries.

Global networks will include entertainment, sports, and news television brands around the world, including CNN, TNT Sports in the US, and Discovery.

© 2025 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.



Bill Atkinson, architect of the Mac’s graphical soul, dies at 74

Using HyperCard, teachers created interactive lessons, artists built multimedia experiences, and businesses developed custom database applications—all without writing traditional code. The hypermedia environment also had a huge impact on gaming: the 1993 first-person adventure hit Myst originally used HyperCard as its game engine.

An example of graphical dithering, which allows 1-bit color (black and white only) to imitate grayscale. Credit: Benj Edwards / Apple

For the two-color Macintosh (which could only display black or white pixels, with no gradient in between), Atkinson developed an innovative high-contrast dithering algorithm that created the illusion of grayscale images with a characteristic stippled appearance that became synonymous with early Mac graphics. The dithered aesthetic remains popular today among some digital artists and indie game makers, and modern web-based converters let anyone transform photos into the classic Atkinson dither style.
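
Atkinson’s original QuickDraw routine isn’t reproduced here, but the error-diffusion kernel commonly credited to him is simple enough to sketch: snap each pixel to black or white, then push six-eighths of the rounding error onto six nearby pixels and throw the rest away. A minimal Python version, assuming an 8-bit grayscale input:

```python
def atkinson_dither(gray):
    """Dither a 2D list of 0-255 values to a 2D list of 0 or 255.

    A sketch of Atkinson-style error diffusion, not Atkinson's
    original code: each of six neighbors receives 1/8 of the
    quantization error, and the remaining 1/4 is discarded.
    """
    h, w = len(gray), len(gray[0])
    px = [row[:] for row in gray]             # working copy
    out = [[0] * w for _ in range(h)]
    neighbors = [(1, 0), (2, 0), (-1, 1), (0, 1), (1, 1), (0, 2)]
    for y in range(h):
        for x in range(w):
            new = 255 if px[y][x] >= 128 else 0
            err = (px[y][x] - new) / 8.0      # 1/8 of the error per neighbor
            out[y][x] = new
            for dx, dy in neighbors:
                if 0 <= x + dx < w and 0 <= y + dy < h:
                    px[y + dy][x + dx] += err
    return out

# A smooth gray ramp comes out as the familiar stipple of dots.
ramp = [[x * 255.0 / 15 for x in range(16)] for _ in range(8)]
dithered = atkinson_dither(ramp)
```

Discarding a quarter of the error (rather than diffusing all of it, as Floyd-Steinberg dithering does) is what blows highlights out to pure white and shadows to pure black, producing the high-contrast look described above.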

Life after Apple

After leaving Apple in 1990, Atkinson co-founded General Magic with Marc Porat and Andy Hertzfeld, attempting to create personal communicators before smartphones existed. Wikipedia notes that in 2007, he joined Numenta, an AI startup, declaring their work on machine intelligence “more fundamentally important to society than the personal computer and the rise of the Internet.”

In his later years, Atkinson pursued nature photography with the same artistry he’d brought to programming. His 2004 book “Within the Stone” featured close-up images of polished rocks that revealed hidden worlds of color and pattern.

Atkinson announced his pancreatic cancer diagnosis in November 2024, writing on Facebook that he had “already led an amazing and wonderful life.” The same disease claimed his friend and collaborator Steve Jobs in 2011.

Given Atkinson’s deep contributions to Apple history, it’s not surprising that Jobs’ successor, Apple CEO Tim Cook, paid tribute to the Mac’s original graphics guru on X on Saturday. “We are deeply saddened by the passing of Bill Atkinson,” Cook wrote. “He was a true visionary whose creativity, heart, and groundbreaking work on the Mac will forever inspire us.”



What to expect from Apple’s Worldwide Developers Conference next week


i wwdc what you did there

We expect to see new designs, new branding, and more at Apple’s WWDC 2025.

Apple’s Worldwide Developers Conference kicks off on Monday with the company’s standard keynote presentation—a combination of PR about how great Apple and its existing products are and a first look at the next-generation versions of iOS, iPadOS, macOS, and the company’s other operating systems.

Reporting before the keynote rarely captures everything that Apple has planned at its presentations, but the reliable information we’ve seen so far is that Apple will keep the focus on its software this year rather than using the keynote to demo splashy new hardware like the Vision Pro and Apple Silicon Mac Pro, which the company introduced at WWDC a couple years back.

If you haven’t been keeping track, here are a few of the things that are most likely to happen when the pre-recorded announcement videos start rolling next week.

Redesign time

Reliable reports from Bloomberg’s Mark Gurman have been saying for months that Apple’s operating systems are getting a design overhaul at WWDC.

The company apparently plans to use the design of the Vision Pro’s visionOS software as a jumping-off point for the new designs, introducing more transparency and UI elements that appear to be floating on the surface of your screen. Apple’s overarching goal, according to Gurman, is to “simplify the way users navigate and control their devices” by “updating the style of icons, menus, apps, windows and system buttons.”

Apple’s airy, floaty visionOS will apparently serve as the inspiration for its next-generation software design. Credit: Apple

Any good software redesign needs to walk a tightrope between freshening up an old look and solving old problems without changing people’s devices so much that they become unrecognizable and unfamiliar. The number of people who have complained to me about the iOS 18-era redesign of the Photos app suggests that Apple doesn’t always strike the right balance. But a new look can also generate excitement and encourage upgrades more readily than some of the low-profile or under-the-hood improvements that these updates normally focus on.

The redesigned UI should be released simultaneously for iOS, iPadOS, and macOS. The Mac last received a significant facelift back in 2020 with macOS 11 Big Sur, though this was overshadowed at the time by the much more significant shift from Intel’s chips to Apple Silicon. The current iOS and iPadOS design has its roots in 2013’s iOS 7, though with over a decade’s worth of gradual evolution on top.

An OS by any other name

With the new design will apparently come a new naming scheme, shifting from the current version numbers to new numbers based on the year. So we allegedly won’t be seeing iOS 19, macOS 16, watchOS 12, or visionOS 3—instead, we’ll get iOS 26, macOS 26, watchOS 26, and visionOS 26.

The new numbers might be a little confusing at first, especially for the period of overlap where Apple is actively supporting (say) macOS 14, macOS 15, and macOS 26. But in the long run, the consistency should make it easier to tell roughly how old your software is and will also make it easier to tell whether your device is running current software without having to remember the number for each of your individual devices.

It also unifies the approach to any new operating system variants Apple might announce—tvOS started at version 9 and iPadOS at version 13, for example, because they were linked to the then-current iOS release. But visionOS and watchOS both started over from 1.0, and the macOS version is based on the year that Apple arbitrarily decided to end the 20-year-old “Mac OS X” branding and jump up to 11.

Note that those numbers will use the upcoming year rather than the current year—iOS 26 will be Apple’s latest and greatest OS for about three months in 2025, assuming the normal September-ish launch, but it will be the main OS for nine months in 2026. Apple usually also waits until later in the fall or winter to start forcing people onto the new OS, issuing at least a handful of security-only updates for the outgoing OS for people who don’t want to be guinea pigs for a possibly buggy new release.

Seriously, don’t get your hopes up about hardware

Apple showed off Vision Pro at WWDC in 2023, but we’re not expecting to see much hardware this year. Credit: Samuel Axon

Gurman has reported that Apple had “no major new devices ready to ship” this year.

Apple generally concentrates its hardware launches in the spring and fall: quieter, lower-profile announcements in the spring and bigger ones in the fall, anchored by the tentpole that is the iPhone. But WWDC has occasionally been a launching point for new Macs (because Macs are the only systems that run Xcode, Apple’s development environment) and even brand-new platforms (because getting developers on board early is one way to increase a platform’s chances of success). The best available information suggests that neither of those things is happening this time around.

There are possibilities, though. Apple has apparently been at work behind the scenes on expanding its smart home footprint, and the eternally neglected Mac Pro is still using an M2 Ultra when an M3 Ultra already exists. But especially with a new redesign to play up, we’d expect Apple to keep the spotlight on its software this time around.

The fate of Intel Macs

It’s been five years since Apple started moving from Intel’s chips to its own custom silicon in Macs and two years since Apple sold its last Intel Macs. And since the very start of the transition, Apple has resisted providing a firm answer to the question of when Intel Macs will stop getting new macOS updates.

Our analysis of years of support data suggests two likely possibilities: that Apple releases one more new version of macOS for Intel Macs before shifting to a couple years of security-only updates or that Apple pulls the plug and shifts to security-only updates this year.

Rumors suggest that current betas still run on the last couple rounds of Intel Macs, dropping support for some older or slower models introduced between 2018 and 2020. If that’s true, there’s a pretty good chance it’s the last new macOS version to officially support Intel CPUs. Regardless, we’ll know more when the first betas drop after the keynote.

Even if the new version of macOS supports some Intel Macs, expect the list of features that require Apple Silicon to keep getting longer.

iPad multitasking? Again?

The perennial complaint about high-end iPads is that the hardware is a lot more capable than the software allows it to be. And every couple of years, Apple takes another crack at making the iPad a viable laptop replacement by improving the state of multitasking on the platform. This will allegedly be another one of those years.

We don’t know much about what form these multitasking improvements will take—whether they’re a further refinement of existing features like Stage Manager or something entirely new. The changes have been described as “more like macOS,” but that could mean pretty much anything.

Playing games

People play plenty of games on Apple’s devices, but they still aren’t really a “destination” for gaming in the same way that a dedicated console or Windows PC is. The company is apparently hoping to change that with a new unified app for games. Like Valve’s Steam, the app will reportedly serve as a storefront, launcher, and achievement tracker, and will also facilitate communication between friends playing the same game.

Apple took a similar stab at this idea in the early days of the iPhone with Game Center, which still exists as a service in the background on modern Apple devices but was discontinued as a standalone app quite a few years ago.

Apple has been trying for a few years now to make its operating systems more hospitable to gaming, especially in macOS. The company has added a low-latency Game Mode to macOS and comprehensive support for modern wireless gamepads from Microsoft, Sony, and Nintendo. The company’s Game Porting Toolkit stops short of being a consumer-friendly way to run Windows games on macOS, but it does give developers of Windows games an easier on-ramp for testing and porting their games to Apple’s platforms. We’ll see whether a unified app can help any of these other gaming features gel into something that feels cohesive.

Going home

Might we see a more prominent, marketable name for what Apple currently calls the “HomePod Software”? Credit: Jeff Dunn

One of Apple’s long-simmering behind-the-scenes hardware projects is apparently a new kind of smart home device that weds the HomePod’s current capabilities with a vaguely Apple TV-like touchscreen interface. In theory, this device would compete with the likes of Amazon’s Echo Show devices.

Part of those plans involve a “new” operating system to replace what is known to the public as “HomePod Software” (and internally as audioOS). This so-called “homeOS” has been rumored for a bit, and some circumstantial evidence points to some possible pre-WWDC trademark activity around that name. Like the current HomePod software—and just about every other OS Apple maintains—homeOS would likely be a specialized offshoot of iOS. But even if it doesn’t come with new hardware right away, new branding could suggest that Apple is getting ready to expand its smart home ambitions.

What about AI?

Finally, it wouldn’t be a mid-2020s tech keynote without some kind of pronouncements about AI. Last year’s WWDC was the big public unveiling of Apple Intelligence, and (nearly) every one of Apple’s product announcements since then has made a point of highlighting the hardware’s AI capabilities.

We’d definitely expect Apple to devote some time to Apple Intelligence, but the company may be more hesitant to announce big new features in advance, following a news cycle where even normally sympathetic Apple boosters like Daring Fireball’s John Gruber excoriated the company for promising AI features that it was nowhere near ready to launch—or even to demo to the public. The executives handling Apple’s AI efforts were reshuffled following that news cycle; whether it was due to Gruber’s piece or the underlying problems outlined in the article is anyone’s guess.

Apple will probably try to find a middle road, torn between not wanting to overpromise and underdeliver and not wanting to seem “behind” on the tech industry’s biggest craze. There’s a decent chance that the new “more personalized” version of Siri will finally make a public appearance. But I’d guess that Apple will focus more on iterations of existing Apple Intelligence features like summaries or Writing Tools rather than big swings.


Andrew is a Senior Technology Reporter at Ars Technica, with a focus on consumer tech including computer hardware and in-depth reviews of operating systems like Windows and macOS. Andrew lives in Philadelphia and co-hosts a weekly book podcast called Overdue.



Nintendo Switch 2’s faster chip can dramatically improve original Switch games

Link’s Awakening, Switch 1, docked. Credit: Andrew Cunningham

It’s pretty much the same story for Link’s Awakening. Fine detail is much more visible, and the 3D is less aliased-looking because the Switch 2 is running the game at a higher resolution. Even the fairly aggressive background blur the game uses looks toned down on the Switch 2.

Link’s Awakening on the Switch 1, docked.

Link’s Awakening on the Switch 2, docked.

The videos of these games aren’t quite as obviously impressive as the Pokémon ones, but they give you a sense of the higher resolution on the Switch 2 and the way that the Switch’s small endemic frame rate hiccups are no longer a problem.

Quiet updates

For the last two categories of games, we won’t be waxing as poetic about the graphical improvements because there aren’t many. In fact, some of these games we played looked ever-so-subtly worse on the Switch 2 in handheld mode, likely a side effect of a 720p handheld-mode image being upscaled to the Switch 2’s 1080p native resolution.

That said, we still noticed minor graphical improvements. In Kirby Star Allies, for example, the 3D elements in the picture looked mostly the same, with roughly the same resolution, same textures, and similar overall frame rates. But 2D elements of the UI still seemed to be aware that the console was outputting a 4K image and were visibly sharper as a result.

Games without updates

If you were hoping that all games would get some kind of “free” resolution or frame rate boost from the Switch 2, that mostly doesn’t happen. Games like Kirby’s Return to Dream Land Deluxe and Pokémon Legends: Arceus, neither of which got any kind of Switch 2-specific update, look mostly identical on both consoles. If you get right up close and do some pixel peeping, you can occasionally see places where outputting a 4K image instead of a 1080p image will look better on a 4K TV, but it’s nothing like what we saw in the other games we tested.

Pokémon Legends: Arceus, Switch 1, docked.

Pokémon Legends: Arceus, Switch 2, docked.

However, it does seem that the Switch 2 may help out somewhat in terms of performance consistency. Observe the footage of a character running around town in Pokémon Legends—the resolution, draw distance, and overall frame rate all look pretty much the same. But the minor frame rate dips and hitches you see on the Switch 1 seem to have been at least partially addressed on the Switch 2. Your mileage will vary, of course. But you may encounter cases where a game targeting a stable 30 fps on the Switch 1 will hit that 30 fps with a bit more consistency on the Switch 2.



Google’s nightmare: How a search spinoff could remake the web


Google has shaped the Internet as we know it, and unleashing its index could change everything.

Google may be forced to license its search technology when the final antitrust ruling comes down. Credit: Aurich Lawson

Google wasn’t around for the advent of the World Wide Web, but it successfully remade the web on its own terms. Today, any website that wants to be findable has to play by Google’s rules, and after years of search dominance, the company has lost a major antitrust case that could reshape both it and the web.

The closing arguments in the case just wrapped up last week, and Google could be facing serious consequences when the ruling comes down in August. Losing Chrome would certainly change things for Google, but the Department of Justice is pursuing other remedies that could have even more lasting impacts. During his testimony, Google CEO Sundar Pichai seemed genuinely alarmed at the prospect of being forced to license Google’s search index and algorithm, the so-called data remedies in the case. He claimed this would be no better than a spinoff of Google Search. The company’s statements have sometimes derisively referred to this process as “white labeling” Google Search.

But does a white label Google Search sound so bad? Google has built an unrivaled index of the web, but the way it shows results has become increasingly frustrating. A handful of smaller players in search have tried to offer alternatives to Google’s search tools. They all have different approaches to retrieving information for you, but they agree that spinning off Google Search could change the web again. Whether or not those changes are positive depends on who you ask.

The Internet is big and noisy

As Google’s search results have changed over the years, more people have been open to other options. Some have simply moved to AI chatbots to answer their questions, hallucinations be damned. But for most people, it’s still about the 10 blue links (for now).

Because of the scale of the Internet, there are only three general web search indexes: Google, Bing, and Brave. Every search product (including AI tools) relies on one or more of these indexes to probe the web. But what does that mean?

“Generally, a search index is a service that, when given a query, is able to find relevant documents published on the Internet,” said Brave’s search head Josep Pujol.

A search index is essentially a big database, and that’s not the same as search results. According to JP Schmetz, Brave’s chief of ads, it’s entirely possible to have the best and most complete search index in the world and still show poor results for a given query. Sound like anyone you know?
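
To make that distinction concrete, here is a toy inverted index in Python, with retrieval separated from ranking (illustrative only; a real web index is vastly larger and more elaborate):

```python
from collections import defaultdict

docs = {
    1: "brave builds its own index of the public web",
    2: "a search index maps words to matching documents",
    3: "ranking decides which matching documents to show first",
}

# Indexing: map each term to the set of documents containing it.
index = defaultdict(set)
for doc_id, text in docs.items():
    for term in text.split():
        index[term].add(doc_id)

def search(query):
    terms = query.split()
    # Retrieval: documents that contain every query term...
    hits = set.intersection(*(index[t] for t in terms)) if terms else set()
    # ...then ranking, here a crude term-frequency score. Two engines
    # sharing the same index could order these hits very differently.
    return sorted(hits, key=lambda d: -sum(docs[d].split().count(t) for t in terms))

print(search("matching documents"))  # retrieval finds docs 2 and 3
```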

Google’s technological lead has allowed it to crawl more websites than anyone else. It has all the important parts of the web, plus niche sites, abandoned blogs, sketchy copies of legitimate websites, copies of those copies, and AI-rephrased copies of the copied copies—basically everything. And the result of this Herculean digital inventory is a search experience that feels increasingly discombobulated.

“Google is running large-scale experiments in ways that no rival can because we’re effectively blinded,” said Kamyl Bazbaz, head of public affairs at DuckDuckGo, which uses the Bing index. “Google’s scale advantage fuels a powerful feedback loop of different network effects that ensure a perpetual scale and quality deficit for rivals that locks in Google’s advantage.”

The size of the index may not be the only factor that matters, though. Brave, which is perhaps best known for its browser, also has a search engine. Brave Search is the default in its browser, but you can also just go to the URL in your current browser. Unlike most other search engines, Brave doesn’t need to go to anyone else for results. Pujol suggested that Brave doesn’t need the scale of Google’s index to find what you need. And admittedly, Brave’s search results don’t feel meaningfully worse than Google’s—they may even be better when you consider the way that Google tries to keep you from clicking.

Brave’s index spans around 25 billion pages, but it leaves plenty of the web uncrawled. “We could be indexing five to 10 times more pages, but we choose not to because not all the web has signal. Most web pages are basically noise,” said Pujol.

The freemium search engine Kagi isn’t worried about having the most comprehensive index. Kagi is a meta search engine. It pulls in data from multiple indexes, like Bing and Brave, but it has a custom index of what founder and CEO Vladimir Prelovac calls the “non-commercial web.”

When you search with Kagi, some of the results (it tells you the proportion) come from its custom index of personal blogs, hobbyist sites, and other content that is poorly represented on other search engines. It’s reminiscent of the days when huge brands weren’t always clustered at the top of Google—but even these results are being pushed out of reach in favor of AI, ads, Knowledge Graph content, and other Google widgets. That’s a big part of why Kagi exists, according to Prelovac.

A Google spinoff could change everything

We’ve all noticed the changes in Google’s approach to search, and most would agree that they have made finding reliable and accurate information harder. Regardless, Google’s incredibly deep and broad index of the Internet is in demand.

Even with Bing and Brave available, companies are going to extremes to syndicate Google Search results. A cottage industry has emerged to scrape Google searches as a stand-in for an official index. These companies are violating Google’s terms, yet they appear in Google Search results themselves. Google could surely do something about this if it wanted to.

The DOJ calls Google’s mountain of data the “essential raw material” for building a general search engine, and it believes forcing the firm to license that material is key to breaking its monopoly. The sketchy syndication firms will evaporate if the DOJ’s data remedies are implemented, which would give competitors an official way to utilize Google’s index. And utilize it they will.

Google CEO Sundar Pichai decried the court’s efforts to force a “de facto divestiture” of Google’s search tech. Credit: Ryan Whitwam

According to Prelovac, this could lead to an explosion in search choices. “The whole purpose of the Sherman Act is to proliferate a healthy, competitive marketplace. Once you have access to a search index, then you can have thousands of search startups,” said Prelovac.

The Kagi founder suggested that licensing Google Search could allow entities of all sizes to have genuinely useful custom search tools. Cities could use the data to create deep, hyper-local search, and people who love cats could make a cat-specific search engine, in both cases pulling what they want from the most complete database of online content. And, of course, general search products like Kagi would be able to license Google’s tech for a “nominal fee,” as the DOJ puts it.

Prelovac didn’t hesitate when asked if Kagi, which offers a limited number of free searches before asking users to subscribe, would integrate Google’s index. “Yes, that is something we would do,” he said. “And that’s what I believe should happen.”

There may be some drawbacks to unleashing Google’s search services. Judge Amit Mehta has expressed concern that blocking Google’s search placement deals could reduce browser choice, and there is a similar issue with the data remedies. If Google is forced to license search as an API, its few competitors in web indexing could struggle to remain afloat. In a roundabout way, giving away Google’s search tech could actually increase its influence.

The Brave team worries about how open access to Google’s search technology could impact diversity on the web. “If implemented naively, it’s a big problem,” said Brave’s ad chief JP Schmetz. “If the court forces Google to provide search at a marginal cost, it will not be possible for Bing or Brave to survive until the remedy ends.”

The landscape of AI-based search could also change. We know from testimony given during the remedy trial by OpenAI’s Nick Turley that the ChatGPT maker tried and failed to get access to Google Search to ground its AI models—it currently uses Bing. If Google were suddenly an option, you can be sure OpenAI and others would rush to connect Google’s web data to their large language models (LLMs).

The attempt to reduce Google’s power could actually grant it new monopolies in AI, according to Brave Chief Business Officer Brian Brown. “All of a sudden, you would have a single monolithic voice of truth across all the LLMs, across all the web,” Brown said.

What if you weren’t the product?

If white labeling Google does expand choice, even at the expense of other indexes, it will give more kinds of search products a chance in the market—maybe even some that shun Google’s focus on advertising. You don’t see much of that right now.

For most people, web search is and always has been a free service supported by ads. Google, Brave, DuckDuckGo, and Bing offer all the search queries you want for free because they want eyeballs. It’s been said often, but it’s true: If you’re not paying for it, you’re the product. This is an arrangement that bothers Kagi’s founder.

“For something as important as information consumption, there should not be an intermediary between me and the information, especially one that is trying to sell me something,” said Prelovac.

Kagi search results acknowledge the negative impact of today’s advertising regime. Kagi users see a warning next to results with a high number of ads and trackers. According to Prelovac, that is by far the strongest indication that a result is of low quality. That icon also lets you adjust the prevalence of such sites in your personal results. You can demote a site or completely hide it, which is a valuable option in the age of clickbait.

Kagi search gives you a lot of control. Credit: Ryan Whitwam

Kagi’s paid approach to search changes its relationship with your data. “We literally don’t need user data,” Prelovac said. “But it’s not only that we don’t need it. It’s a liability.”

Prelovac admitted that getting people to pay for search is “really hard.” Nevertheless, he believes ad-supported search is a dead end. So Kagi is planning for a future in five or 10 years when more people have realized they’re still “paying” for ad-based search with lost productivity time and personal data, he said.

We know how Google handles user data (it collects a lot of it), but what does that mean for smaller search engines like Brave and DuckDuckGo that rely on ads?

“I’m sure they mean well,” said Prelovac.

Brave said that it shields user data from advertisers, relying on first-party tracking to attribute clicks to Brave without touching the user. “They cannot retarget people later; none of that is happening,” said Brave’s JP Schmetz.

DuckDuckGo is a bit of an odd duck—it relies on Bing’s general search index, but it adds a layer of privacy tools on top. It’s free and ad-supported like Google and Brave, but the company says it takes user privacy seriously.

“Viewing ads is privacy protected by DuckDuckGo, and most ad clicks are managed by Microsoft’s ad network,” DuckDuckGo’s Kamyl Bazbaz said. He explained that DuckDuckGo has worked with Microsoft to ensure its network does not track users or create any profiles based on clicks. He added that the company has a similar privacy arrangement with TripAdvisor for travel-related ads.

It’s AI all the way down

We can’t talk about the future of search without acknowledging the artificially intelligent elephant in the room. As Google continues its shift to AI-based search, it’s tempting to think of the potential search spin-off as a way to escape that trend. However, you may find few refuges in the coming years. There’s a real possibility that search is evolving beyond the 10 blue links and toward an AI assistant model.

All non-Google search engines have AI integrations, with the most prominent being Microsoft Bing, which has a partnership with OpenAI. But smaller players have AI search features, too. The folks working on these products agree with Microsoft and Google on one important point: They see AI as inevitable.

Today’s Google alternatives all have their own take on AI Overviews, which generates responses to queries based on search results. They’re generally not as in-your-face as Google AI, though. While Google and Microsoft are intensely focused on increasing the usage of AI search, other search operators aren’t pushing for that future. They are along for the ride, though.

AI Overviews are integrated with Google’s search results, and most other players have their own version. Credit: Google

“We’re finding that some people prefer to start in chat mode and then jump into more traditional search results when needed, while others prefer the opposite,” Bazbaz said. “So we thought the best thing to do was offer both. We made it easy to move between them, and we included an off switch for those who’d like to avoid AI altogether.”

The team at Brave views AI as a core means of accessing search and one that will continue to grow. Brave generates AI answers for many searches and prominently cites sources. You can also disable Brave’s AI if you prefer. But according to search chief Josep Pujol, the move to AI search is inevitable for a pretty simple reason: It’s convenient, and people will always choose convenience. So AI is changing the web as we know it, for better or worse, because LLMs can save a smidge of time, especially for more detailed “long-tail” queries. These AI features may give you false information while they do it, but that’s not always apparent.

This is very similar to the language Google uses when discussing agentic search, although it expresses it in a more nuanced way. By understanding the task behind a query, Google hopes to provide AI answers that save people time, even if the model needs a few ticks to fan out and run multiple searches to generate a more comprehensive report on a topic. That’s probably still faster than running multiple searches and manually reviewing the results, and it could leave traditional search as an increasingly niche service, even in a world with more choices.
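As a rough illustration of that fan-out pattern, the sketch below issues several sub-queries in parallel and pools the results for a model to summarize. The search() function is a hypothetical stand-in for a real search backend, not Google’s actual pipeline:

```python
# Hypothetical fan-out sketch; search() stands in for a real search API.
from concurrent.futures import ThreadPoolExecutor

def search(query):
    # Placeholder: an actual agent would call a search backend here.
    return [f"result for {query!r}"]

def fan_out(sub_queries):
    # Issue the sub-queries in parallel rather than one at a time...
    with ThreadPoolExecutor() as pool:
        result_lists = list(pool.map(search, sub_queries))
    # ...then pool everything for a model to condense into one report.
    return [r for results in result_lists for r in results]

print(fan_out(["best mirrorless cameras 2025",
               "mirrorless vs DSLR autofocus",
               "mirrorless camera battery life"]))
```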

“Will the 10 blue links continue to exist in 10 years?” Pujol asked. “Actually, one question would be, does it even exist now? In 10 years, [search] will have evolved into more of an AI conversation behavior or even agentic. That is probably the case. What, for sure, will continue to exist is the need to search. Search is a verb, an action that you do, and whether you will do it directly or whether it will be done through an agent, it’s a search engine.”

Kagi’s Prelovac sees AI becoming the default way we access information in the long term, but his search engine doesn’t force you to use it. On Kagi, you can expand the AI box for your searches and ask follow-ups, and the AI will open automatically if you use a question mark in your search. But that’s just the start.

“You watch Star Trek, nobody’s clicking on links there—I do believe in that vision in science fiction movies,” Prelovac said. “I don’t think my daughter will be clicking links in 10 years. The only question is if the current technology will be the one that gets us there. LLMs have inherent flaws. I would even tend to say it’s likely not going to get us to Star Trek.”

If we think of AI mainly as a way to search for information, the future becomes murky. With generative AI in the driver’s seat, questions of authority and accuracy may be left to language models that often behave in unpredictable and difficult-to-understand ways. Whether we’re headed for an AI boom or bust—for continued Google dominance or a new era of choice—we’re facing fundamental changes to how we access information.

Maybe if we get those thousands of search startups, there will be a few that specialize in 10 blue links. We can only hope.

Ryan Whitwam is a senior technology reporter at Ars Technica, covering the ways Google, AI, and mobile technology continue to change the world. Over his 20-year career, he’s written for Android Police, ExtremeTech, Wirecutter, NY Times, and more. He has reviewed more phones than most people will ever own. You can follow him on Bluesky, where you will see photos of his dozens of mechanical keyboards.

discord-cto-says-he’s-“constantly-bringing-up-enshittification”-during-meetings

Discord CTO says he’s “constantly bringing up enshittification” during meetings

Discord members are biting their nails. As reports swirl that the social media company is planning an initial public offering this year, and as it leans increasingly on advertising revenue, there’s fear that Discord will become engulfed in the enshittification that has already scarred so many online communities. Co-founder and CTO Stanislav Vishnevskiy claims he’s worried about that, too.

In an interview with Engadget published today, Vishnevskiy claimed that Discord employees regularly discuss concerns about Discord going astray and angering users.

“I understand the anxiety and concern,” Vishnevskiy said. “I think the things that people are afraid of are what separate a great, long-term focused company from just any other company.”

But there are reasons for long-time Discord users to worry about the platform changing for the worse in the coming years. The most obvious one is Discord’s foray into ads, something the company had avoided since launching in 2015. Discord started showing ads in March 2024 via its desktop and console apps. Since then, it has introduced video ads to its mobile app and launched Orbs, which users earn by clicking on ads in Discord and can trade for in-game rewards. Discord also recently said that it plans to start selling ads to more companies.

Fanning expectations that Discord will go public soon and look different afterward, co-founder and CEO Jason Citron left in April. His replacement, Humam Sakhnini, has experience leading public companies like Activision Blizzard. When Citron announced his departure, GamesBeat asked him whether Discord was going public; he claimed there were “no specific plans” but added that “hiring someone like Humam is a step in that direction.” Vishnevskiy declined to comment on a potential Discord IPO while speaking to Engadget.

Amid current and imminent changes, though, Vishnevskiy claims to be eyeing Discord’s enshittification risk, telling Engadget:

I’m definitely the one who’s constantly bringing up enshittification [at internal meetings]. It’s not a bad thing to build a strong business and to monetize a product. That’s how we can reinvest and continue to make things better. But we have to be extremely thoughtful about how we do that.

Discord has axed bad ideas before

For some, the inclusion of ads is automatic enshittification. However, Discord’s ad load, at least for now, is minimally intrusive: The ads appear in sidebars within the platform, expand only when clicked, and can lead to user rewards.

google’s-new-gemini-2.5-pro-release-aims-to-fix-past-“regressions”-in-the-model

Google’s new Gemini 2.5 Pro release aims to fix past “regressions” in the model

It seems like hardly a day goes by anymore without a new version of Google’s Gemini AI landing, and sure enough, Google is rolling out a major update to its most powerful 2.5 Pro model. This release aims to fix problems that cropped up in an earlier Gemini Pro update, and word is that this version will become a stable release that comes to the Gemini app for everyone to use.

The previous Gemini 2.5 Pro release, known as the I/O Edition, or simply 05-06, was focused on coding upgrades. Google claims the new version is even better at generating code, with a new high score of 82.2 percent in the Aider Polyglot test. That beats the best from OpenAI, Anthropic, and DeepSeek by a comfortable margin.

While the general-purpose Gemini 2.5 Flash has left preview, the Pro version is lagging behind. The last several updates have attracted some valid criticism of 2.5 Pro’s performance outside of coding tasks, going back to the big 03-25 update. Google’s Logan Kilpatrick says the team has taken that feedback to heart and that the new model “closes [the] gap on 03-25 regressions.” For example, users will supposedly see more creative responses with better formatting.

Kilpatrick also notes that the 06-05 release now supports configurable thinking budgets for developers, and the team expects this build to become a “long term stable release.” So, Gemini 2.5 Pro should finally drop its “Preview” disclaimer when this version rolls out to the consumer-facing app and web interface in the coming weeks.
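For developers, a thinking budget caps how many tokens the model may spend on internal reasoning before answering, trading depth for cost and latency. Here’s a minimal sketch, assuming the google-genai Python SDK, with an illustrative model ID and budget value:

```python
# Illustrative sketch using the google-genai SDK; model ID and budget are examples.
from google import genai
from google.genai import types

client = genai.Client()  # reads the API key from the environment

response = client.models.generate_content(
    model="gemini-2.5-pro",  # example model name
    contents="Explain the tradeoffs between B-trees and LSM trees.",
    config=types.GenerateContentConfig(
        # Cap internal reasoning at 1,024 tokens; a larger budget buys
        # deeper reasoning at the cost of latency and billed tokens.
        thinking_config=types.ThinkingConfig(thinking_budget=1024),
    ),
)
print(response.text)
```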

endangered-classic-mac-plastic-color-returns-as-3d-printer-filament

Endangered classic Mac plastic color returns as 3D-printer filament

On Tuesday, classic computer collector Joe Strosnider announced the availability of a new 3D-printer filament that replicates the iconic “Platinum” color scheme used in classic Macintosh computers from the late 1980s through the 1990s. The PLA filament (PLA is short for polylactic acid) allows hobbyists to 3D-print nostalgic novelties, replacement parts, and accessories that match the original color of vintage Apple computers.

Hobbyists commonly feed this type of filament into commercial desktop 3D printers, which heat the plastic and extrude it in a computer-controlled way to fabricate new plastic parts.

The Platinum color, which Apple used across its desktop and portable computer lines starting with the Apple IIgs in 1986, has become synonymous with a distinctive era of the classic Macintosh aesthetic. Original Macintosh plastics have become brittle and discolored with age, so matching the “original” color can be a somewhat challenging and subjective exercise.

A close-up of “Retro Platinum” PLA filament by Polar Filament. Credit: Polar Filament

Strosnider, who runs a website about his extensive vintage computer collection in Ohio, worked for years to color-match the distinctive beige-gray hue of the Macintosh Platinum scheme. The result is a spool of hobby-ready plastic, made by Polar Filament and priced at $21.99 per kilogram.

According to a forum post, Strosnider paid approximately $900 to develop the color and purchase an initial 25-kilogram supply of the filament. Rather than keeping the formulation proprietary, he arranged for Polar Filament to make the color publicly available.

“I paid them a fee to color match the speaker box from inside my Mac Color Classic,” Strosnider wrote in a Tinkerdifferent forum post on Tuesday. “In exchange, I asked them to release the color to the public so anyone can use it.”

american-science-&-surplus-is-fighting-for-its-life-here’s-why-you-should-care.

American Science & Surplus is fighting for its life. Here’s why you should care.

A long history

Old catalogs from the American Lens & Photo days. Credit: American Science & Surplus

American Science & Surplus got its start in 1937 and has expanded its inventory and reach ever since. And Meyer has been a part of the story for the last four decades. “I’ve done everything in the company that there is to do,” he said. “I started off literally scraping floors and Windexing, driven our trucks. I’ve done warehouse sales, I’ve done tent sales. I’ve been store manager, assistant manager, I’ve done—there isn’t anything in this company that I haven’t—and it’s been my life for 41 years.”

Over time, the store has moved far beyond lenses and lab equipment. There’s a science toy section and an aisle devoted to Etsy-style craft supplies. But other, once-thriving areas of the business have suffered. When I first discovered American Science & Surplus in the early 2000s, I would always linger at the massive telescope section. The store staff was always more than happy to answer my questions and explain the differences between the scopes. Now, telescopes are just a small corner of the store, and sales are infrequent. “People come in to ask questions and then buy the telescopes online,” Meyer explained.

In many ways, American Science & Surplus is a physical manifestation of the maker ethos. There is an endless array of motors, switches, cables, tools, and connectors. “Sometimes our customers will send us photos of their creations,” said Meyer. “It’s always cool to see how people are inspired by shopping here.”

The store should feel familiar to those who were alive in the peak days of Radio Shack. In fact, there used to be a Radio Shack in the same strip mall as American Science & Surplus’ old store in the Jefferson Park neighborhood on Chicago’s northwest side. Meyer said that Radio Shack would frequently send customers a few doors down to his store to find things Radio Shack didn’t stock. And one time, the surplus store sent a customer back. “Radio Shack sent one guy over to us after telling him they didn’t have the item in stock,” Meyer said. “We didn’t have it, but one of our associates knew Radio Shack did, so he walked the customer back, pulled the part out of the bin, and handed it to him.”
