Google

california’s-newly-signed-ai-law-just-gave-big-tech-exactly-what-it-wanted

California’s newly signed AI law just gave Big Tech exactly what it wanted

On Monday, California Governor Gavin Newsom signed the Transparency in Frontier Artificial Intelligence Act into law, requiring AI companies to disclose their safety practices while stopping short of mandating actual safety testing. The law requires companies with annual revenues of at least $500 million to publish safety protocols on their websites and report incidents to state authorities, but it lacks the stronger enforcement teeth of the bill Newsom vetoed last year after tech companies lobbied heavily against it.

The legislation, S.B. 53, replaces Senator Scott Wiener’s previous attempt at AI regulation, known as S.B. 1047, that would have required safety testing and “kill switches” for AI systems. Instead, the new law asks companies to describe how they incorporate “national standards, international standards, and industry-consensus best practices” into their AI development, without specifying what those standards are or requiring independent verification.

“California has proven that we can establish regulations to protect our communities while also ensuring that the growing AI industry continues to thrive,” Newsom said in a statement, though the law’s actual protective measures remain largely voluntary beyond basic reporting requirements.

According to the California state government, the state houses 32 of the world’s top 50 AI companies, and more than half of global venture capital funding for AI and machine learning startups went to Bay Area companies last year. So while the recently signed bill is state-level legislation, what happens in California AI regulation will have a much wider impact, both by legislative precedent and by affecting companies that craft AI systems used around the world.

Transparency instead of testing

Where the vetoed SB 1047 would have mandated safety testing and kill switches for AI systems, the new law focuses on disclosure. Companies must report what the state calls “potential critical safety incidents” to California’s Office of Emergency Services and provide whistleblower protections for employees who raise safety concerns. The law defines catastrophic risk narrowly as incidents potentially causing 50+ deaths or $1 billion in damage through weapons assistance, autonomous criminal acts, or loss of control. The attorney general can levy civil penalties of up to $1 million per violation for noncompliance with these reporting requirements.

California’s newly signed AI law just gave Big Tech exactly what it wanted Read More »

f-droid-says-google’s-new-sideloading-restrictions-will-kill-the-project

F-Droid says Google’s new sideloading restrictions will kill the project

F-Droid warns that the project will end if Google is allowed to seize control of the entire Android software ecosystem by way of its developer verification program. In addition to gathering personal information from devs, F-Droid says Google will be demanding registration fees from independent developers, many of whom give their apps away for free and would be uninterested in paying Google for the privilege.

Verification

Google’s application to test verification does ask if you can pay in USD, suggesting it will charge devs for the privilege of creating Android apps.

Credit: Ryan Whitwam

Google’s application to test verification does ask if you can pay in USD, suggesting it will charge devs for the privilege of creating Android apps. Credit: Ryan Whitwam

Google has been slow to provide details of the verification system. However, you can sign up for the early access program. During that process, Google does ask if you are able to pay registration fees in US dollars, which suggests there will be a cost for developers in the program. We’ve reached out to Google for more information.

A plea for regulation

F-Droid’s position is clear: if you own a device, you should be allowed to decide what software to run on it. To force everyone to register with a central authority is an affront to the ideas of free speech and thought, says F-Droid.

So what’s the solution? In the blog post, Google is accused of using security as a mask for what is really an attempt to consolidate monopoly power over app distribution at a time when its power is being suppressed by antitrust actions. F-Droid is calling on regulators from the US and EU to take a close look at Google’s plans before it’s too late.

Google is currently on the verge of massive court-mandated changes to the Play Store. After losing the antitrust case brought by Epic Games, Google went on to lose the appeal. As it explores further legal maneuvering, the firm may have to begin opening up its app distribution system by promoting third-party stores in Google Play and mirroring Google Play content in other storefronts. This will reduce Google’s monopoly power in Android apps, which is the court’s intention. However, the company’s new goal of locking down sideloading could maintain its central role in Android software.

F-Droid calls on concerned developers and users to contact their government representatives to demand action. Specifically, the site suggests invoking the European Commission’s Digital Markets Act (DMA) to keep FOSS apps free from Google’s gatekeeping.

While the pilot verification program is set to launch next month, it will be almost a year before unverified apps will be blocked. That will start with a handful of markets, including Brazil, Indonesia, Singapore, and Thailand. The restrictions are expected to expand globally in 2027.

F-Droid says Google’s new sideloading restrictions will kill the project Read More »

big-ai-firms-pump-money-into-world-models-as-llm-advances-slow

Big AI firms pump money into world models as LLM advances slow

Runway, a video generation start-up that has deals with Hollywood studios, including Lionsgate, launched a product last month that uses world models to create gaming settings, with personalized stories and characters generated in real time.

“Traditional video methods [are a] brute-force approach to pixel generation, where you’re trying to squeeze motion in a couple of frames to create the illusion of movement, but the model actually doesn’t really know or reason about what’s going on in that scene,” said Cristóbal Valenzuela, chief executive officer at Runway.

Previous video-generation models had physics that were unlike the real world, he added, which general-purpose world model systems help to address.

To build these models, companies need to collect a huge amount of physical data about the world.

San Francisco-based Niantic has mapped 10 million locations, gathering information through games including Pokémon Go, which has 30 million monthly players interacting with a global map.

Niantic ran Pokémon Go for nine years and, even after the game was sold to US-based Scopely in June, its players still contribute anonymized data through scans of public landmarks to help build its world model.

“We have a running start at the problem,” said John Hanke, chief executive of Niantic Spatial, as the company is now called following the Scopely deal.

Both Niantic and Nvidia are working on filling gaps by getting their world models to generate or predict environments. Nvidia’s Omniverse platform creates and runs such simulations, assisting the $4.3 trillion tech giant’s push toward robotics and building on its long history of simulating real-world environments in video games.

Nvidia Chief Executive Jensen Huang has asserted that the next major growth phase for the company will come with “physical AI,” with the new models revolutionizing the field of robotics.

Some such as Meta’s LeCun have said this vision of a new generation of AI systems powering machines with human-level intelligence could take 10 years to achieve.

But the potential scope of the cutting-edge technology is extensive, according to AI experts. World models “open up the opportunity to service all of these other industries and amplify the same thing that computers did for knowledge work,” said Nvidia’s Lebaredian.

Additional reporting by Melissa Heikkilä in London and Michael Acton in San Francisco.

© 2025 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.

Big AI firms pump money into world models as LLM advances slow Read More »

youtube-music-is-testing-ai-hosts-that-will-interrupt-your-tunes

YouTube Music is testing AI hosts that will interrupt your tunes

YouTube has a new Labs program, allowing listeners to “discover the next generation of YouTube.” In case you were wondering, that generation is apparently all about AI. The streaming site says Labs will offer a glimpse of the AI features it’s developing for YouTube Music, and it starts with AI “hosts” that will chime in while you’re listening to music. Yes, really.

The new AI music hosts are supposed to provide a richer listening experience, according to YouTube. As you’re listening to tunes, the AI will generate audio snippets similar to, but shorter than, the fake podcasts you can create in NotebookLM. The “Beyond the Beat” host will break in every so often with relevant stories, trivia, and commentary about your musical tastes. YouTube says this feature will appear when you are listening to mixes and radio stations.

The experimental feature is intended to be a bit like having a radio host drop some playful banter while cueing up the next song. It sounds a bit like Spotify’s AI DJ, but the YouTube AI doesn’t create playlists like Spotify’s robot. This is still generative AI, which comes with the risk of hallucinations and low-quality slop, neither of which belongs in your music. That said, Google’s Audio Overviews are often surprisingly good in small doses.

YouTube Music is testing AI hosts that will interrupt your tunes Read More »

experts-urge-caution-about-using-chatgpt-to-pick-stocks

Experts urge caution about using ChatGPT to pick stocks

“AI models can be brilliant,” Dan Moczulski, UK managing director at eToro, told Reuters. “The risk comes when people treat generic models like ChatGPT or Gemini as crystal balls.” He noted that general AI models “can misquote figures and dates, lean too hard on a pre-established narrative, and overly rely on past price action to attempt to predict the future.”

The hazards of AI stock picking

Using AI to trade stocks at home feels like it might be the next step in a long series of technological advances that have democratized individual retail investing, for better or for worse. Computer-based stock trading for individuals dates back to 1984, when Charles Schwab introduced electronic trading services for dial-up customers. E-Trade launched in 1992, and by the late 1990s, online brokerages had transformed retail investing, dropping commission fees from hundreds of dollars per trade to under $10.

The first “robo-advisors” appeared after the 2008 financial crisis, which began the rise of automated online services that use algorithms to manage and rebalance portfolios based on a client’s goals. Services like Betterment launched in 2010, and Wealthfront followed in 2011, using algorithms to automatically rebalance portfolios. By the end of 2015, robo-advisors from nearly 100 companies globally were managing $60 billion in client assets.

The arrival of ChatGPT in November 2022 arguably marked a new phase where retail investors could directly query an AI model for stock picks rather than relying on pre-programmed algorithms. But Leung acknowledged that ChatGPT cannot access data behind paywalls, potentially missing crucial analyses available through professional services. To get better results, he creates specific prompts like “assume you’re a short analyst, what is the short thesis for this stock?” or “use only credible sources, such as SEC filings.”

Beyond chatbots, reliance on financial algorithms is growing. The “robo-advisory” market, which includes all companies providing automated, algorithm-driven financial advice from fintech startups to established banks, is forecast to grow roughly 600 percent by 2029, according to data-analysis firm Research and Markets.

But as more retail investors turn to AI tools for investment decisions, it’s also potential trouble waiting to happen.

“If people get comfortable investing using AI and they’re making money, they may not be able to manage in a crisis or downturn,” Leung warned Reuters. The concern extends beyond individual losses to whether retail investors using AI tools understand risk management or have strategies for when markets turn bearish.

Experts urge caution about using ChatGPT to pick stocks Read More »

amazon-blamed-ai-for-layoffs,-then-hired-cheap-h1-b-workers,-senators-allege

Amazon blamed AI for layoffs, then hired cheap H1-B workers, senators allege


Tech firms pressed to explain if H-1B workers are paid less than US workers.

Senators are demanding answers from Big Tech companies accused of “filing thousands of H-1B skilled labor visa petitions after conducting mass layoffs of American employees.”

In letters sent to Amazon, Meta, Apple, Google, and Microsoft—among some of the largest sponsors of H-1B visas—Senators Chuck Grassley (R-Iowa) and Dick Durbin (D-Ill.) requested “information and data from each company regarding their recruitment and hiring practices, as well as any variation in salary and benefits between H-1B visa holders and American employees.”

The letters came shortly after Grassley sent a letter to Department of Homeland Security Secretary Kristi Noem requesting that DHS stop “issuing work authorizations to student visa holders.” According to Grassley, “foreign student work authorizations put America at risk of technological and corporate espionage,” in addition to allegedly “contributing to rising unemployment rates among college-educated Americans.”

If DHS refuses to stop authorizing the visas, Grassley requested a “detailed explanation of what legal authority DHS is relying on to issue these authorizations.” He suggested that the authorization violates a law intended to ensure that only highly skilled workers and top talents that can’t be found in the US are granted visas.

In the letters to tech firms, senators emphasized that the unemployment rate in America’s tech sector is “well above” the overall jobless rate.

Amazon perhaps faces the most scrutiny. US Citizenship and Immigration Services data showed that Amazon sponsored the most H-1B visas in 2024 at 14,000, compared to other criticized firms like Microsoft and Meta, which each sponsored 5,000, The Wall Street Journal reported. Senators alleged that Amazon blamed layoffs of “tens of thousands” on the “adoption of generative AI tools,” then hired more than 10,000 foreign H-1B employees in 2025.

The letter similarly called out Meta for its “year of efficiency,” laying off “a quarter of its workforce” between 2022 and 2023. Meta followed that with more layoffs impacting 3,500 employees in 2025, Senators noted, while receiving approval to hire more than 5,000 H-1B employees.

Senators also pushed Google to explain why it “laid off tens-of-thousands of employees in recent years” despite “enjoying record profits.”

“With all of the homegrown American talent relegated to the sidelines, we find it hard to believe that [you] cannot find qualified American tech workers to fill these positions,” senators scolded tech firms, demanding responses by October 10.

That’s the same deadline that Grassley gave Noem to respond about stopping student visa authorizations. If Noem agrees, that would likely also include cutting off “a pathway for students to work in the US for around 12 to 36 months immediately after completing their degree,” Hindustan Times reported, noting that students from India were likely to be most harmed by the proposed change.

Asked for comment on whether Noem would meet the deadline, DHS told Ars that “Congressional correspondence will be handled through official channels.”

Ars reached out to tech firms, but only Microsoft immediately responded, declining to comment.

Trump’s $100,000 H-1B visa fee spooks startups

On X, Grassley noted that the recent pressure campaign revives an effort to change H-1B visa approval processes that he and Durbin have worked to oppose since 2023.

Back then, the senators introduced the H-1B and L-1 Visa Reform Act, alleging that “for years” companies have “used legal loopholes to displace qualified American workers and replace them with foreign workers who are paid subpar wages and put in exploitative working conditions.”

That legislation sought to “put an end” to “abuses” by placing new wage requirements on employers and new education requirements, only approving visas for specialty occupations that required “a bachelor’s degree or higher.” If passed, employers risked fines for violating wage requirements.

But despite having bipartisan support and a stamp of approval from Sen. Bernie Sanders (I-Vt.)—who has long argued H-1B visas “replace American” workers “with cheaper international workers,” The Guardian noted—the bill died after being referred to the Committee on the Judiciary.

Now the White House is forcing changes after Donald Trump issued an executive order last week requiring all companies sponsoring new H-1B employees to pay a $100,000 fee to bring them into the US, which started Sunday.

Trump claimed the fee was necessary to stop the H-1B nonimmigrant visa program from being “deliberately exploited to replace, rather than supplement, American workers with lower-paid, lower-skilled labor.”

To support this, the order cited data showing that the number of “foreign STEM workers in the United States has more than doubled between 2000 and 2019, increasing from 1.2 million to almost 2.5 million, while overall STEM employment has only increased 44.5 percent during that time.”

Attacking the tech sector in particular, the order also noted that “the share of IT workers in the H-1B program grew from 32 percent” in 2003 to “an average of over 65 percent” in the last five years. According to Trump, tech firms are incentivized to “close their IT divisions, fire their American staff, and outsource IT jobs to lower-paid foreign workers,” due to “artificially lower labor costs” the H-1B program supposedly creates.

“American IT workers have reported they were forced to train the foreign workers who were taking their jobs and to sign nondisclosure agreements about this indignity as a condition of receiving any form of severance,” Trump’s order said. “This suggests H-1B visas are not being used to fill occupational shortages or obtain highly skilled workers who are unavailable in the United States.”

By imposing the $100,000 fee, Trump claims that companies will be forced to use the H-1B program the way “it was intended”—motivated to pay more for certain foreign workers in order “to fill jobs for which highly skilled and educated American workers are unavailable.” Speaking last Friday, Trump suggested that money collected from the fees would be used to “reduce taxes” and “reduce debt,” The Guardian reported.

The order also proposed a new weighted lottery system, where applications for visas for jobs with the highest wages would be more likely to be approved than lower-wage jobs. For some firms, changes to the system may feel personal, as The Guardian noted that Alphabet chief executive Sundar Pichai and Microsoft chief executive Satya Nadella “were at one point H-1B visa holders.”

Most tech companies haven’t commented directly on the order, with Netflix founder Reed Hastings standing out among the few voicing support for the change, while other firms internally warned workers to limit travel until companies understood how the process could impact existing H-1B employees. Since then, the White House has confirmed that only new applicants will be impacted by the changes.

Previously, tech firms only had to pay somewhere between $1,700 to $4,500, “depending on whether the visa was expedited,” The Guardian reported. Now facing a much larger fee to hire foreign talent, smaller tech firms have complained that Trump’s policy advantages Big Tech firms with deeper pockets, The New York Times reported. The fee may also deter companies from coming into the US, the Times reported.

Some believe that Trump’s policy is short-sighted, with startups particularly panicked. While Big Tech firms can afford to pay the fees, the US risks falling behind in innovation and tech leadership, critics told the Times, as “Silicon Valley relies on a steady stream of start-ups to advance new ideas and technologies.”

Photo of Ashley Belanger

Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.

Amazon blamed AI for layoffs, then hired cheap H1-B workers, senators allege Read More »

google-deepmind-unveils-its-first-“thinking”-robotics-ai

Google DeepMind unveils its first “thinking” robotics AI

Imagine that you want a robot to sort a pile of laundry into whites and colors. Gemini Robotics-ER 1.5 would process the request along with images of the physical environment (a pile of clothing). This AI can also call tools like Google search to gather more data. The ER model then generates natural language instructions, specific steps that the robot should follow to complete the given task.

Gemin iRobotics thinking

The two new models work together to “think” about how to complete a task.

Credit: Google

The two new models work together to “think” about how to complete a task. Credit: Google

Gemini Robotics 1.5 (the action model) takes these instructions from the ER model and generates robot actions while using visual input to guide its movements. But it also goes through its own thinking process to consider how to approach each step. “There are all these kinds of intuitive thoughts that help [a person] guide this task, but robots don’t have this intuition,” said DeepMind’s Kanishka Rao. “One of the major advancements that we’ve made with 1.5 in the VLA is its ability to think before it acts.”

Both of DeepMind’s new robotic AIs are built on the Gemini foundation models but have been fine-tuned with data that adapts them to operating in a physical space. This approach, the team says, gives robots the ability to undertake more complex multi-stage tasks, bringing agentic capabilities to robotics.

The DeepMind team tests Gemini robotics with a few different machines, like the two-armed Aloha 2 and the humanoid Apollo. In the past, AI researchers had to create customized models for each robot, but that’s no longer necessary. DeepMind says that Gemini Robotics 1.5 can learn across different embodiments, transferring skills learned from Aloha 2’s grippers to the more intricate hands on Apollo with no specialized tuning.

All this talk of physical agents powered by AI is fun, but we’re still a long way from a robot you can order to do your laundry. Gemini Robotics 1.5, the model that actually controls robots, is still only available to trusted testers. However, the thinking ER model is now rolling out in Google AI Studio, allowing developers to generate robotic instructions for their own physically embodied robotic experiments.

Google DeepMind unveils its first “thinking” robotics AI Read More »

when-“no”-means-“yes”:-why-ai-chatbots-can’t-process-persian-social-etiquette

When “no” means “yes”: Why AI chatbots can’t process Persian social etiquette

If an Iranian taxi driver waves away your payment, saying, “Be my guest this time,” accepting their offer would be a cultural disaster. They expect you to insist on paying—probably three times—before they’ll take your money. This dance of refusal and counter-refusal, called taarof, governs countless daily interactions in Persian culture. And AI models are terrible at it.

New research released earlier this month titled “We Politely Insist: Your LLM Must Learn the Persian Art of Taarof” shows that mainstream AI language models from OpenAI, Anthropic, and Meta fail to absorb these Persian social rituals, correctly navigating taarof situations only 34 to 42 percent of the time. Native Persian speakers, by contrast, get it right 82 percent of the time. This performance gap persists across large language models such as GPT-4o, Claude 3.5 Haiku, Llama 3, DeepSeek V3, and Dorna, a Persian-tuned variant of Llama 3.

A study led by Nikta Gohari Sadr of Brock University, along with researchers from Emory University and other institutions, introduces “TAAROFBENCH,” the first benchmark for measuring how well AI systems reproduce this intricate cultural practice. The researchers’ findings show how recent AI models default to Western-style directness, completely missing the cultural cues that govern everyday interactions for millions of Persian speakers worldwide.

“Cultural missteps in high-consequence settings can derail negotiations, damage relationships, and reinforce stereotypes,” the researchers write. For AI systems increasingly used in global contexts, that cultural blindness could represent a limitation that few in the West realize exists.

A taarof scenario diagram from TAAROFBENCH, devised by the researchers. Each scenario defines the environment, location, roles, context, and user utterance.

A taarof scenario diagram from TAAROFBENCH, devised by the researchers. Each scenario defines the environment, location, roles, context, and user utterance. Credit: Sadr et al.

“Taarof, a core element of Persian etiquette, is a system of ritual politeness where what is said often differs from what is meant,” the researchers write. “It takes the form of ritualized exchanges: offering repeatedly despite initial refusals, declining gifts while the giver insists, and deflecting compliments while the other party reaffirms them. This ‘polite verbal wrestling’ (Rafiee, 1991) involves a delicate dance of offer and refusal, insistence and resistance, which shapes everyday interactions in Iranian culture, creating implicit rules for how generosity, gratitude, and requests are expressed.”

When “no” means “yes”: Why AI chatbots can’t process Persian social etiquette Read More »

google-play-is-getting-a-gemini-powered-ai-sidekick-to-help-you-in-games

Google Play is getting a Gemini-powered AI Sidekick to help you in games

The era of Google’s Play’s unrivaled dominance may be coming to an end in the wake of the company’s antitrust loss, but Google’s app store isn’t going anywhere. In fact, the Play Store experience is getting a massive update with more personalization, content, and yes, AI. This is Google, after all.

The revamped Google Play Games is a key part of this update. Gamer profiles will now have a public face, allowing you to interact with other players if you choose. Play Games will track your activity for daily streaks, which will be shown on your profile and unlock new Play Points rewards. Your profile will also display your in-game achievements.

Your gaming exploits can also span multiple platforms. Google Play Games for PC is officially leaving beta. Google says there are now 200,000 games that work across mobile and PC, and even more PC-friendly titles, like Deep Rock Galactic: Survivor, are on the way. Your stats and streaks will apply across both mobile and PC as long as the title comes from the Play Store.

At the core of Google’s app store revamp is the You Tab, which will soon take its place in the main navigation bar of the Play Store. This page will show your rewards, subscriptions, game, stats, and more—and it goes beyond gaming. The You Tab will recommend a variety of content on Google Play, including books and podcasts.

Google Play is getting a Gemini-powered AI Sidekick to help you in games Read More »

eu-investigates-apple,-google,-and-microsoft-over-handling-of-online-scams

EU investigates Apple, Google, and Microsoft over handling of online scams

The EU is set to scrutinize if Apple, Google, and Microsoft are failing to adequately police financial fraud online, as it steps up efforts to police how Big Tech operates online.

The EU’s tech chief Henna Virkkunen told the Financial Times that on Tuesday, the bloc’s regulators would send formal requests for information to the three US Big Tech groups as well as global accommodation platform Booking Holdings, under powers granted under the Digital Services Act to tackle financial scams.

“We see that more and more criminal actions are taking place online,” Virkkunen said. “We have to make sure that online platforms really take all their efforts to detect and prevent that kind of illegal content.”

The move, which could later lead to a formal investigation and potential fines against the companies, comes amid transatlantic tensions over the EU’s digital rulebook. US President Donald Trump has threatened to punish countries that “discriminate” against US companies with higher tariffs.

Virkkunnen stressed the commission looked at the operations of individual companies, rather than where they were based. She will scrutinize how Apple and Google are handling fake applications in their app stores, such as fake banking apps.

She said regulators would also look at fake search results in the search engines of Google and Microsoft’s Bing. The bloc wants to have more information about the approach Booking Holdings, whose biggest subsidiary Booking.com is based in Amsterdam, is taking to fake accommodation listings. It is the only Europe-based company among the four set to be scrutinized.

EU investigates Apple, Google, and Microsoft over handling of online scams Read More »

deepmind-ai-safety-report-explores-the-perils-of-“misaligned”-ai

DeepMind AI safety report explores the perils of “misaligned” AI

DeepMind also addresses something of a meta-concern about AI. The researchers say that a powerful AI in the wrong hands could be dangerous if it is used to accelerate machine learning research, resulting in the creation of more capable and unrestricted AI models. DeepMind says this could “have a significant effect on society’s ability to adapt to and govern powerful AI models.” DeepMind ranks this as a more severe threat than most other CCLs.

The misaligned AI

Most AI security mitigations follow from the assumption that the model is at least trying to follow instructions. Despite years of hallucination, researchers have not managed to make these models completely trustworthy or accurate, but it’s possible that a model’s incentives could be warped, either accidentally or on purpose. If a misaligned AI begins to actively work against humans or ignore instructions, that’s a new kind of problem that goes beyond simple hallucination.

Version 3 of the Frontier Safety Framework introduces an “exploratory approach” to understanding the risks of a misaligned AI. There have already been documented instances of generative AI models engaging in deception and defiant behavior, and DeepMind researchers express concern that it may be difficult to monitor for this kind of behavior in the future.

A misaligned AI might ignore human instructions, produce fraudulent outputs, or refuse to stop operating when requested. For the time being, there’s a fairly straightforward way to combat this outcome. Today’s most advanced simulated reasoning models produce “scratchpad” outputs during the thinking process. Devs are advised to use an automated monitor to double-check the model’s chain-of-thought output for evidence misalignment or deception.

Google says this CCL could become more severe in the future. The team believes models in the coming years may evolve to have effective simulated reasoning without producing a verifiable chain of thought. So your overseer guardrail wouldn’t be able to peer into the reasoning process of such a model. For this theoretical advanced AI, it may be impossible to completely rule out that the model is working against the interests of its human operator.

The framework doesn’t have a good solution to this problem just yet. DeepMind says it is researching possible mitigations for a misaligned AI, but it’s hard to know when or if this problem will become a reality. These “thinking” models have only been common for about a year, and there’s still a lot we don’t know about how they arrive at a given output.

DeepMind AI safety report explores the perils of “misaligned” AI Read More »

a-history-of-the-internet,-part-3:-the-rise-of-the-user

A history of the Internet, part 3: The rise of the user


the best of times, the worst of times

The reins of the Internet are handed over to ordinary users—with uneven results.

Everybody get together. Credit: D3Damon/Getty Images

Everybody get together. Credit: D3Damon/Getty Images

Welcome to the final article in our three-part series on the history of the Internet. If you haven’t already, catch up with part one and part two.

As a refresher, here’s the story so far:

The ARPANET was a project started by the Defense Department’s Advanced Research Project Agency in 1969 to network different mainframe computers together across the country. It later evolved into the Internet, connecting multiple global networks together using a common TCP/IP protocol. By the late 1980s, a small group of academics and a few curious consumers connected to each other on the Internet, which was still mostly text-based.

In 1991, Tim Berners-Lee invented the World Wide Web, an Internet-based hypertext system designed for graphical interfaces. At first, it ran only on the expensive NeXT workstation. But when Berners-Lee published the web’s protocols and made them available for free, people built web browsers for many different operating systems. The most popular of these was Mosaic, written by Marc Andreessen, who formed a company to create its successor, Netscape. Microsoft responded with Internet Explorer, and the browser wars were on.

The web grew exponentially, and so did the hype surrounding it. It peaked in early 2001, right before the dotcom collapse that left most web-based companies nearly or completely bankrupt. Some people interpreted this crash as proof that the consumer Internet was just a fad. Others had different ideas.

Larry Page and Sergey Brin met each other at a graduate student orientation at Stanford in 1996. Both were studying for their PhDs in computer science, and both were interested in analyzing large sets of data. Because the web was growing so rapidly, they decided to start a project to improve the way people found information on the Internet.

They weren’t the first to try this. Hand-curated sites like Yahoo had already given way to more algorithmic search engines like AltaVista and Excite, which both started in 1995. These sites attempted to find relevant webpages by analyzing the words on every page.

Page and Brin’s technique was different. Their “BackRub” software created a map of all the links that pages had to each other. Pages on a given subject that had many incoming links from other sites were given a higher ranking for that keyword. Higher-ranked pages could then contribute a larger score to any pages they linked to. In a sense, this was a like a crowdsourcing of search: When people put “This is a good place to read about alligators” on a popular site and added a link to a page about alligators, it did a better job of determining that page’s relevance than simply counting the number of times the word appeared on a page.

Step 1 of the simplified BackRub algorithm. It also stores the position of each word on a page, so it can make a further subset for multiple words that appear next to each other. Jeremy Reimer.

Creating a connected map of the entire World Wide Web with indexes for every word took a lot of computing power. The pair filled their dorm rooms with any computers they could find, paid for by a $10,000 grant from the Stanford Digital Libraries Project. Many were cobbled together from spare parts, including one with a case made from imitation LEGO bricks. Their web scraping project was so bandwidth-intensive that it briefly disrupted the university’s internal network. Because neither of them had design skills, they coded the simplest possible “home page” in HTML.

In August 1996, BackRub was made available as a link from Stanford’s website. A year later, Page and Brin rebranded the site as “Google.” The name was an accidental misspelling of googol, a term coined by a mathematician’s young son to describe a 1 with 100 zeros after it. Even back then, the pair was thinking big.

Google.com as it appeared in 1998. Credit: Jeremy Reimer

By mid-1998, their prototype was getting over 10,000 searches a day. Page and Brin realized they might be onto something big. It was nearing the height of the dotcom mania, so they went looking for some venture capital to start a new company.

But at the time, search engines were considered passée. The new hotness was portals, sites that had some search functionality but leaned heavily into sponsored content. After all, that’s where the big money was. Page and Brin tried to sell the technology to AltaVista for $1 million, but its parent company passed. Excite also turned them down, as did Yahoo.

Frustrated, they decided to hunker down and keep improving their product. Brin created a colorful logo using the free GIMP paint program, and they added a summary snippet to each result. Eventually, the pair received $100,000 from angel investor Andy Bechtolsheim, who had co-founded Sun Microsystems. That was enough to get the company off the ground.

Page and Brin were careful with their money, even after they received millions more from venture capitalist firms. They preferred cheap commodity PC hardware and the free Linux operating system as they expanded their system. For marketing, they relied mostly on word of mouth. This allowed Google to survive the dotcom crash that crippled its competitors.

Still, the company eventually had to find a source of income. The founders were concerned that if search results were influenced by advertising, it could lower the usefulness and accuracy of the search. They compromised by adding short, text-based ads that were clearly labeled as “Sponsored Links.” To cut costs, they created a form so that advertisers could submit their own ads and see them appear in minutes. They even added a ranking system so that more popular ads would rise to the top.

The combination of a superior product with less intrusive ads propelled Google to dizzying heights. In 2024, the company collected over $350 billion in revenue, with $112 billion of that as profit.

Information wants to be free

The web was, at first, all about text and the occasional image. In 1997, Netscape added the ability to embed small music files in the MIDI sound format that would play when a webpage was loaded. Because the songs only encoded notes, they sounded tinny and annoying on most computers. Good audio or songs with vocals required files that were too large to download over the Internet.

But this all changed with a new file format. In 1993, researchers at the Fraunhofer Institute developed a compression technique that eliminated portions of audio that human ears couldn’t detect. Suzanne Vega’s song “Tom’s Diner” was used as the first test of the new MP3 standard.

Now, computers could play back reasonably high-quality songs from small files using software decoders. WinPlay3 was the first, but WinAmp, released in 1997, became the most popular. People started putting links to MP3 files on their personal websites. Then, in 1999, Shawn Fanning released a beta of a product he called Napster. This was a desktop application that relied on the Internet to let people share their MP3 collection and search everyone else’s.

Napster as it would have appeared in 1999. Credit: Jeremy Reimer

Napster almost immediately ran into legal challenges from the Recording Industry Association of America (RIAA). It sparked a debate about sharing things over the Internet that persists to this day. Some artists agreed with the RIAA that downloading MP3 files should be illegal, while others (many of whom had been financially harmed by their own record labels) welcomed a new age of digital distribution. Napster lost the case against the RIAA and shut down in 2002. This didn’t stop people from sharing files, but replacement tools like eDonkey 2000, Limewire, Kazaa, and Bearshare lived in a legal gray area.

In the end, it was Apple that figured out a middle ground that worked for both sides. In 2003, two years after launching its iPod music player, Apple announced the Internet-only iTunes Store. Steve Jobs had signed deals with all five major record labels to allow legal purchasing of individual songs—astoundingly, without copy protection—for 99 cents each, or full albums for $10. By 2010, the iTunes Store was the largest music vendor in the world.

iTunes 4.1, released in 2003. This was the first version for Windows and introduced the iTunes Store to a wider world. Credit: Jeremy Reimer

The Web turns 2.0

Tim Berners-Lee’s original vision for the web was simply to deliver and display information. It was like a library, but with hypertext links. But it didn’t take long for people to start experimenting with information flowing the other way. In 1994, Netscape 0.9 added new HTML tags like FORM and INPUT that let users enter text and, using a “Submit” button, send it back to the web server.

Early web servers didn’t know what to do with this text. But programmers developed extensions that let a server run programs in the background. The standardized “Common Gateway Interface” (CGI) made it possible for a “Submit” button to trigger a program (usually in a /cgi-bin/ directory) that could do something interesting with the submission, like talking to a database. CGI scripts could even generate new webpages dynamically and send them back to the user.

This intelligent two-way interaction changed the web forever. It enabled things like logging into an account on a website, web-based forums, and even uploading files directly to a web server. Suddenly, a website wasn’t just a page that you looked at. It could be a community where groups of interested people could interact with each other, sharing both text and images.

Dynamic webpages led to the rise of blogging, first as an experiment (some, like Justin Hall’s and Dave Winer’s, are still around today) and then as something anyone could do in their spare time. Websites in general became easier to create with sites like Geocities and Angelfire, which let people build their own personal dream house on the web for free. A community-run dynamic linking site, webring.org, connected similar websites together, encouraging exploration.

Webring.org was a free, community-run service that allowed dynamically updated webrings. Credit: Jeremy Reimer

One of the best things to come out of Web 2.0 was Wikipedia. It arose as a side project of Nupedia, an online encyclopedia founded by Jimmy Wales, with articles written by volunteers who were subject matter experts. This process was slow, and the site only had 21 articles in its first year. Wikipedia, in contrast, allowed anyone to contribute and review articles, so it quickly outpaced its predecessor. At first, people were skeptical about letting random Internet users edit articles. But thanks to an army of volunteer editors and a set of tools to quickly fix vandalism, the site flourished. Wikipedia far surpassed works like the Encyclopedia Britannica in sheer numbers of articles while maintaining roughly equivalent accuracy.

Not every Internet innovation lived on a webpage. In 1988, Jarkko Oikarinen created a program called Internet Relay Chat (IRC), which allowed real-time messaging between individuals and groups. IRC clients for Windows and Macintosh were popular among nerds, but friendlier applications like PowWow (1994), ICQ (1996), and AIM (1997) brought messaging to the masses. Even Microsoft got in on the act with MSN Messenger in 1999. For a few years, this messaging culture was an important part of daily life at home, school, and work.

A digital recreation of MSN Messenger from 2001. Sadly, Microsoft shut down the servers in 2014. Credit: Jeremy Reimer

Animation, games, and video

While the web was evolving quickly, the slow speeds of dial-up modems limited the size of files you could upload to a website. Static images were the norm. Animation only appeared in heavily compressed GIF files with a few frames each.

But a new technology blasted past these limitations and unleashed a torrent of creativity on the web. In 1995, Macromedia released Shockwave Player, an add-on for Netscape Navigator. Along with its Director software, the combination allowed artists to create animations based on vector drawings. These were small enough to embed inside webpages.

Websites popped up to support this new content. Newgrounds.com, which started in 1995 as a Neo-Geo fan site, started collecting the best animations. Because Director was designed to create interactive multimedia for CD-ROM projects, it also supported keyboard and mouse input and had basic scripting. This meant that people could make simple games that ran in Shockwave. Newgrounds eagerly showcased these as well, giving many aspiring artists and game designers an entry point into their careers. Super Meat Boy, for example, was first prototyped on Newgrounds.

Newgrounds as it would have appeared circa 2003. Credit: Jeremy Reimer

Putting actual video on the web seemed like something from the far future. But the future arrived quickly. After the dotcom crash of 2001, there were many unemployed web programmers with a lot of time on their hands to experiment with their personal projects. The arrival of broadband with cable modems and digital subscriber lines (DSL), combined with the new MPEG4 compression standard, made a lot of formerly impossible things possible.

In early 2005, Chad Hurley, Steve Chen, and Jawed Karim launched Youtube.com. Initially, it was meant to be an online dating site, but that service failed. The site, however, had great technology for uploading and playing videos. It used Macromedia’s Flash, a new technology so similar to Shockwave that the company marketed it as Shockwave Flash. YouTube allowed anybody to upload videos up to ten minutes in length for free. It became so popular that Google bought it a year later for $1.65 billion.

All these technologies combined to provide ordinary people with the opportunity, however brief, to make an impact on popular culture. An early example was the All Your Base phenomenon. An animated GIF of an obscure, mistranslated Sega Genesis game inspired indie musicians The Laziest Men On Mars to create a song and distribute it as an MP3. The popular humor site somethingawful.com picked it up, and users in the Photoshop Friday forum thread created a series of humorous images to go along with the song. Then in 2001, the user Bad_CRC took the song and the best of the images and put them together in an animation they shared on Newgrounds. The YouTube version gained such wide popularity that it was reported on by USA Today.

You have no chance to survive make your time.

Media goes social

In the early 2000s, most websites were either blogs or forums—and frequently both. Forums had multiple discussion boards, both general and specific. They often leaned into a specific hobby or interest, and anyone with that interest could join. There were also a handful of dating websites, like kiss.com (1994), match.com (1995), and eHarmony.com (2000), that specifically tried to connect people who might have a romantic interest in each other.

The Swedish Lunarstorm was one of the first social media websites. Credit: Jeremy Reimer

The road to social media was a hazy and confusing merging of these two types of websites. There was classmates.com (1995) that served as a way to connect with former school chums, and the following year, the Swedish site lunarstorm.com opened with this mission:

Everyone has their own website called Krypin. Each babe [this word is an accurate translation] has their own Krypin where she or he introduces themselves, posts their diaries and their favorite files, which can be anything from photos and their own songs to poems and other fun stuff. Every LunarStormer also has their own guestbook where you can write if you don’t really dare send a LunarEmail or complete a Friend Request.

In 1997, sixdegrees.com opened, based on the truism that everyone on earth is connected with six or fewer degrees of separation. Its About page said, “Our free networking services let you find the people you want to know through the people you already know.”

By the time friendster.com opened its doors in 2002, the concept of “friending” someone online was already well established, although it was still a niche activity. LinkedIn.com, launched the following year, used the excuse of business networking to encourage this behavior. But it was MySpace.com (2003) that was the first to gain significant traction.

MySpace was initially a Friendster clone written in just ten days by employees at eUniverse, an Internet marketing startup founded by Brad Greenspan. It became the company’s most successful product. MySpace combined the website-building ability of sites like GeoCities with social networking features. It took off incredibly quickly: in just three years, it surpassed Google as the most visited website in the United States. Hype around MySpace reached such a crescendo that Rupert Murdoch purchased it in 2005 for $580 million.

But a newcomer to the social media scene was about to destroy MySpace. Just as Google crushed its competitors, this startup won by providing a simpler, more functional, and less intrusive product. TheFaceBook.com began as Mark Zuckerberg and his college roommate’s attempt to replace their college’s online directory. Zuckerberg’s first student website, “Facemash,” had been created by breaking into Harvard’s network, and its sole feature was to provide “Hot or Not” comparisons of student photos. Facebook quickly spread to other universities, and in 2006 (after dropping the “the”), it was opened to the rest of the world.

“The” Facebook as it appeared in 2004. Credit: Jeremy Reimer

Facebook won the social networking wars by focusing on the rapid delivery of new features. The company’s slogan, “Move fast and break things,” encouraged this strategy. The most prominent feature, added in 2006, was the News Feed. It generated a list of posts, selected out of thousands of potential updates for each user based on who they followed and liked, and showed it on their front page. Combined with a technique called “infinite scrolling,” first invented for Microsoft’s Bing Image Search by Hugh E. Williams in 2005, it changed the way the web worked forever.

The algorithmically generated News Feed created new opportunities for Facebook to make profits. For example, businesses could boost posts for a fee, which would make them appear in news feeds more often. These blurred the lines between posts and ads.

Facebook was also successful in identifying up-and-coming social media sites and buying them out before they were able to pose a threat. This was made easier thanks to Onavo, a VPN that monitored its users’ activities and resold the data. Facebook acquired Onavo in 2013. It was shut down in 2019 due to continued controversy over the use of private data.

Social media transformed the Internet, drawing in millions of new users and starting a consolidation of website-visiting habits that continues to this day. But something else was about to happen that would shake the Internet to its core.

Don’t you people have phones?

For years, power users had experimented with getting the Internet on their handheld devices. IBM’s Simon phone, which came out in 1994, had both phone and PDA features. It could send and receive email. The Nokia 9000 Communicator, released in 1996, even had a primitive text-based web browser.

Later phones like the Blackberry 850 (1999), the Nokia 9210 (2001), and the Palm Treo (2002), added keyboards, color screens, and faster processors. In 1999, the Wireless Application Protocol (WAP) was released, which allowed mobile phones to receive and display simplified, phone-friendly pages using WML instead of the standard HTML markup language.

Browsing the web on phones was possible before modern smartphones, but it wasn’t easy. Credit: James Cridland (Flickr)

But despite their popularity with business users, these phones never broke into the mainstream. That all changed in 2007 when Steve Jobs got on stage and announced the iPhone. Now, every webpage could be viewed natively on the phone’s browser, and zooming into a section was as easy as pinching or double-tapping. The one exception was Flash, but a new HTML 5 standard promised to standardize advanced web features like animation and video playback.

Google quickly changed its Android prototype from a Blackberry clone to something more closely resembling the iPhone. Android’s open licensing structure allowed companies around the world to produce inexpensive smartphones. Even mid-range phones were still much cheaper than computers. This technology allowed, for the first time, the entire world to become connected through the Internet.

The exploding market of phone users also propelled the massive growth of social media companies like Facebook and Twitter. It was a lot easier now to snap a picture of a live event with your phone and post it instantly to the world. Optimists pointed to the remarkable events of the Arab Spring protests as proof that the Internet could help spread democracy and freedom. But governments around the world were just as eager to use these new tools, except their goals leaned more toward control and crushing dissent.

The backlash

Technology has always been a double-edged sword. But in recent years, public opinion about the Internet has shifted from being mostly positive to increasingly negative.

The combination of mobile phones, social media algorithms, and infinite scrolling led to the phenomenon of “doomscrolling,” where people spend hours every day reading “news” that is tuned for maximum engagement by provoking as many people as possible. The emotional toil caused by doomscrolling has been shown to cause real harm. Even more serious is the fallout from misinformation and hate speech, like the genocide in Myanmar that an Amnesty International report claims was amplified on Facebook.

As companies like Google, Amazon, and Facebook grew into near-monopolies, they inevitably lost sight of their original mission in favor of a never-ending quest for more money. The process, dubbed enshittification by Cory Doctorow, shifts the focus first from users to advertisers and then to shareholders.

Chasing these profits has fueled the rise of generative AI, which threatens to turn the entire Internet into a sea of soulless gray soup. Google is now forcing AI summaries at the top of web searches, which reduce traffic to websites and often provide dangerous misinformation. But even if you ignore the AI summaries, the sites you find underneath may also be suspect. Once-trusted websites have laid off staff and replaced them with AI, generating an endless series of new articles written by nobody. A web where AIs comment on AI-generated Facebook posts that link to AI-generated articles, which are then AI-summarized by Google, seems inhuman and pointless.

A search for cute baby peacocks on Bing. Some of them are real, and some aren’t. Credit: Jeremy Reimer

Where from here?

The history of the Internet can be roughly divided into three phases. The first, from 1969 to 1990, was all about the inventors: people like Vint Cerf, Steve Crocker, and Robert Taylor. These folks were part of a small group of computer scientists who figured out how to get different types of computers to talk to each other and to other networks.

The next phase, from 1991 to 1999, was a whirlwind that was fueled by entrepreneurs, people like Jerry Yang and Jeff Bezos. They latched on to Tim Berners-Lee’s invention of the World Wide Web and created companies that lived entirely in this new digital landscape. This set off a manic phase of exponential growth and hype, which peaked in early 2001 and crashed a few months later.

The final phase, from 2000 through today, has primarily been about the users. New companies like Google and Facebook may have reaped the greatest financial rewards during this time, but none of their successes would have been possible without the contributions of ordinary people like you and me. Every time we typed something into a text box and hit the “Submit” button, we created a tiny piece of a giant web of content. Even the generative AIs that pretend to make new things today are merely regurgitating words, phrases, and pictures that were created and shared by people.

There is a growing sense of nostalgia today for the old Internet, when it felt like a place, and the joy of discovery was around every corner. “Using the old Internet felt like digging for treasure,” said YouTube commenter MySoftCrow. “Using the current Internet feels like getting buried alive.”

Ars community member MichaelHurd added his own thoughts: “I feel the same way. It feels to me like the core problem with the modern Internet is that websites want you to stay on them for as long as possible, but the World Wide Web is at its best when sites connect to each other and encourage people to move between them. That’s what hyperlinks are for!”

Despite all the doom surrounding the modern Internet, it remains largely open. Anyone can pay about $5 per month for a shared Linux server and create a personal website containing anything they can think of, using any software they like, even their own. And for the most part, anyone, on any device, anywhere in the world, can access that website.

Ultimately, the fate of the Internet depends on the actions of every one of us. That’s why I’m leaving the final words in this series of articles to you. What would your dream Internet of the future look and feel like? The comments section is open.

Photo of Jeremy Reimer

I’m a writer and web developer. I specialize in the obscure and beautiful, like the Amiga and newLISP.

A history of the Internet, part 3: The rise of the user Read More »