While AI bubble talk fills the air these days, with fears of overinvestment that could pop at any time, something of a contradiction is brewing on the ground: Companies like Google and OpenAI can barely build infrastructure fast enough to fill their AI needs.
During an all-hands meeting earlier this month, Google’s AI infrastructure head Amin Vahdat told employees that the company must double its serving capacity every six months to meet demand for artificial intelligence services, reports CNBC. Vahdat, a vice president at Google Cloud, presented slides showing the company needs to scale “the next 1000x in 4-5 years.”
While a thousandfold increase in compute capacity sounds ambitious by itself, Vahdat noted some key constraints: Google needs to be able to deliver this increase in capability, compute, and storage networking “for essentially the same cost and increasingly, the same power, the same energy level,” he told employees during the meeting. “It won’t be easy but through collaboration and co-design, we’re going to get there.”
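The two figures Vahdat cited are consistent with each other, as a quick back-of-the-envelope check shows (my own arithmetic, not from Google's slides, assuming simple doubling every six months):

```python
# If serving capacity doubles every six months, that is two doublings per year.
years = 5
doublings = years * 2          # one doubling per six-month period
growth = 2 ** doublings        # total capacity multiplier after five years
print(growth)                  # 1024, i.e. roughly the "next 1000x"
```

In other words, a sustained six-month doubling cadence compounds to about a thousandfold in five years, which is why the two targets amount to the same claim.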
It’s unclear how much of the “demand” Google cites represents organic user interest in AI capabilities versus the company integrating AI features into existing services like Search, Gmail, and Workspace. But whether users have adopted those features voluntarily or not, Google isn’t the only tech company struggling to keep up with growing demand for AI services.
Major tech companies are in a race to build out data centers. Google competitor OpenAI is planning to build six massive data centers across the US through its Stargate partnership project with SoftBank and Oracle, committing over $400 billion in the next three years to reach nearly 7 gigawatts of capacity. The company faces similar constraints serving its 800 million weekly ChatGPT users, with even paid subscribers regularly hitting usage limits for features like video synthesis and simulated reasoning models.
“The competition in AI infrastructure is the most critical and also the most expensive part of the AI race,” Vahdat said at the meeting, according to CNBC’s viewing of the presentation. The infrastructure executive explained that Google’s challenge goes beyond simply outspending competitors. “We’re going to spend a lot,” he said, but noted the real objective is building infrastructure that is “more reliable, more performant and more scalable than what’s available anywhere else.”
Alphabet’s recent market performance has been driven by investor confidence in the company’s ability to compete with OpenAI’s ChatGPT, as well as its development of specialized AI chips that can compete with Nvidia’s. Nvidia recently reached a world-first $5 trillion valuation on the strength of its GPUs, which accelerate the matrix math at the heart of AI computations.
Despite acknowledging that no company would be immune to a potential AI bubble burst, Google CEO Sundar Pichai argued that Google’s unique position gives it an advantage. He told the BBC that the company owns what he called a “full stack” of technologies, from chips to YouTube data to models and frontier science research. This integrated approach, he suggested, would help the company weather any market turbulence better than competitors.
Pichai also told the BBC that people should not “blindly trust” everything AI tools output. The company has faced repeated concerns about the accuracy of some of its AI models. Pichai said that while AI tools are helpful “if you want to creatively write something,” people “have to learn to use these tools for what they’re good at and not blindly trust everything they say.”
In the BBC interview, the Google boss also addressed the “immense” energy needs of AI, acknowledging that the intensive energy requirements of expanding AI ventures have caused slippage on Alphabet’s climate targets. However, Pichai insisted that the company still wants to achieve net zero by 2030 through investments in new energy technologies. “The rate at which we were hoping to make progress will be impacted,” Pichai said, warning that constraining an economy based on energy “will have consequences.”
Even with the warnings about a potential AI bubble, Pichai did not miss his chance to promote the technology, albeit with a hint of danger regarding its widespread impact. Pichai described AI as “the most profound technology” humankind has worked on.
“We will have to work through societal disruptions,” he said, adding that the technology would “create new opportunities” and “evolve and transition certain jobs.” He said people who adapt to AI tools “will do better” in their professions, whatever field they work in.
Share valuations based on past earnings have also reached their highest levels since the dotcom bubble 25 years ago, though the Bank of England noted they appear less extreme when based on investors’ expectations for future profits. “This, when combined with increasing concentration within market indices, leaves equity markets particularly exposed should expectations around the impact of AI become less optimistic,” the central bank said.
Toil and trouble?
The dotcom bubble offers a potentially instructive parallel to our current era. In the late 1990s, investors poured money into Internet companies based on the promise of a transformed economy, seemingly ignoring whether individual businesses had viable paths to profitability. Between 1995 and March 2000, the Nasdaq index rose 600 percent. When sentiment shifted, the correction was severe: the Nasdaq fell 78 percent from its peak, reaching a low point in October 2002.
Whether we’ll see the same thing or worse if an AI bubble pops is mere speculation at this point. But similarly to the early 2000s, the question about today’s market isn’t necessarily about the utility of AI tools themselves (the Internet was useful, after all, despite the bubble), but whether the amount of money being poured into the companies that sell them is out of proportion with the potential profits those tools might bring.
We don’t have a crystal ball to determine when such a bubble might pop, or even if it is guaranteed to do so, but we’ll likely continue to see more warning signs ahead if AI-related deals continue to grow larger and larger over time.
This month, Waymo filed a pair of lawsuits, neither of which has been previously reported, that demand hundreds of thousands of dollars in damages from two alleged vandals. Waymo attorneys said in court papers that the alleged vandalism, which ruined dozens of tires and a tail end, poses a significant threat to the company’s reputation. Riding in a vehicle in which the steering wheel swivels on its own can be scary enough. Having to worry about attackers allegedly targeting the rides could undermine Waymo’s ride-hailing business before it even gets past its earliest stage.
Waymo, which falls under the umbrella of Google parent Alphabet, operates a ride-hailing service in San Francisco, Phoenix, and Los Angeles that is comparable to Uber and Lyft except with sensors and software controlling the driving. While its cars haven’t contributed to any known deadly crashes, US regulators continue to probe their sometimes-erratic driving. Waymo spokesperson Sandy Karp says the company always prioritizes safety and that the lawsuits reflect that strategy. She declined further comment for this story.
In a filing last week in the California Superior Court of San Francisco County, Waymo sued a Tesla Model 3 driver whom it alleges intentionally rear-ended one of its autonomous Jaguar crossovers. According to the suit, the driver, Konstantine Nikka-Sher Piterman, claimed in a post on X that “Waymo just rekt me” before going on to ask Tesla CEO Elon Musk for a job. The other lawsuit from this month, filed in the same court, targets Ronaile Burton, who allegedly slashed the tires of at least 19 Waymo vehicles. San Francisco prosecutors have filed criminal charges against her to which she has pleaded not guilty. A hearing is scheduled for Tuesday.
Burton’s public defender, Adam Birka-White, says in a statement that Burton “is someone in need of help and not jail” and that prosecutors continue “to prioritize punishing poor people at the behest of corporations, in this case involving a tech company that is under federal investigation for creating dangerous conditions on our streets.”
An attorney for Burton in the civil case hasn’t been named in court records, and Burton, who is currently in jail, couldn’t be reached for comment. Piterman didn’t respond to a voicemail, a LinkedIn message, or emails seeking comment. He hasn’t responded in court to the accusations.
Based on available records from courts in San Francisco and Phoenix, it appears that Waymo hasn’t previously filed similar lawsuits.
In the Tesla case, Piterman “unlawfully, maliciously, and intentionally” sped his car past a stop sign and into a Waymo car in San Francisco on March 19, according to the company’s suit. When the Waymo tried to pull over, Piterman allegedly drove the Tesla into the Waymo car again. He then allegedly entered the Waymo and later threatened a Waymo representative who responded to the scene in person. San Francisco police cited Piterman, according to the lawsuit. The police didn’t respond to WIRED’s request for comment.
Google’s parent company, Alphabet, is in talks to buy cybersecurity start-up Wiz for about $23 billion, in what would be the largest acquisition in the tech group’s history, according to people familiar with the matter.
Alphabet’s discussions to acquire Wiz are still weeks away from completion, said one person with direct knowledge of the matter, while people briefed about the transaction said there was still a chance the deal would fall apart, with a number of details still needing to be addressed in talks.
If a deal were to be reached it would be a test case for antitrust regulators, which in recent years have been cracking down on tech groups buying out emerging companies in the sector. Alphabet’s last big deal came more than a decade ago with the $12.5 billion acquisition of Motorola Mobility.
The acquisition of Wiz would mark a further big push into cybersecurity for Alphabet, two years after it acquired Mandiant for $5.4 billion.
New York-headquartered Wiz has raised about $2 billion from investors since its founding four years ago, according to data provider PitchBook. The start-up, led by Israeli founder and former Microsoft executive Assaf Rappaport, was most recently valued at $12 billion. Its backers include venture capital firms Sequoia and Thrive.
Wiz, which counts multinational groups including Salesforce, Mars, and BMW as customers, helps companies secure programs in the cloud. That has led to a surge in revenue as corporations increasingly operate their software and store data online—Wiz has said it has hit about $350 million in annual recurring revenue, a metric often used by software start-ups.
A deal would be among the largest acquisitions of a company backed by venture capital.
Wiz declined to comment on the talks, which were first reported by The Wall Street Journal. Google did not immediately respond to a request for comment.
On Friday, the US Department of Homeland Security announced the formation of an Artificial Intelligence Safety and Security Board that consists of 22 members pulled from the tech industry, government, academia, and civil rights organizations. But given the nebulous nature of the term “AI,” which can apply to a broad spectrum of computer technology, it’s unclear if this group will even be able to agree on what exactly they are safeguarding us from.
President Biden directed DHS Secretary Alejandro Mayorkas to establish the board, which will meet for the first time in early May and subsequently on a quarterly basis.
The fundamental assumption underlying the board’s existence, and reflected in Biden’s AI executive order from October, is that AI is an inherently risky technology and that American citizens and businesses need to be protected from its misuse. Along those lines, the goal of the group is to help guard against foreign adversaries using AI to disrupt US infrastructure; develop recommendations to ensure the safe adoption of AI tech into transportation, energy, and Internet services; foster cross-sector collaboration between government and businesses; and create a forum where AI leaders can share information on AI security risks with the DHS.
It’s worth noting that the ill-defined nature of the term “Artificial Intelligence” does the new board no favors regarding scope and focus. AI can mean many different things: It can power a chatbot, fly an airplane, control the ghosts in Pac-Man, regulate the temperature of a nuclear reactor, or play a great game of chess. It can be all those things and more, and since many of those applications of AI work very differently, there’s no guarantee any two people on the board will be thinking about the same type of AI.
This confusion is reflected in the quotes provided by the DHS press release from new board members, some of whom are already talking about different types of AI. While OpenAI, Microsoft, and Anthropic are monetizing generative AI systems like ChatGPT based on large language models (LLMs), Ed Bastian, the CEO of Delta Air Lines, refers to entirely different classes of machine learning when he says, “By driving innovative tools like crew resourcing and turbulence prediction, AI is already making significant contributions to the reliability of our nation’s air travel system.”
So, defining the scope of what AI exactly means—and which applications of AI are new or dangerous—might be one of the key challenges for the new board.
A roundtable of Big Tech CEOs attracts criticism
For the inaugural meeting of the AI Safety and Security Board, the DHS selected a tech industry-heavy group, populated with CEOs of four major AI vendors (Sam Altman of OpenAI, Satya Nadella of Microsoft, Sundar Pichai of Alphabet, and Dario Amodei of Anthropic), CEO Jensen Huang of top AI chipmaker Nvidia, and representatives from other major tech companies like IBM, Adobe, Amazon, Cisco, and AMD. There are also reps from big aerospace and aviation: Northrop Grumman and Delta Air Lines.
Upon reading the announcement, some critics took issue with the board’s composition. On LinkedIn, Timnit Gebru, founder of the Distributed AI Research Institute (DAIR), particularly criticized OpenAI’s presence on the board and wrote, “I’ve now seen the full list and it is hilarious. Foxes guarding the hen house is an understatement.”
EU Commissioner for Internal Market Thierry Breton talks to media about non-compliance investigations against Google, Apple, and Meta under the Digital Markets Act (DMA).
Not even three weeks after the European Union’s Digital Markets Act (DMA) took effect, the European Commission (EC) announced Monday that it is already probing three out of six gatekeepers—Apple, Google, and Meta—for suspected non-compliance.
Apple will need to prove that changes to its app store, and options letting users easily swap out default settings, are sufficient to comply with the DMA.
Similarly, Google’s app store rules will be probed, as well as any potentially shady practices unfairly preferencing its own services—like Google Shopping and Hotels—in search results.
Finally, Meta’s “Subscription for No Ads” option—allowing Facebook and Instagram users to opt out of personalized ad targeting for a monthly fee—may not fly under the DMA. Even if Meta follows through on its recent offer to slash these fees by nearly 50 percent, the model could be deemed non-compliant.
“The DMA is very clear: gatekeepers must obtain users’ consent to use their personal data across different services,” the EC’s commissioner for internal market, Thierry Breton, said Monday. “And this consent must be free!”
In total, the EC announced five investigations: two against Apple, two against Google, and one against Meta.
“We suspect that the suggested solutions put forward by the three companies do not fully comply with the DMA,” antitrust chief Margrethe Vestager said, ordering companies to “retain certain documents” viewed as critical to assessing evidence in the probe.
The EC’s investigations are expected to conclude within one year. If tech companies are found non-compliant, they risk fines of up to 10 percent of total worldwide turnover. Any repeat violations could spike fines to 20 percent.
“Moreover, in case of systematic infringements, the Commission may also adopt additional remedies, such as obliging a gatekeeper to sell a business or parts of it or banning the gatekeeper from acquisitions of additional services related to the systemic non-compliance,” the EC’s announcement said.
“These are the cases where we already have concrete evidence of possible non-compliance,” Breton said. “And this in less than 20 days of DMA implementation. But our monitoring and investigative work of course doesn’t stop here. We may have to open other non-compliance cases soon.”
Google and Apple have both issued statements defending their current plans for DMA compliance.
“To comply with the Digital Markets Act, we have made significant changes to the way our services operate in Europe,” Google’s competition director Oliver Bethell told Ars, promising to “continue to defend our approach in the coming months.”
“We’re confident our plan complies with the DMA, and we’ll continue to constructively engage with the European Commission as they conduct their investigations,” Apple’s spokesperson told Ars. “Teams across Apple have created a wide range of new developer capabilities, features, and tools to comply with the regulation. At the same time, we’ve introduced protections to help reduce new risks to the privacy, quality, and security of our EU users’ experience. Throughout, we’ve demonstrated flexibility and responsiveness to the European Commission and developers, listening and incorporating their feedback.”
A Meta spokesperson told Ars that Meta “designed Subscription for No Ads to address several overlapping regulatory obligations, including the DMA,” promising to comply with the DMA while arguing that “subscriptions as an alternative to advertising are a well-established business model across many industries.”
The EC’s announcement came after all designated gatekeepers were required to submit DMA compliance reports, and after the EC scheduled public workshops to discuss DMA compliance. Those workshops, which conclude tomorrow with Microsoft, appear to have partly driven the EC’s decision to probe Apple, Google, and Meta.
“Stakeholders provided feedback on the compliance solutions offered,” Vestager said. “Their feedback tells us that certain compliance measures fail to achieve their objectives and fall short of expectations.”
Apple and Google app stores probed
Under the DMA, “gatekeepers can no longer prevent their business users from informing their users within the app about cheaper options outside the gatekeeper’s ecosystem,” Vestager said. “That is called anti-steering and is now forbidden by law.”
Stakeholders told the EC that Apple’s and Google’s fee structures appear to “go against” the DMA’s “free of charge” requirement, Vestager said, because companies “still charge various recurring fees and still limit steering.”
This feedback pushed the EC to launch its first two probes under the DMA against Apple and Google.
“We will investigate to what extent these fees and limitations defeat the purpose of the anti-steering provision and by that, limit consumer choice,” Vestager said.
These probes aren’t the end of Apple’s potential app store woes in the EU, either. Breton said that the EC has “many questions on Apple’s new business model” for the app store. These include “questions on the process that Apple used for granting and terminating membership of” its developer program, following a scandal where Epic Games’ account was briefly terminated.
“We also have questions on the fee structure and several other aspects of the business model,” Breton said, vowing to “check if they allow for real opportunities for app developers in line with the letter and the spirit of the DMA.”