“The amount of momentum from heavyweights in the tech industry is very much worth paying attention to,” said Caleb Henry, director of research at Quilty Space, in an interview. “If they start putting money behind it, we could see another transformation of what’s done in space.”
The essential function of a data center is to store, process, and transmit data. Historically, satellites have already done a lot of this, Henry said. Telecommunications satellites specialize in transmitting data. Imaging satellites store a lot of data and then dump it when they pass over ground stations. In recent years, onboard computers have gotten more sophisticated at processing data. Data centers in space could represent the next evolution of that.
Critics rightly note that it would require very large satellites with extensive solar panels to power data centers that rival ground-based infrastructure. However, SpaceX’s Starlink V3 satellites are unlike any previous space-based technology, Henry said.
A lot more capacity
SpaceX’s current Starlink V2 mini satellites have a maximum downlink capacity of approximately 100 Gbps. The V3 satellite is expected to increase this capacity by a factor of 10, to 1 Tbps. This is not unprecedented in satellite capacity, but it certainly is at scale.
For example, Viasat contracted with Boeing for the better part of a decade, spending hundreds of millions of dollars, to build Viasat-3, a geostationary satellite with a capacity of 1 Tbps. This single satellite may launch next week on an Atlas V rocket.
SpaceX plans to launch dozens of Starlink V3 satellites—Henry estimates the number is about 60—on each Starship rocket launch. Those launches could occur as soon as the first half of 2026, as SpaceX has already tested a satellite dispenser on its Starship vehicle.
“There’s nothing else in the rest of the satellite industry that comes close to that amount of capacity,” Henry said.
Exactly what “scaling up” Starlink V3 satellites might look like is not clear, but it doesn’t seem silly to expect it could happen. The very first operational Starlink satellites launched a little more than half a decade ago with a mass of about 300 kg and a capacity of 15 Gbps. Starlink V3 satellites will likely mass 1,500 kg.
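For a sense of scale, here is a rough back-of-the-envelope comparison using only the figures cited above: the per-satellite capacities, Henry’s estimate of roughly 60 V3 satellites per Starship launch, and Viasat-3’s 1 Tbps. These are estimates, not confirmed SpaceX specifications.

```python
# Back-of-the-envelope Starlink capacity comparison, using only the estimates
# quoted in this article (not official SpaceX specifications).
V1_CAPACITY_GBPS = 15        # early operational Starlink satellite
V2_MINI_CAPACITY_GBPS = 100  # current Starlink V2 mini
V3_CAPACITY_GBPS = 1_000     # expected Starlink V3 (1 Tbps)
SATS_PER_STARSHIP = 60       # Henry's rough estimate per Starship launch
VIASAT3_CAPACITY_GBPS = 1_000

print(f"V2 mini -> V3 growth: {V3_CAPACITY_GBPS / V2_MINI_CAPACITY_GBPS:.0f}x")
print(f"V1 -> V3 growth:      {V3_CAPACITY_GBPS / V1_CAPACITY_GBPS:.0f}x")

launch_capacity_gbps = SATS_PER_STARSHIP * V3_CAPACITY_GBPS
print(f"One Starship launch:  ~{launch_capacity_gbps / 1_000:.0f} Tbps")
print(f"Viasat-3 equivalents: ~{launch_capacity_gbps / VIASAT3_CAPACITY_GBPS:.0f}")
```

By that arithmetic, a single Starship launch would loft on the order of 60 Tbps of new downlink capacity, roughly 60 Viasat-3s’ worth.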
Partnerships and government contracts fuel optimism
At the GTC conference on Tuesday, Nvidia CEO Jensen Huang went out of his way to repeatedly praise Donald Trump and his policies for accelerating domestic tech investment while warning that excluding China from Nvidia’s ecosystem could limit US access to half the world’s AI developers. The overall event stressed Nvidia’s role as an American company, with Huang even nodding to Trump’s signature slogan in his sign-off by thanking the audience for “making America great again.”
Trump’s cooperation is paramount for Nvidia because US export controls have effectively blocked Nvidia’s AI chips from China, costing the company billions of dollars in revenue. Bob O’Donnell of TECHnalysis Research told Reuters that “Nvidia clearly brought their story to DC to both educate and gain favor with the US government. They managed to hit most of the hottest and most influential topics in tech.”
Beyond the political messaging, Huang announced a series of partnerships and deals that apparently helped ease investor concerns about Nvidia’s future. The company announced collaborations with Uber Technologies, Palantir Technologies, and CrowdStrike Holdings, among others. Nvidia also revealed a $1 billion investment in Nokia to support the telecommunications company’s shift toward AI and 6G networking.
The agreement with Uber will power a fleet of 100,000 self-driving vehicles with Nvidia technology, with automaker Stellantis among the first to deliver the robotaxis. Palantir will pair Nvidia’s technology with its Ontology platform to use AI techniques for logistics insights, with Lowe’s as an early adopter. Eli Lilly plans to build what Nvidia described as the most powerful supercomputer owned and operated by a pharmaceutical company, relying on more than 1,000 Blackwell AI accelerator chips.
The $5 trillion valuation surpasses the total cryptocurrency market value and equals roughly half the size of the pan-European Stoxx 600 equities index, Reuters notes. At current prices, Huang’s stake in Nvidia would be worth about $179.2 billion, making him the world’s eighth-richest person.
Trump’s plan was not welcomed by everyone. J.B. Branch, Big Tech accountability advocate for Public Citizen, in a statement provided to Ars, criticized Trump as giving “sweetheart deals” to tech companies that would cause “electricity bills to rise to subsidize discounted power for massive AI data centers.”
Infrastructure demands and energy requirements
Trump’s new AI plan tackles infrastructure head-on, stating that “AI is the first digital service in modern life that challenges America to build vastly greater energy generation than we have today.” To meet this demand, it proposes streamlining environmental permitting for data centers through new National Environmental Policy Act (NEPA) exemptions, making federal lands available for construction and modernizing the power grid—all while explicitly rejecting “radical climate dogma and bureaucratic red tape.”
The document embraces what it calls a “Build, Baby, Build!” approach—echoing a Trump campaign slogan—and promises to restore semiconductor manufacturing through the CHIPS Program Office, though stripped of “extraneous policy requirements.”
On the technology front, the plan directs Commerce to revise NIST’s AI Risk Management Framework to “eliminate references to misinformation, Diversity, Equity, and Inclusion, and climate change.” Federal procurement would favor AI developers whose systems are “objective and free from top-down ideological bias.” The document strongly backs open source AI models and calls for exporting American AI technology to allies while blocking administration-labeled adversaries like China.
Security proposals include high-security military data centers and warnings that advanced AI systems “may pose novel national security risks” in cyberattacks and weapons development.
Critics respond with “People’s AI Action Plan”
Before the White House unveiled its plan, more than 90 organizations launched a competing “People’s AI Action Plan” on Tuesday, characterizing the Trump administration’s approach as “a massive handout to the tech industry” that prioritizes corporate interests over public welfare. The coalition includes labor unions, environmental justice groups, and consumer protection nonprofits.
“Decisions around where data centers get built have shifted dramatically over the last six months, with access to power now playing the most significant role in location scouting,” Joshi said. “The grid can’t keep pace with AI demands, so the industry is taking control with onsite power generation.”
Soluna, like other data center developers looking to rely on renewable energy, buys excess power that wind, hydro, and solar plants can’t sell to the grid. By the end of the year, Soluna will have three facilities totaling 123 megawatts of capacity in Kentucky and Texas and seven projects in the works with upwards of 800 total megawatts.
Belizaire and I talked about how in Texas, where I report from, there’s plenty of curtailed energy from wind and solar farms because of the region’s limited transmission capacity. In West Texas, far from major load centers like Dallas and Houston, other data center developers are also taking advantage of that unused wind energy by co-locating their giant warehouses full of advanced computers and high-powered cooling systems with the excess generation.
One data center developer using curtailed renewable power in Texas is IREN. The firm owns and operates facilities optimized for Bitcoin mining and AI. It developed a 7.5-gigawatt facility in Childress and broke ground on a 1.4-gigawatt data center in Sweetwater.
IREN purchases power through the state grid’s wholesale market during periods of oversupply, said Kent Draper, the company’s chief commercial officer, and reduces its consumption when prices are high. It’s able to do that by turning off its computers and minimizing power demand from its data centers.
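A simplified sketch of that kind of price-responsive operation is below. This illustrates the general approach, not IREN’s actual control system; the price thresholds and load figures are invented for the example.

```python
# Hypothetical sketch of price-responsive load curtailment, loosely modeled on
# the behavior described above: buy wholesale power when it is cheap (oversupply),
# ramp compute down when prices spike. Thresholds and loads are invented.

def target_load_mw(wholesale_price_per_mwh: float,
                   full_load_mw: float = 100.0,
                   curtail_above: float = 60.0,
                   shutoff_above: float = 200.0) -> float:
    """Return how much data center load to run at a given wholesale price."""
    if wholesale_price_per_mwh >= shutoff_above:
        return 0.0                      # prices spiking: turn computers off
    if wholesale_price_per_mwh >= curtail_above:
        # linearly back off between the curtailment and shutoff thresholds
        span = shutoff_above - curtail_above
        frac = (shutoff_above - wholesale_price_per_mwh) / span
        return full_load_mw * frac
    return full_load_mw                 # cheap (often curtailed) power: run flat out

for price in (5, 40, 100, 250):        # $/MWh scenarios
    print(f"${price:>3}/MWh -> run {target_load_mw(price):5.1f} MW")
```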
But curtailment is an issue all over the world, Belizaire said, from Oklahoma, North Dakota, South Dakota, California, and Arizona in the US, to Northern Ireland, Germany, Portugal, and Australia.
“Anywhere where you have large utility-scale renewable development that’s been built out, you’re going to find it,” Belizaire said.
In a March analysis, the US Energy Information Administration reported that solar and wind power curtailments are increasing in California. In 2024, the grid operator for most of California curtailed 3.4 million megawatt hours of utility-scale wind and solar output, a 29 percent increase from the amount of electricity curtailed in 2023.
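Those percentages also pin down the prior year’s figure; a quick derivation from the numbers in the EIA analysis cited above, not an additional data point:

```python
# Derive the approximate 2023 curtailment implied by the EIA figures above:
# 2024 curtailment of 3.4 million MWh was a 29 percent increase over 2023.
curtailed_2024_mwh = 3_400_000
increase = 0.29

curtailed_2023_mwh = curtailed_2024_mwh / (1 + increase)
print(f"Implied 2023 curtailment: ~{curtailed_2023_mwh / 1e6:.1f} million MWh")
# -> roughly 2.6 million MWh
```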
Inside the laptop/desktop examination bay at SK TES’s Fredericksburg, Va. site. Credit: SK TES
The details of each unit—CPU, memory, HDD size—are taken down and added to the asset tag, and the device is sent on to be physically examined. This step is important because “many a concealed drive finds its way into this line,” Kent Green, manager of this site, told me. Inside the machines coming from big firms, there are sometimes little USB, SD, SATA, or M.2 drives hiding out. Some were make-do solutions installed by IT and not documented, and others were put there by employees tired of waiting for more storage. “Some managers have been pretty surprised when they learn what we found,” Green said.
With everything wiped and with some sense of what they’re made of, each device gets a rating. It’s a three-character system, like “A-3-6,” based on function, cosmetic condition, and component value. Based on that rating, along with needs, trends, and other data, each device goes to wholesale, retail, component harvesting, or scrap.
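A small sketch of how a three-character grade like that might be modeled in software appears below. The letter and number scales and the routing thresholds are assumptions for illustration; SK TES has not published its internal criteria.

```python
# Illustrative model of a three-character grade like "A-3-6":
# function grade, cosmetic condition, and component value.
# The scales and routing thresholds below are invented for the example.
from dataclasses import dataclass

@dataclass
class DeviceGrade:
    function: str        # e.g. "A" (fully working) .. "D" (parts only)
    cosmetic: int        # e.g. 1 (like new) .. 9 (heavily worn)
    component_value: int # e.g. 1 (low resale value) .. 9 (high)

    @classmethod
    def parse(cls, code: str) -> "DeviceGrade":
        fn, cos, val = code.split("-")
        return cls(fn, int(cos), int(val))

    def channel(self) -> str:
        """Route a graded device, roughly mirroring the options described above."""
        if self.function == "A" and self.cosmetic <= 3:
            return "retail"
        if self.function in ("A", "B"):
            return "wholesale"
        if self.component_value >= 5:
            return "component harvesting"
        return "scrap"

print(DeviceGrade.parse("A-3-6").channel())  # -> retail
```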
Full-body laptop skins
Wiping down and prepping a laptop, potentially for a full-cover adhesive skin. Credit: SK TES
If a device has retail value, it heads into a section of this giant facility where workers do further checks. Automated software plays sounds on the speakers, checks that every keyboard key is sending signals, and checks that laptop batteries are at 80 percent capacity or better. At the end of the line is my favorite discovery: full-body laptop skins.
Some laptops—certain Lenovo, Dell, and HP models—are so ubiquitous in corporate fleets that it’s worth buying an adhesive laminating sticker in their exact shape. They’re an uncanny match for the matte black, silver, and slightly less silver finishes of the laptops, covering up any blemishes and scratches. Watching one of the workers apply this made me jealous of their ability to essentially reset a laptop’s condition (so one could apply whole new layers of swag stickers, of course). Once rated, tested, and stickered, laptops go into a clever “cradle” box, get the UN 3481 “battery inside” sticker, and can be sold through retail.
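The battery test in that automated check suite is the easiest one to approximate yourself. On a Linux machine, the kernel’s power_supply sysfs interface exposes a pack’s design capacity and current full-charge capacity, and the ratio is the health figure. Here is a minimal sketch; the 80 percent threshold is the only detail taken from SK TES’s process, whose actual tooling isn’t public.

```python
# Minimal sketch of a laptop battery health check: pass if the battery still
# holds at least 80 percent of its design capacity. Reads the standard Linux
# sysfs power_supply interface; SK TES's real test suite is not public.
from pathlib import Path

def battery_health(bat: str = "BAT0") -> float:
    base = Path("/sys/class/power_supply") / bat
    # Some platforms expose charge_* (uAh), others energy_* (uWh); try both.
    for prefix in ("charge", "energy"):
        full = base / f"{prefix}_full"
        design = base / f"{prefix}_full_design"
        if full.exists() and design.exists():
            return int(full.read_text()) / int(design.read_text())
    raise FileNotFoundError(f"No capacity data for {bat}")

if __name__ == "__main__":
    health = battery_health()
    verdict = "PASS" if health >= 0.80 else "FAIL"
    print(f"Battery health: {health:.0%} of design capacity -> {verdict}")
```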
Data centers powering the generative AI boom are gulping water and exhausting electricity at what some researchers view as an unsustainable pace. Two entrepreneurs who met in high school a few years ago want to overcome that crunch with a fresh experiment: sinking the cloud into the sea.
Sam Mendel and Eric Kim launched their company, NetworkOcean, out of startup accelerator Y Combinator on August 15 by announcing plans to dunk a small capsule filled with GPU servers into San Francisco Bay within a month. “There’s this vital opportunity to build more efficient computer infrastructure that we’re gonna rely on for decades to come,” Mendel says.
The founders contend that moving data centers off land would slow ocean temperature rise by drawing less power and letting seawater cool the capsule’s shell, supplementing its internal cooling system. NetworkOcean’s founders have said a location in the bay would deliver fast processing speeds for the region’s buzzing AI economy.
But scientists who study the hundreds of square miles of brackish water say even the slightest heat or disturbance from NetworkOcean’s submersible could trigger toxic algae blooms and harm wildlife. And WIRED inquiries to several California and US agencies that oversee the bay found that NetworkOcean has been pursuing its initial test of an underwater data center without having sought, much less received, any permits from key regulators.
The outreach by WIRED prompted at least two agencies—the Bay Conservation and Development Commission and the San Francisco Regional Water Quality Control Board—to email NetworkOcean that testing without permits could run afoul of laws, according to public records and spokespeople for the agencies. Fines from the BCDC can run up to hundreds of thousands of dollars.
The nascent technology has already been in hot water in California. In 2016, the state’s coastal commission issued a previously unreported notice to Microsoft saying that the tech giant had violated the law the year before by plunging an unpermitted server vessel into San Luis Obispo Bay, about 250 miles south of San Francisco. The months-long test, part of what was known as Project Natick, had ended without apparent environmental harm by the time the agency learned of it, so officials decided not to fine Microsoft, according to the notice seen by WIRED.
The renewed scrutiny of underwater data centers has surfaced an increasingly common tension between innovative efforts to combat global climate change and long-standing environmental laws. Permitting takes months, if not years, and can cost millions of dollars, potentially impeding progress. Advocates of the laws argue that the process allows for time and input to better weigh trade-offs.
“Things are overregulated because people often don’t do the right thing,” says Thomas Mumley, recently retired assistant executive officer of the bay water board. “You give an inch, they take a mile. We have to be cautious.”
Over the last two weeks, including during an interview at the WIRED office, NetworkOcean’s founders have provided driblets of details about their evolving plans. Their current intention is to test their underwater vessel for about an hour, just below the surface of what Mendel would only describe as a privately owned and operated portion of the bay that he says is not subject to regulatory oversight. He insists that a permit is not required based on the location, design, and minimal impact. “We have been told by our potential testing site that our setup is environmentally benign,” Mendel says.
Mumley, the retired regulator, calls the assertion about not needing a permit “absurd.” Both Bella Castrodale, the BCDC’s lead enforcement attorney, and Keith Lichten, a water board division manager, say private sites and a quick dip in the bay aren’t exempt from permitting. Several other experts in bay rules tell WIRED that even if some quirk does preclude oversight, they believe NetworkOcean is sending a poor message to the public by not coordinating with regulators.
“Just because these centers would be out of sight does not mean they are not a major disturbance,” says Jon Rosenfield, science director at San Francisco Baykeeper, a nonprofit that investigates industrial polluters.
School project
Mendel and Kim say they tried to develop an underwater renewable energy device together during high school in Southern California before moving on to non-nautical pursuits. Mendel, 23, dropped out of college in 2022 and founded a platform for social media influencers.
About a year ago, he built a small web server using the DIY system Raspberry Pi to host another personal project, and temporarily floated the equipment in San Francisco Bay by attaching it to a buoy from a private boat in the Sausalito area. (Mendel declined to answer questions about permits.) After talking with Kim, also 23, about this experiment, the two decided to move in together and start NetworkOcean.
Their pitch is that underwater data centers are more affordable to develop and maintain, especially as electricity shortages limit sites on land. Surrounding a tank of hot servers with water naturally helps cool them, avoiding the massive resource drain of air-conditioning and also improving on the similar benefits of floating data centers. Developers of offshore wind farms are eager to electrify NetworkOcean vessels, Mendel says.
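The cooling claim rests on simple heat-transfer arithmetic: water’s heat capacity is high enough that a modest flow can absorb a rack-scale heat load with only a small temperature rise. Here is a rough worked example; the heat load and allowed temperature rise are assumed values for illustration, not NetworkOcean figures.

```python
# Rough heat-transfer estimate: how much seawater flow carries away a given
# server heat load for a chosen temperature rise. Inputs are assumptions,
# not NetworkOcean specifications.
HEAT_LOAD_KW = 500            # assumed IT load of the capsule, in kW
DELTA_T_C = 5.0               # allowed seawater temperature rise, in deg C
CP_WATER = 4.0                # specific heat of seawater, ~4.0 kJ/(kg*degC)

# Q = m_dot * c_p * dT  ->  m_dot = Q / (c_p * dT)
mass_flow_kg_s = HEAT_LOAD_KW / (CP_WATER * DELTA_T_C)   # kg/s
volume_flow_l_s = mass_flow_kg_s / 1.025                 # seawater ~1.025 kg/L

print(f"Required flow: ~{mass_flow_kg_s:.0f} kg/s (~{volume_flow_l_s:.0f} L/s)")
# -> about 25 kg/s of seawater for a 500 kW load and a 5 degC rise
```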
AMD has agreed to buy artificial intelligence infrastructure group ZT Systems in a $4.9 billion cash and stock transaction, extending a run of AI investments by the chip company as it seeks to challenge market leader Nvidia.
The California-based group said the acquisition would help accelerate the adoption of its Instinct line of AI data center chips, which compete with Nvidia’s popular graphics processing units (GPUs).
ZT Systems, a private company founded three decades ago, builds custom computing infrastructure for the biggest AI “hyperscalers.” While the company does not disclose its customers, the hyperscalers include the likes of Microsoft, Meta, and Amazon.
The deal marks AMD’s biggest acquisition since it bought Xilinx for $35 billion in 2022.
“It brings a thousand world-class design engineers into our team, it allows us to develop silicon and systems in parallel and, most importantly, get the newest AI infrastructure up and running in data centers as fast as possible,” AMD’s chief executive Lisa Su told the Financial Times.
“It really helps us deploy our technology much faster because this is what our customers are telling us [they need],” Su added.
The transaction is expected to close in the first half of 2025, subject to regulatory approval, after which New Jersey-based ZT Systems will be folded into AMD’s data center business group. The $4.9 billion valuation includes up to $400 million contingent on “certain post-closing milestones.”
Citi and Latham & Watkins are advising AMD, while ZT Systems has retained Goldman Sachs and Paul, Weiss.
The move comes as AMD seeks to break Nvidia’s stranglehold on the AI data center chip market, which earlier this year saw Nvidia temporarily become the world’s most valuable company as big tech companies pour billions of dollars into its chips to train and deploy powerful new AI models.
Part of Nvidia’s success stems from its “systems” approach to the AI chip market, offering end-to-end computing infrastructure that includes pre-packaged server racks, networking equipment, and software tools to make it easier for developers to build AI applications on its chips.
AMD’s acquisition shows the chipmaker building out its own “systems” offering. The company rolled out its MI300 line of AI chips last year, and says it will launch its next-generation MI350 chip in 2025 to compete with Nvidia’s new Blackwell line of GPUs.
In May, Microsoft was one of the first AI hyperscalers to adopt the MI300, building it into its Azure cloud platform to run AI models such as OpenAI’s GPT-4. AMD’s quarterly revenue for the chips surpassed $1 billion for the first time in the three months to June 30.
But while AMD has feted the MI300 as its fastest-ever product ramp, its data center revenue still represented a fraction of the $22.6 billion that Nvidia’s data center business raked in for the quarter to the end of April.
In March, ZT Systems announced a partnership with Nvidia to build custom AI infrastructure using its Blackwell chips. “I think we certainly believe ZT as part of AMD will significantly accelerate the adoption of AMD AI solutions,” Su said, but “we have customer commitments and we are certainly going to honour those.”
Su added that she expected regulators’ review of the deal to focus on the US and Europe.
In addition to increasing its research and development spending, AMD says it has invested more than $1 billion over the past year to expand its AI hardware and software ecosystem.
In July the company announced it was acquiring Finnish AI startup Silo AI for $665 million, the largest acquisition of a privately held AI startup in Europe in a decade.
This article was produced for ProPublica’s Local Reporting Network in partnership with The Seattle Times.
When lawmakers in Washington set out to expand a lucrative tax break for the state’s data center industry in 2022, they included what some considered an essential provision: a study of the energy-hungry industry’s impact on the state’s electrical grid.
Gov. Jay Inslee vetoed that provision but let the tax break expansion go forward. As The Seattle Times and ProPublica recently reported, the industry has continued to grow and now threatens Washington’s effort to eliminate carbon emissions from electricity generation.
Washington’s experience with addressing the power demand of data centers parallels the struggles playing out in other states around the country where the industry has rapidly grown and tax breaks are a factor.
Virginia, home to the nation’s largest data center market, once debated running data centers on carbon-emitting diesel generators during power shortages to keep the lights on in the area. (That plan faced significant public pushback from environmental groups, and an area utility is exploring other options.)
Dominion Energy, the utility that serves most of Virginia’s data centers, has said that it intends to meet state requirements to decarbonize the grid by 2045, but that the task would be more challenging with rising demands driven largely by data centers, Inside Climate News reported. The utility also has indicated that new natural gas plants will be needed.
Some Virginia lawmakers and the state’s Republican governor have proposed reversing or dramatically altering the clean energy goals.
A northern Virginia lawmaker instead proposed attaching strings to the state’s data center tax break. This year, he introduced legislation saying data centers would only qualify if they maximized energy efficiency and found renewable resources. The bill died in Virginia’s General Assembly. But the state authorized a study of the industry and how tax breaks impact the grid.
“If we’re going to have data centers, which we all know to be huge consumers of electricity, let’s require them to be as efficient as possible,” said state Delegate Richard “Rip” Sullivan Jr., the Democrat who sponsored the original bill. “Let’s require them to use as little energy as possible to do their job.”
Inslee’s 2022 veto of a study similar to Virginia’s cited the fact that Northwest power planners already include data centers in their estimates of regional demand. But supporters of the legislation said their goal was to obtain more precise answers about Washington-specific electricity needs.
Georgia lawmakers this year passed a bill to halt the state’s data center tax break until data center power use could be analyzed. In the meantime, according to media reports, the state’s largest utility said it would use fossil fuels to make up an energy shortfall caused in part by data centers. Georgia Gov. Brian Kemp then vetoed the tax break pause in May.
Lawmakers in Connecticut and South Carolina have also debated policies to tackle data center power usage in the past year.
“Maybe we want to entice more of them to come. I just want to make sure that we understand the pros and the cons of that before we do it,” South Carolina’s Senate Majority Leader Shane Massey said in May, according to the South Carolina Daily Gazette.
Countries such as Ireland, Singapore, and the Netherlands have at times forced data centers to halt construction to limit strains on the power grid, according to a report by the nonprofit Tony Blair Institute for Global Change. The report’s recommendations for addressing data center power usage include encouraging the private sector to invest directly in renewables.
Sajjad Moazeni, a University of Washington professor who studies artificial intelligence and data center power consumption, said states should consider electricity impacts when formulating data center legislation. Moazeni’s recent research found that in just one day, ChatGPT, a popular artificial intelligence tool, used roughly as much power as 33,000 U.S. households use in a year.
“A policy can help both push companies to make these data centers more efficient and preserve a cleaner, better environment for us,” Moazeni said. “Policymakers need to consider a larger set of metrics on power usage and efficiency.”
Eli Sanders contributed research while a student with the Technology, Law and Public Policy Clinic at the University of Washington School of Law.
Cooling pipes at a Google data center in Douglas County, Georgia.
Google’s greenhouse gas emissions have surged 48 percent in the past five years due to the expansion of its data centers that underpin artificial intelligence systems, leaving its commitment to get to “net zero” by 2030 in doubt.
The Silicon Valley company’s pollution amounted to 14.3 million tonnes of carbon dioxide equivalent in 2023, a 48 percent increase from its 2019 baseline and a 13 percent rise since last year, Google said in its annual environmental report on Tuesday.
Google said the jump highlighted “the challenge of reducing emissions” at the same time as it invests in the build-out of large language models and their associated applications and infrastructure, admitting that “the future environmental impact of AI” was “complex and difficult to predict.”
Chief Sustainability Officer Kate Brandt said the company remained committed to the 2030 target but stressed the “extremely ambitious” nature of the goal.
“We do still expect our emissions to continue to rise before dropping towards our goal,” said Brandt.
She added that Google was “working very hard” on reducing its emissions, including by signing deals for clean energy. There was also a “tremendous opportunity for climate solutions that are enabled by AI,” said Brandt.
As Big Tech giants including Google, Amazon, and Microsoft have outlined plans to invest tens of billions of dollars into AI, climate experts have raised concerns about the environmental impacts of the power-intensive tools and systems.
In May, Microsoft admitted that its emissions had risen by almost a third since 2020, in large part due to the construction of data centers. However, Microsoft co-founder Bill Gates last week also argued that AI would help propel climate solutions.
Meanwhile, energy generation and transmission constraints are already posing a challenge for the companies seeking to build out the new technology. Analysts at Bernstein said in June that AI would “double the rate of US electricity demand growth and total consumption could outstrip current supply in the next two years.”
In Tuesday’s report, Google said its 2023 energy-related emissions—which come primarily from data center electricity consumption—rose 37 percent year on year and overall represented a quarter of its total greenhouse gas emissions.
Google’s supply chain emissions—its largest chunk, representing 75 percent of its total emissions—also rose 8 percent. Google said they would “continue to rise in the near term,” in part as a result of the build-out of the infrastructure needed to run AI systems.
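Combining the percentages in the report gives rough absolute figures. This is just arithmetic on the numbers quoted above, rounded; Google’s report contains the exact breakdown.

```python
# Rough breakdown derived from the figures quoted above (all values in
# million tonnes of CO2 equivalent, rounded; the report has exact numbers).
total_2023 = 14.3

baseline_2019 = total_2023 / 1.48     # 48% above the 2019 baseline
total_2022 = total_2023 / 1.13        # 13% above the prior year

energy_related = total_2023 * 0.25    # "a quarter" of total emissions
supply_chain = total_2023 * 0.75      # 75% of total emissions

print(f"Implied 2019 baseline:   ~{baseline_2019:.1f} Mt CO2e")
print(f"Implied 2022 total:      ~{total_2022:.1f} Mt CO2e")
print(f"Energy-related (2023):   ~{energy_related:.1f} Mt CO2e")
print(f"Supply chain (2023):     ~{supply_chain:.1f} Mt CO2e")
```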
Google has pledged to achieve net zero across its direct and indirect greenhouse gas emissions by 2030 and to run on carbon-free energy during every hour of every day within each grid it operates by the same date.
However, the company warned in Tuesday’s report that the “termination” of some clean energy projects during 2023 had pushed down the amount of renewables it had access to.
Meanwhile, the company’s data center electricity consumption had “outpaced” Google’s ability to bring more clean power projects online in the US and Asia-Pacific regions.
Google’s data center electricity consumption increased 17 percent in 2023, and amounted to approximately 7-10 percent of global data center electricity consumption, the company estimated. Its data centers also consumed 17 percent more water in 2023 than during the previous year, Google said.
A Google sign in front of the new Google Cloud data center in Hanau, Hesse, at its opening in October 2023.
On Wednesday, authorities arrested former Google software engineer Linwei Ding in Newark, California, on charges of stealing AI trade secrets from the company. The US Department of Justice alleges that Ding, a Chinese national, committed the theft while secretly working with two China-based companies.
According to the indictment, Ding, who was hired by Google in 2019 and had access to confidential information about the company’s data centers, began uploading hundreds of files into a personal Google Cloud account two years ago.
The trade secrets Ding allegedly copied contained “detailed information about the architecture and functionality of GPU and TPU chips and systems, the software that allows the chips to communicate and execute tasks, and the software that orchestrates thousands of chips into a supercomputer capable of executing at the cutting edge of machine learning and AI technology,” according to the indictment.
Shortly after the alleged theft began, Ding was offered the position of chief technology officer at an early-stage technology company in China that touted its use of AI technology. The company offered him a monthly salary of about $14,800, plus an annual bonus and company stock. Ding reportedly traveled to China, participated in investor meetings, and sought to raise capital for the company.
Investigators reviewed surveillance camera footage that showed another employee scanning Ding’s name badge at the entrance of the building where Ding worked at Google, making it appear that Ding was working from his office when he was actually traveling.
Ding also founded and served as the chief executive of a separate China-based startup company that aspired to train “large AI models powered by supercomputing chips,” according to the indictment. Prosecutors say Ding did not disclose either affiliation to Google, which described him as a junior employee. He resigned from Google on December 26 of last year.
The FBI served a search warrant at Ding’s home in January, seizing his electronic devices and later executing an additional warrant for the contents of his personal accounts. Authorities found more than 500 unique files of confidential information that Ding allegedly stole from Google. The indictment says that Ding copied the files into the Apple Notes application on his Google-issued Apple MacBook, then converted the Apple Notes into PDF files and uploaded them to an external account to evade detection.
“We have strict safeguards to prevent the theft of our confidential commercial information and trade secrets,” Google spokesperson José Castañeda told Ars Technica. “After an investigation, we found that this employee stole numerous documents, and we quickly referred the case to law enforcement. We are grateful to the FBI for helping protect our information and will continue cooperating with them closely.”
Attorney General Merrick Garland announced the case against the 38-year-old at an American Bar Association conference in San Francisco. Ding faces four counts of federal trade secret theft, each carrying a potential sentence of up to 10 years in prison.