NVIDIA


Your 10-year-old graphics card can run Dragon Age: The Veilguard

Still kicking —

2014’s Nvidia GTX 970 is still a “minimum requirements” workhorse.

At this rate, it might be the only graphics card you’ll ever need?

When Dragon Age: Inquisition came out nearly 10 years ago, PC players could have invested $329 (~$435 in today’s dollars) in a brand-new GTX 970 graphics card to make the game look as good as possible on their high-end gaming rig. Surprisingly enough, that very same 2014 graphics card will still be able to run the follow-up, Dragon Age: The Veilguard (previously known as Dreadwolf), when it launches on October 31. On the AMD side, an even older Radeon R9 290X purchased back in 2013 will be able to run the game.

Veilguard’s minimum specs are just the latest to show the workmanlike endurance of the humble GTX 970, which is currently available used on Newegg for as low as $140. Relatively recent big-budget PC releases like Baldur’s Gate 3 and Call of Duty: Modern Warfare 3 both use the old card (or its less powerful sibling, the GTX 960) as their “minimum requirement” benchmark.

Not every big-budget PC game these days is so forgiving with its minimum specs, though. When Cyberpunk 2077 and Doom: Eternal launched in 2020, they both asked players to be sporting at least a GTX 1060, which had come out around four years prior.

For a bit of context, the GTX 970 was used as the “recommended” baseline spec for the mid-range “Oculus Ready” PCs needed to power the then-new Rift VR headset when it launched in 2016. Today, a $500 Meta Quest 3 headset gives you much better graphical performance in a self-contained portable package, no gaming PC required.

Veilguard players sticking with a GTX 970 shouldn’t expect to get the best graphical experience, of course. EA suggests an RTX 2070 (circa 2018) or a Radeon RX 5700 XT (circa 2019) to run the game at “recommended” specs. And you’ll need at least 16 GB of RAM and 100 GB of storage space.

Since work on Veilguard began in earnest in 2015, the game has suffered a string of high-profile staff departures: Creative Director Mike Laidlaw left in 2017; Executive Producer Mark Darrah and BioWare General Manager Casey Hudson left in late 2020; Senior Creative Director Matt Goldman left in late 2021; replacement Executive Producer Christian Daley left in early 2022; and producer Mac Walters left in early 2023.

The full requirements for Dragon Age: The Veilguard are as follows.

Minimum Requirements

OS: Windows 10/11 64-bit

Processor: Intel Core i5-8400 / AMD Ryzen 3 3300X (see notes)

Memory: 16GB

Graphics: Nvidia GTX 970/1650 / AMD Radeon R9 290X

DirectX: Version 12

Storage: 100GB available space

Additional Notes: SSD preferred, HDD supported; AMD CPUs on Windows 11 require AGESA V2 1.2.0.7

Recommended Requirements

OS: Windows 10/11 64-bit

Processor: Intel Core i9-9900K / AMD Ryzen 7 3700X (see notes)

Memory: 16GB

Graphics: Nvidia RTX 2070 / AMD Radeon RX 5700 XT

DirectX: Version 12

Storage: 100GB SSD available space

Additional Notes: SSD required; AMD CPUs on Windows 11 require AGESA V2 1.2.0.7



US probes Nvidia’s acquisition of Israeli AI startup

“monopoly choke points” —

Justice Department has increased scrutiny of the chipmaker’s power in the emerging sector.


Getty Images

The US Department of Justice is investigating Nvidia’s acquisition of Run:ai, an Israeli artificial intelligence startup, for potential antitrust violations, said a person familiar with discussions the government agency has had with third parties.

The DoJ has asked market participants about the competitive impact of the transaction, which Nvidia announced in April. The price was not disclosed but a report from TechCrunch estimated it at $700 million.

The scope of the probe remains unclear, the person said. But the DoJ has inquired about matters including whether the deal could quash emerging competition in the up-and-coming sector and entrench Nvidia’s dominant market position.

Nvidia on Thursday said the company “wins on merit” and “scrupulously adher[es] to all laws.”

“We’ll continue to support aspiring innovators in every industry and market and are happy to provide any information regulators need,” it added.

Run:ai did not immediately respond to a request for comment. The DoJ declined to comment.

The investigation comes as US regulators and enforcers have heightened scrutiny of anti-competitive behavior in AI, particularly where it dovetails with big tech groups such as Nvidia.

Jonathan Kanter, head of the DoJ’s antitrust division, told the Financial Times in June that he was examining “monopoly choke points” in areas including the data used to train large language models as well as access to essential hardware such as graphics processing unit chips. He added that the GPUs needed to train LLMs had become a “scarce resource.”

Nvidia dominates sales of the most advanced GPUs. Run:ai, which had an existing collaboration with the tech giant, has developed a platform that optimizes the use of GPUs.

As part of the probe, which was first reported by Politico, the DoJ is seeking information on how Nvidia decides the allocation of its chips, the person said.

Government lawyers are also inquiring about Nvidia’s software platform, Cuda, which enables chips originally designed for graphics to speed up AI applications and is seen by industry figures as one of Nvidia’s most critical tools.

The DoJ and the US Federal Trade Commission, a competition regulator, in June reached an agreement that divided antitrust oversight of critical AI players. The DoJ will spearhead probes into Nvidia, while the FTC will oversee the assessment of Microsoft and OpenAI, the startup behind ChatGPT.

© 2024 The Financial Times Ltd. All rights reserved. Please do not copy and paste FT articles and redistribute by email or post to the web.



AI’s future in grave danger from Nvidia’s chokehold on chips, groups warn

Controlling “the world’s computing destiny” —

Anti-monopoly groups want DOJ to probe Nvidia’s AI chip bundling, alleged price-fixing.


Sen. Elizabeth Warren (D-Mass.) has joined progressive groups—including Demand Progress, Open Markets Institute, and the Tech Oversight Project—pressuring the US Department of Justice to investigate Nvidia’s dominance in the AI chip market due to alleged antitrust concerns, Reuters reported.

In a letter to the DOJ’s chief antitrust enforcer, Jonathan Kanter, groups demanding more Big Tech oversight raised alarms that Nvidia’s top rivals apparently “are struggling to gain traction” because “Nvidia’s near-absolute dominance of the market is difficult to counter” and “funders are wary of backing its rivals.”

Nvidia is currently “the world’s most valuable public company,” their letter said, worth more than $3 trillion after taking near-total control of the high-performance AI chip market. Particularly “astonishing,” the letter said, was Nvidia’s dominance in the market for GPU accelerator chips, which are at the heart of today’s leading AI. Groups urged Kanter to probe Nvidia’s business practices to ensure that rivals aren’t permanently blocked from competing.

According to the advocacy groups that strongly oppose Big Tech monopolies, Nvidia “now holds an 80 percent overall global market share in GPU chips and a 98 percent share in the data center market.” This “puts it in a position to crowd out competitors and set global pricing and the terms of trade,” the letter warned.

Earlier this year, inside sources reported that the DOJ and the Federal Trade Commission reached a deal where the DOJ would probe Nvidia’s alleged anti-competitive behavior in the booming AI industry, and the FTC would probe OpenAI and Microsoft. But there has been no official Nvidia probe announced, prompting progressive groups to push harder for the DOJ to recognize what they view as a “dire danger to the open market” that “well deserves DOJ scrutiny.”

Ultimately, the advocacy groups told Kanter that they fear Nvidia wielding “control over the world’s computing destiny,” noting that Nvidia’s cloud computing data centers don’t just power “Big Tech’s consumer products” but also “underpin every aspect of contemporary society, including the financial system, logistics, healthcare, and defense.”

They claimed that Nvidia is “leveraging” its “scarce chips” to force customers to buy its “chips, networking, and programming software as a package.” Such bundling and “price-fixing,” their letter warned, appear to be “the same kinds of anti-competitive tactics that the courts, in response to actions brought by the Department of Justice against other companies, have found to be illegal” and could perhaps “stifle innovation.”

Although data from TechInsights suggested that Nvidia’s chip shortage and cost actually helped companies like AMD and Intel sell chips in 2023, both Nvidia rivals reported losses in market share earlier this year, Yahoo Finance reported.

Perhaps most closely monitoring Nvidia’s dominance, French antitrust authorities launched an investigation into Nvidia last month over antitrust concerns, the letter said, “making it the first enforcer to act against the computer chip maker,” Reuters reported.

Since then, the European Union and the United Kingdom, as well as the US, have heightened scrutiny, but their seeming lag to follow through with an official investigation may only embolden Nvidia, as the company allegedly “believes its market behavior is above the law,” the progressive groups wrote. Suspicious behavior includes allegations that “Nvidia has continued to sell chips to Chinese customers and provide them computing access” despite a “Department of Commerce ban on trading with Chinese companies due to national security and human rights concerns.”

“Its chips have been confirmed to be reaching blacklisted Chinese entities,” their letter warned, citing a Wall Street Journal report.

Nvidia’s dominance apparently impacts everyone involved with AI. According to the letter, Nvidia seemingly “determining who receives inventory from a limited supply, setting premium pricing, and contractually blocking customers from doing business with competitors” is “alarming” the entire AI industry. That includes “both small companies (who find their supply choked off) and the Big Tech AI giants.”

Kanter will likely be receptive to the letter. In June, Fast Company reported that Kanter told an audience at an AI conference that there are “structures and trends in AI that should give us pause.” He further suggested that any technology that “relies on massive amounts of data and computing power” can “give already dominant firms a substantial advantage,” according to Fast Company’s summary of his remarks.



Elon Musk claims he is training “the world’s most powerful AI by every metric”

the biggest, most powerful —

One snag: xAI might not have the electrical power contracts to do it.

Elon Musk, chief executive officer of Tesla Inc., during a fireside discussion on artificial intelligence risks with Rishi Sunak, UK prime minister, in London, UK, on Thursday, Nov. 2, 2023.

On Monday, Elon Musk announced the start of training for what he calls “the world’s most powerful AI training cluster” at xAI’s new supercomputer facility in Memphis, Tennessee. The billionaire entrepreneur and CEO of multiple tech companies took to X (formerly Twitter) to share that the so-called “Memphis Supercluster” began operations at approximately 4:20 am local time that day.

Musk’s xAI team, in collaboration with X and Nvidia, launched the supercomputer cluster featuring 100,000 liquid-cooled H100 GPUs on a single RDMA fabric. This setup, according to Musk, gives xAI “a significant advantage in training the world’s most powerful AI by every metric by December this year.”

Given issues with xAI’s Grok chatbot throughout the year, skeptics would be justified in questioning whether those claims will match reality, especially given Musk’s tendency for grandiose, off-the-cuff remarks on the social media platform he runs.

Power issues

According to a report by News Channel 3 WREG Memphis, the startup of the massive AI training facility marks a milestone for the city. WREG reports that xAI’s investment represents the largest capital investment by a new company in Memphis’s history. However, the project has raised questions among local residents and officials about its impact on the area’s power grid and infrastructure.

WREG reports that Doug McGowen, president of Memphis Light, Gas and Water (MLGW), previously stated that xAI could consume up to 150 megawatts of power at peak times. This substantial power requirement has prompted discussions with the Tennessee Valley Authority (TVA) regarding the project’s electricity demands and connection to the power system.

The TVA told the local news station, “TVA does not have a contract in place with xAI. We are working with xAI and our partners at MLGW on the details of the proposal and electricity demand needs.”

The local news outlet confirms that MLGW has stated that xAI moved into an existing building with already existing utility services, but the full extent of the company’s power usage and its potential effects on local utilities remain unclear. To address community concerns, WREG reports that MLGW plans to host public forums in the coming days to provide more information about the project and its implications for the city.

For now, Tom’s Hardware reports that Musk is side-stepping power issues by installing a fleet of 14 VoltaGrid natural gas generators that provide supplementary power to the Memphis computer cluster while his company works out an agreement with the local power utility.

As training at the Memphis Supercluster gets underway, all eyes are on xAI and Musk’s ambitious goal of developing the world’s most powerful AI by the end of the year (by which metric, we are uncertain), given the competitive landscape in AI at the moment between OpenAI/Microsoft, Amazon, Apple, Anthropic, and Google. If such an AI model emerges from xAI, we’ll be ready to write about it.

This article was updated on July 24, 2024 at 1:11 pm to mention Musk installing natural gas generators onsite in Memphis.



The next Nvidia driver makes even more GPUs “open,” in a specific, quirky way

You know open when you see it —

You can’t see inside the firmware, but more open code can translate it for you.


Getty Images

You have to read the headline on Nvidia’s latest GPU announcement slowly, parsing each clause as it arrives.

“Nvidia transitions fully” sounds like real commitment, a burn-the-boats call. “Towards open-source GPU,” yes, evoking the company’s “first step” announcement a little over two years ago, so this must be progress, right? But, back up a word here, then finish: “GPU kernel modules.”

So, Nvidia has “achieved equivalent or better application performance with our open-source GPU kernel modules,” and added some new capabilities to them. And now most of Nvidia’s modern GPUs will default to using open source GPU kernel modules, starting with driver release R560, with dual GPL and MIT licensing. But Nvidia has moved most of its proprietary functionality into a closed-source firmware blob. The parts of Nvidia’s GPUs that interact with the broader Linux system are open, but the user-space drivers and firmware are none of your or the OSS community’s business.

Is it better than what existed before? Certainly. AMD and Intel have maintained open source GPU drivers, in both the kernel and user space, for years, though also with proprietary firmware. This brings Nvidia a bit closer to the Linux community and allows for community debugging and contribution. There’s no indication that Nvidia aims to go further with its open source moves, however, and its modules remain outside the main kernel, packaged up for users to install themselves.

Not all GPUs will be able to use the open source modules: chips from the Maxwell, Pascal, and Volta lines must stick with the proprietary driver; GPUs from the Turing, Ampere, Ada Lovelace, and Hopper architectures are recommended to switch to the open bits; and Grace Hopper and Blackwell units must do so.

As noted by Hector Martin, a developer on the Asahi Linux distribution, at the time of the first announcement, this shift makes it easier to sandbox closed-source code while using Nvidia hardware. But the net amount of closed-off code is about the same as before.

Nvidia’s blog post has details on how to integrate its open kernel modules onto various systems, including CUDA setups.
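One practical way to see which flavor of module a Linux system is running: the open kernel modules report a dual MIT/GPL license string, while the legacy proprietary module reports an Nvidia license. A minimal sketch of checking this via `modinfo` (the exact license strings and field output are assumptions based on typical driver packaging; this obviously only returns a live answer on a machine with the Nvidia driver installed):

```python
import subprocess

def module_flavor(license_str: str) -> str:
    """Classify an Nvidia kernel module by its modinfo license string."""
    if "MIT" in license_str or "GPL" in license_str:
        return "open kernel modules"
    return "proprietary kernel module"

def installed_flavor() -> str:
    """Query the loaded nvidia module's license via modinfo (Linux only)."""
    out = subprocess.run(
        ["modinfo", "-F", "license", "nvidia"],
        capture_output=True, text=True, check=True,
    )
    return module_flavor(out.stdout.strip())

if __name__ == "__main__":
    # Example license strings one might see from modinfo:
    print(module_flavor("Dual MIT/GPL"))  # open kernel modules
    print(module_flavor("NVIDIA"))        # proprietary kernel module
```

The classification helper is kept pure so it can be exercised without Nvidia hardware; `installed_flavor()` is the part that actually touches the system.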



In bid to loosen Nvidia’s grip on AI, AMD to buy Finnish startup for $665M

AI tech stack —

The acquisition is the largest of its kind in Europe in a decade.


AMD is to buy Finnish artificial intelligence startup Silo AI for $665 million in one of the largest such takeovers in Europe as the US chipmaker seeks to expand its AI services to compete with market leader Nvidia.

California-based AMD said Silo’s 300-member team would use its software tools to build custom large language models (LLMs), the kind of AI technology that underpins chatbots such as OpenAI’s ChatGPT and Google’s Gemini. The all-cash acquisition is expected to close in the second half of this year, subject to regulatory approval.

“This agreement helps us both accelerate our customer engagements and deployments while also helping us accelerate our own AI tech stack,” Vamsi Boppana, senior vice president of AMD’s artificial intelligence group, told the Financial Times.

The acquisition is the largest of a privately held AI startup in Europe since Google acquired UK-based DeepMind for around 400 million pounds in 2014, according to data from Dealroom.

The deal comes at a time when buyouts by Silicon Valley companies have come under tougher scrutiny from regulators in Brussels and the UK. Europe-based AI startups, including Mistral, DeepL, and Helsing, have raised hundreds of millions of dollars this year as investors seek out a local champion to rival US-based OpenAI and Anthropic.

Helsinki-based Silo AI, which is among the largest private AI labs in Europe, offers tailored AI models and platforms to enterprise customers. The Finnish company launched an initiative last year to build LLMs in European languages, including Swedish, Icelandic, and Danish.

AMD’s AI technology competes with that of Nvidia, which has taken the lion’s share of the high-performance chip market. Nvidia’s success has propelled its valuation past $3 trillion this year as tech companies push to build the computing infrastructure needed to power the biggest AI models. AMD started to roll out its MI300 chips late last year in a direct challenge to Nvidia’s “Hopper” line of chips.

Peter Sarlin, Silo AI co-founder and chief executive, called the acquisition the “logical next step” as the Finnish group seeks to become a “flagship” AI company.

Silo AI is committed to “open source” AI models, which are available for free and can be customized by anyone. This distinguishes it from the likes of OpenAI and Google, which favor their own proprietary or “closed” models.

The startup previously described its family of open models, called “Poro,” as an important step toward “strengthening European digital sovereignty” and democratizing access to LLMs.

The concentration of the most powerful LLMs into the hands of a few US-based Big Tech companies is meanwhile attracting attention from antitrust regulators in Washington and Brussels.

The Silo deal shows AMD seeking to scale its business quickly and drive customer engagement with its own offering. AMD views Silo, which builds custom models for clients, as a link between its “foundational” AI software and the real-world applications of the technology.

Software has become a new battleground for semiconductor companies as they try to lock in customers to their hardware and generate more predictable revenues, outside the boom-and-bust chip sales cycle.

Nvidia’s success in the AI market stems from its multibillion-dollar investment in Cuda, its proprietary software that allows chips originally designed for processing computer graphics and video games to run a wider range of applications.

Since starting to develop Cuda in 2006, Nvidia has expanded its software platform to include a range of apps and services, largely aimed at corporate customers that lack the in-house resources and skills that Big Tech companies have to build on its technology.

Nvidia now offers more than 600 “pre-trained” models, meaning they are simpler for customers to deploy. The Santa Clara, California-based group last month started rolling out a “microservices” platform, called NIM, which promises to let developers build chatbots and AI “co-pilot” services quickly.

Historically, Nvidia has offered its software free of charge to buyers of its chips, but said this year that it planned to charge for products such as NIM.

AMD is among several companies contributing to the development of an OpenAI-led rival to Cuda, called Triton, which would let AI developers switch more easily between chip providers. Meta, Microsoft, and Intel have also worked on Triton.

© 2024 The Financial Times Ltd. All rights reserved. Please do not copy and paste FT articles and redistribute by email or post to the web.



US agencies to probe AI dominance of Nvidia, Microsoft, and OpenAI

AI Antitrust —

DOJ to probe Nvidia while FTC takes lead in investigating Microsoft and OpenAI.

Nvidia logo at Impact 2024 event in Poznan, Poland on May 16, 2024.

Getty Images | NurPhoto

The US Justice Department and Federal Trade Commission reportedly plan investigations into whether Nvidia, Microsoft, and OpenAI are snuffing out competition in artificial intelligence technology.

The agencies struck a deal on how to divide up the investigations, The New York Times reported yesterday. Under this deal, the Justice Department will take the lead role in investigating Nvidia’s behavior while the FTC will take the lead in investigating Microsoft and OpenAI.

The agencies’ agreement “allows them to proceed with antitrust investigations into the dominant roles that Microsoft, OpenAI, and Nvidia play in the artificial intelligence industry, in the strongest sign of how regulatory scrutiny into the powerful technology has escalated,” the NYT wrote.

One potential area of investigation is Nvidia’s chip dominance, “including how the company’s software locks customers into using its chips, as well as how Nvidia distributes those chips to customers,” the report said. An Nvidia spokesperson declined to comment when contacted by Ars today.

High-end GPUs are “scarce,” antitrust chief says

Jonathan Kanter, the assistant attorney general in charge of the DOJ’s antitrust division, discussed the agency’s plans in an interview with the Financial Times this week. Kanter said the DOJ is examining “monopoly choke points and the competitive landscape” in AI.

The DOJ’s examination of the sector encompasses “everything from computing power and the data used to train large language models, to cloud service providers, engineering talent and access to essential hardware such as graphics processing unit chips,” the FT wrote.

Kanter said regulators are worried that AI is “at the high-water mark of competition, not the floor” and want to take action before smaller competitors are shut out of the market. The GPUs needed to train large language models are a “scarce resource,” he was quoted as saying.

“Sometimes the most meaningful intervention is when the intervention is in real time,” Kanter told the Financial Times. “The beauty of that is you can be less invasive.”

Microsoft deal scrutinized

The FTC is scrutinizing Microsoft over a March 2024 move in which it hired the CEO of artificial intelligence startup Inflection and most of the company’s staff and paid Inflection $650 million as part of a licensing deal to resell its technology. The FTC is investigating whether Microsoft structured the deal “to avoid a government antitrust review of the transaction,” The Wall Street Journal reported today.

“Companies are required to report acquisitions valued at more than $119 million to federal antitrust-enforcement agencies, which have the option to investigate a deal’s impact on competition,” the WSJ wrote. The FTC reportedly sent subpoenas to Microsoft and Inflection in an attempt “to determine whether Microsoft crafted a deal that would give it control of Inflection but also dodge FTC review of the transaction.”

Inflection built a large language model and a chatbot called Pi. Former Inflection employees are now working on Microsoft’s Copilot chatbot.

“If the agency finds that Microsoft should have reported and sought government review of its deal with Inflection, the FTC could bring an enforcement action against Microsoft,” the WSJ report said. “Officials could ask a court to fine Microsoft and suspend the transaction while the FTC conducts a full-scale investigation of the deal’s impact on competition.”

Microsoft told the WSJ that it complied with antitrust laws, that Inflection continues to operate independently, and that the deals gave Microsoft “the opportunity to recruit individuals at Inflection AI and build a team capable of accelerating Microsoft Copilot.”

OpenAI

Microsoft’s investment in OpenAI has also faced regulatory scrutiny, particularly in Europe. Microsoft has a profit-sharing agreement with OpenAI.

Microsoft President Brad Smith defended the partnership in comments to the Financial Times this week. “The partnerships that we’re pursuing have demonstrably added competition to the marketplace,” Smith was quoted as saying. “I might argue that Microsoft’s partnership with OpenAI has created this new AI market,” and that OpenAI “would not have been able to train or deploy its models” without Microsoft’s help, he said.

We contacted OpenAI today and will update this article if it provides any comment.

In January 2024, the FTC launched an inquiry into AI-related investments and partnerships involving Alphabet, Amazon, Anthropic, Microsoft, and OpenAI.

The FTC also started a separate investigation into OpenAI last year. A civil investigative demand sent to OpenAI focused on potentially unfair or deceptive privacy and data security practices, and “risks of harm to consumers, including reputational harm.” The probe focused partly on “generation of harmful or misleading content.”



Nvidia emails: Elon Musk diverting Tesla GPUs to his other companies

why not just make cars? —

The Tesla CEO is accused of diverting resources from the company again.

Tesla will have to rely on its Dojo supercomputer for a while longer after CEO Elon Musk diverted 12,000 Nvidia GPUs to X instead.

Tesla

Elon Musk is yet again being accused of diverting Tesla resources to his other companies. This time, it’s high-end H100 GPU clusters from Nvidia. CNBC’s Lora Kolodny reports that while Tesla ordered these pricey computers, emails from Nvidia staff show that Musk instead redirected 12,000 GPUs to be delivered to his social media company X.

It’s almost unheard of for a profitable automaker to pivot its business into another sector, but that appears to be the plan at Tesla as Musk continues to say that the electric car company is destined to be an AI and robotics firm instead.

Does Tesla make cars or AI?

That explains why Musk told investors in April that Tesla had spent $1 billion on GPUs in the first three months of this year, almost as much as it spent on R&D, despite being desperate for new models to add to what is now an old and very limited product lineup that is suffering rapidly declining sales in the US and China.

Despite increasing federal scrutiny here in the US, Tesla has reduced the price of its controversial “full self-driving” assist, and the automaker is said to be close to rolling out the feature in China. (Questions remain about how many Chinese Teslas would be able to utilize this feature given that a critical chip was left out of 1.2 million cars built there during the chip shortage.)

Perfecting this driver assist would be very valuable to Tesla, which offers FSD as a monthly subscription as an alternative to a one-off payment. The profit margins for subscription software services vastly outstrip the margins Tesla can make selling physical cars, which dropped to just 5.5 percent for Q1 2024. And Tesla says that massive GPU clusters are needed to develop FSD’s software.

Isn’t Tesla desperate for Nvidia GPUs?

Tesla has been developing its own in-house supercomputer for AI, called Dojo. But Musk has previously said that computer could be redundant if Tesla could source more H100s. “If they could deliver us enough GPUs, we might not need Dojo, but they can’t because they’ve got so many customers,” Musk said during a July 2023 investor day.

Which makes his decision to have his other companies jump the line all the more notable. In December, an internal Nvidia memo seen by CNBC said, “Elon prioritizing X H100 GPU cluster deployment at X versus Tesla by redirecting 12k of shipped H100 GPUs originally slated for Tesla to X instead. In exchange, original X orders of 12k H100 slated for Jan and June to be redirected to Tesla.”

X and the affiliated xAI are developing generative AI products like large language models.

Not the first time

This is not the first time that Musk has been accused of diverting resources (and his time) from publicly held Tesla to his other privately owned enterprises. In December 2022, US Sen. Elizabeth Warren (D-Mass.) wrote to Tesla asking Tesla to explain whether Musk was diverting Tesla resources to X (then called Twitter):

This use of Tesla employees raises obvious questions about whether Mr. Musk is appropriating resources from a publicly traded firm, Tesla, to benefit his own private company, Twitter. This, of course, would violate Mr. Musk’s legal duty of loyalty to Tesla and trigger questions about the Tesla Board’s responsibility to prevent such actions, and may also run afoul of other “anti-tunneling rules that aim to prevent corporate insiders from extracting resources from their firms.”

Musk giving time meant (and compensated) for by Tesla to SpaceX, X, and his other ventures was also highlighted as a problem by the plaintiffs in a successful lawsuit to overturn a $56 billion stock compensation package.

And last summer, the US Department of Justice opened an investigation into whether Musk used Tesla resources to build a mansion for himself in Texas; the probe has since expanded to cover behavior stretching back to 2017.

These latest accusations of misuse of Tesla resources come at a time when Musk is asking shareholders to reapprove what is now a $46 billion stock compensation plan.

Nvidia emails: Elon Musk diverting Tesla GPUs to his other companies Read More »

nvidia-jumps-ahead-of-itself-and-reveals-next-gen-“rubin”-ai-chips-in-keynote-tease

Nvidia jumps ahead of itself and reveals next-gen “Rubin” AI chips in keynote tease

Swing beat —

“I’m not sure yet whether I’m going to regret this,” says CEO Jensen Huang at Computex 2024.

Enlarge / Nvidia’s CEO Jensen Huang delivers his keynote speech ahead of Computex 2024 in Taipei on June 2, 2024.

On Sunday, Nvidia CEO Jensen Huang reached beyond Blackwell and revealed the company’s next-generation AI-accelerating GPU platform during his keynote at Computex 2024 in Taiwan. Huang also detailed plans for an annual tick-tock-style upgrade cycle of its AI acceleration platforms, mentioning an upcoming Blackwell Ultra chip slated for 2025 and a subsequent platform called “Rubin” set for 2026.

Nvidia’s data center GPUs currently power a large majority of cloud-based AI models, such as ChatGPT, in both development (training) and deployment (inference) phases, and investors are keeping a close watch on the company, with expectations to keep that run going.

During the keynote, Huang seemed somewhat hesitant to make the Rubin announcement, perhaps wary of invoking the so-called Osborne effect, whereby a company’s premature announcement of the next iteration of a tech product eats into the current iteration’s sales. “This is the very first time that this next click has been made,” Huang said, holding up his presentation remote just before the Rubin announcement. “And I’m not sure yet whether I’m going to regret this or not.”

Nvidia’s keynote at Computex 2024.

The Rubin AI platform, expected in 2026, will use HBM4 (a new form of high-bandwidth memory) and NVLink 6 Switch, operating at 3,600 GB/s. Following that launch, Nvidia will release a tick-tock iteration called “Rubin Ultra.” While Huang did not provide extensive specifications for the upcoming products, he promised cost and energy savings related to the new chipsets.

During the keynote, Huang also introduced a new ARM-based CPU called “Vera,” which will be featured on a new accelerator board called “Vera Rubin,” alongside one of the Rubin GPUs.

Much like Nvidia’s Grace Hopper architecture, which combines a “Grace” CPU and a “Hopper” GPU to pay tribute to the pioneering computer scientist of the same name, Vera Rubin refers to Vera Florence Cooper Rubin (1928–2016), an American astronomer who made discoveries in the field of deep space astronomy. She is best known for her pioneering work on galaxy rotation rates, which provided strong evidence for the existence of dark matter.

A calculated risk

Enlarge / Nvidia CEO Jensen Huang reveals the “Rubin” AI platform for the first time during his keynote at Computex 2024 on June 2, 2024.

Nvidia’s reveal of Rubin is not a surprise in the sense that most big tech companies are continuously working on follow-up products well in advance of release, but it’s notable because it comes just three months after the company revealed Blackwell, which is barely out of the gate and not yet widely shipping.

At the moment, the company seems to be comfortable leapfrogging itself with new announcements and catching up later; Nvidia just announced that its GH200 Grace Hopper “Superchip,” unveiled one year ago at Computex 2023, is now in full production.

With Nvidia stock rising and the company possessing an estimated 70–95 percent of the data center GPU market share, the Rubin reveal is a calculated risk that seems to come from a place of confidence. That confidence could turn out to be misplaced if a so-called “AI bubble” pops or if Nvidia misjudges the capabilities of its competitors. The announcement may also stem from pressure to continue Nvidia’s astronomical growth in market cap with nonstop promises of improving technology.

Accordingly, Huang has been eager to showcase the company’s plans to continue pushing silicon fabrication tech to its limits and widely broadcast that Nvidia plans to keep releasing new AI chips at a steady cadence.

“Our company has a one-year rhythm. Our basic philosophy is very simple: build the entire data center scale, disaggregate and sell to you parts on a one-year rhythm, and we push everything to technology limits,” Huang said during Sunday’s Computex keynote.

Despite Nvidia’s recent market performance, the company’s run may not continue indefinitely. With ample money pouring into the data center AI space, Nvidia isn’t alone in developing accelerator chips. Competitors like AMD (with the Instinct series) and Intel (with Gaudi 3) also want to carve a slice of the data center GPU market away from Nvidia’s current command of the AI-accelerator space. And OpenAI’s Sam Altman is trying to encourage diversified production of GPU hardware that will power the company’s next generation of AI models in the years ahead.

Nvidia jumps ahead of itself and reveals next-gen “Rubin” AI chips in keynote tease Read More »

geforce-now-has-made-steam-deck-streaming-much-easier-than-it-used-to-be

GeForce Now has made Steam Deck streaming much easier than it used to be

Easy, but we’re talking Linux easy —

Ask someone who previously did it the DIY way.

Fallout 4 running on a Steam Deck through GeForce Now

Enlarge / Streaming Fallout 4 from GeForce Now might seem unnecessary, unless you know how running it natively has been going.

Kevin Purdy

The Steam Deck is a Linux computer. There is, technically, very little you cannot get running on it, given enough knowledge, time, and patience. That said, it’s never a bad thing when someone has done all the work for you, leaving you to focus on what matters: sneaking game time on the couch.

GeForce Now, Nvidia’s game-streaming service that uses your own PC gaming libraries, has made it easier for Steam Deck owners to get its service set up on their Deck. On the service’s Download page, there is now a section for Gaming Handheld Devices. Most of the device links provide the service’s Windows installer, since devices like the ROG Ally and Lenovo Legion Go run Windows. Some note that GeForce Now is already installed on devices like the Razer Edge and Logitech G Cloud.

But Steam Deck types are special. We get a Unix-style executable script, a folder with all the necessary Steam icon image assets, and a README.md file.

It has technically been possible all this time, if a Deck owner was willing to fiddle about with installing Chrome in desktop mode, tweaking a half-dozen Steam settings, and then navigating the GeForce Now site with a trackpad. GeForce Now’s script, once you download it from a browser in the Deck’s desktop mode, does a few things:

  • Installs the Google Chrome browser through the Deck’s built-in Flatpak support
  • Adjusts Chrome’s settings to allow for gamepad support in the browser
  • Sets up GeForce Now in Steam with proper command-line options and icons for every window
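For the curious, the manual process that the script automates looks roughly like the sketch below. The commands follow Nvidia’s published browser-setup instructions for the Deck; the exact flags and launch options are illustrative, not a dump of the script’s actual contents:

```shell
# Run from the Steam Deck's desktop mode (e.g., in Konsole).

# 1. Install Chrome through the Deck's built-in Flatpak support
flatpak install -y flathub com.google.Chrome

# 2. Give Chrome read access to /run/udev so it can see gamepads
flatpak override --user --filesystem=/run/udev:ro com.google.Chrome

# 3. In Steam, add Chrome as a non-Steam game, then set launch options
#    so it opens GeForce Now full-screen with Deck-friendly scaling:
#
#    run --branch=stable --arch=x86_64 --command=/app/bin/chrome \
#      --window-size=1024,640 --force-device-scale-factor=1.25 \
#      --device-scale-factor=1.25 --kiosk "https://play.geforcenow.com"
```

The script’s real value is doing step 3 for you, icons and all, without any trackpad spelunking.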

That last bit about the icons may seem small, but it’s a pain in the butt to find properly sized images for the many different views Steam can show for a game in your library: when it’s selected, when it was recently played, and so on. As for the script itself, it worked fine, even though I had previously installed Chrome and created a different Steam shortcut. I got a notice on first launch that Chrome couldn’t update, so I was missing out on all its “new features,” but that could well be unrelated.

I was almost disappointed that GeForce Now’s script just quietly worked and then asked me to head back into Gaming Mode. Too easy!

Kevin Purdy

GeForce Now isn’t for everyone, and certainly not for every Steam Deck owner. Because the standard Steam Deck LCD screen only goes to 800p and 60 Hz, paying for a rig running in a remote data center to power your high-resolution, impressive-looking game doesn’t always make sense. With the advent of the Steam Deck OLED, however, the games look a lot brighter and more colorful and run up to 90 Hz. You also get a lot more battery life from streaming than you do from local hardware, which is still pretty much the same as it was with the LCD model.

GeForce Now also offers a free membership option and $4 “day passes” to test if your Wi-Fi (or docked Ethernet) connection would make a $10/month Priority or $20/month Ultimate membership worthwhile (both with cheaper pre-paid prices). The service has in recent months been adding games from Game Pass subscriptions and Microsoft Store purchases, Blizzard (i.e., Battle.net), and a lot of same-day Steam launch titles.

If you’re already intrigued by GeForce Now for your other screens and were wondering if it could fly on a Steam Deck, now it does, and it’s only about 10 percent as painful. Whether that’s more or less painful than buying your own GPU and running your own Deck streaming is another matter.

GeForce Now has made Steam Deck streaming much easier than it used to be Read More »

critics-question-tech-heavy-lineup-of-new-homeland-security-ai-safety-board

Critics question tech-heavy lineup of new Homeland Security AI safety board

Adventures in 21st century regulation —

CEO-heavy board to tackle elusive AI safety concept and apply it to US infrastructure.

A modified photo of a 1956 scientist carefully bottling

On Friday, the US Department of Homeland Security announced the formation of an Artificial Intelligence Safety and Security Board that consists of 22 members pulled from the tech industry, government, academia, and civil rights organizations. But given the nebulous nature of the term “AI,” which can apply to a broad spectrum of computer technology, it’s unclear if this group will even be able to agree on what exactly they are safeguarding us from.

President Biden directed DHS Secretary Alejandro Mayorkas to establish the board, which will meet for the first time in early May and subsequently on a quarterly basis.

The fundamental assumption posed by the board’s existence, and reflected in Biden’s AI executive order from October, is that AI is an inherently risky technology and that American citizens and businesses need to be protected from its misuse. Along those lines, the goal of the group is to help guard against foreign adversaries using AI to disrupt US infrastructure; develop recommendations to ensure the safe adoption of AI tech into transportation, energy, and Internet services; foster cross-sector collaboration between government and businesses; and create a forum for AI leaders to share information on AI security risks with the DHS.

It’s worth noting that the ill-defined nature of the term “Artificial Intelligence” does the new board no favors regarding scope and focus. AI can mean many different things: It can power a chatbot, fly an airplane, control the ghosts in Pac-Man, regulate the temperature of a nuclear reactor, or play a great game of chess. It can be all those things and more, and since many of those applications of AI work very differently, there’s no guarantee any two people on the board will be thinking about the same type of AI.

This confusion is reflected in the quotes provided by the DHS press release from new board members, some of whom are already talking about different types of AI. While OpenAI, Microsoft, and Anthropic are monetizing generative AI systems like ChatGPT based on large language models (LLMs), Ed Bastian, the CEO of Delta Air Lines, refers to entirely different classes of machine learning when he says, “By driving innovative tools like crew resourcing and turbulence prediction, AI is already making significant contributions to the reliability of our nation’s air travel system.”

So, defining the scope of what AI exactly means—and which applications of AI are new or dangerous—might be one of the key challenges for the new board.

A roundtable of Big Tech CEOs attracts criticism

For the inaugural meeting of the AI Safety and Security Board, the DHS selected a tech industry-heavy group, populated with CEOs of four major AI vendors (Sam Altman of OpenAI, Satya Nadella of Microsoft, Sundar Pichai of Alphabet, and Dario Amodei of Anthropic), CEO Jensen Huang of top AI chipmaker Nvidia, and representatives from other major tech companies like IBM, Adobe, Amazon, Cisco, and AMD. There are also reps from big aerospace and aviation: Northrop Grumman and Delta Air Lines.

Upon reading the announcement, some critics took issue with the board composition. On LinkedIn, Timnit Gebru, founder of The Distributed AI Research Institute (DAIR), took particular issue with OpenAI’s presence on the board and wrote, “I’ve now seen the full list and it is hilarious. Foxes guarding the hen house is an understatement.”

Critics question tech-heavy lineup of new Homeland Security AI safety board Read More »

home-assistant-has-a-new-foundation-and-a-goal-to-become-a-consumer-brand

Home Assistant has a new foundation and a goal to become a consumer brand

An Open Home stuffed full of code —

Can a non-profit foundation get Home Assistant to the point of Home Depot boxes?

Open Home Foundation logo on a multicolor background

Open Home Foundation

Home Assistant, until recently, has been a wide-ranging and hard-to-define project.

The open smart home platform is an open source OS you can run anywhere that aims to connect all your devices together. But it’s also bespoke Raspberry Pi hardware, in Yellow and Green. It’s entirely free, but it also receives funding through a private cloud services company, Nabu Casa. It encompasses ESPHome, a firmware project for tiny microcontroller boards, and other interconnected bits. It has wide-ranging voice assistant ambitions, but it doesn’t want to be Alexa or Google Assistant. Home Assistant is a lot.

After an announcement this weekend, however, Home Assistant’s shape is a bit easier to draw out. All of the project’s ambitions now fall under the Open Home Foundation, a non-profit organization that now contains Home Assistant and more than 240 related bits. Its mission statement is refreshing, and refreshingly honest about the state of modern open source projects.

The three pillars of the Open Home Foundation.

The three pillars of the Open Home Foundation.

Open Home Foundation

“We’ve done this to create a bulwark against surveillance capitalism, the risk of buyout, and open-source projects becoming abandonware,” the Open Home Foundation states in a press release. “To an extent, this protection extends even against our future selves—so that smart home users can continue to benefit for years, if not decades. No matter what comes.” Along with keeping Home Assistant funded and secure from buy-outs or mission creep, the foundation intends to help fund and collaborate with external projects crucial to Home Assistant, like Z-Wave JS and Zigbee2MQTT.


Home Assistant’s ambitions don’t stop with money and board seats, though. The foundation aims to “be an active political advocate” in the smart home field, in service of three primary principles:

  • Data privacy, which means devices with local-only options, and cloud services with explicit permissions
  • Choice in using devices with one another through open standards and local APIs
  • Sustainability by repurposing old devices and appliances beyond company-defined lifetimes

Notably, individuals cannot contribute modest-size donations to the Open Home Foundation. Instead, the foundation asks supporters to purchase a Nabu Casa subscription or contribute code or other help to its open source projects.

From a few lines of Python to a foundation

Home Assistant founder Paulus Schoutsen wanted better control of his Philips Hue smart lights back around 2013 and wrote a Python script to get it. Thousands of volunteer contributions later, Home Assistant was becoming a real thing. Schoutsen and other volunteers inevitably started to feel overwhelmed by the “free time” coding and urgent bug fixes. So Schoutsen, Ben Bangert, and Pascal Vizeli founded Nabu Casa, a for-profit firm intended to stabilize funding and paid work on Home Assistant.

Through that stability, Home Assistant could direct full-time work to various projects, take ownership of things like ESPHome, and officially contribute to open standards like Zigbee, Z-Wave, and Matter. But Home Assistant was “floating in a kind of undefined space between a for-profit entity and an open-source repository on GitHub,” according to the foundation. The Open Home Foundation creates the formal home for everything that needs it and makes Nabu Casa a “special, rules-bound inaugural partner” to better delineate the business and non-profit sides.

Home Assistant as a Home Depot box?

In an interview with The Verge’s Jennifer Pattison Tuohy, and in a State of the Open Home stream over the weekend, Schoutsen also suggested that the Foundation gives Home Assistant a more stable footing by which to compete against the bigger names in smart homes, like Amazon, Google, Apple, and Samsung. The Home Assistant Green starter hardware will sell on Amazon this year, along with HA-badged extension dongles. A dedicated voice control hardware device that enables a local voice assistant is coming before year’s end. Home Assistant is partnering with Nvidia and its Jetson edge AI platform to help make local assistants better, faster, and more easily integrated into a locally controlled smart home.

That also means Home Assistant is growing as a brand, not just a product. Home Assistant’s “Works With” program is picking up new partners and has broad ambitions. “We want to be a consumer brand,” Schoutsen told Tuohy. “You should be able to walk into a Home Depot and be like, ‘I care about my privacy; this is the smart home hub I need.’”

Where does this leave existing Home Assistant enthusiasts, who are probably familiar with the feeling of a tech brand pivoting away from them? It’s hard to imagine Home Assistant dropping its advanced automation tools and YAML-editing offerings entirely, though Schoutsen suggested he could imagine a split between regular and “advanced” users down the line. Still, Home Assistant’s open nature, and now its foundation, should ensure that people will always be able to remix, reconfigure, or re-release the version of the smart home they prefer.

Home Assistant has a new foundation and a goal to become a consumer brand Read More »