On Monday, OpenAI announced the GPT-4.1 model family, its newest series of AI language models that brings a 1 million token context window to OpenAI for the first time and continues a long tradition of very confusing AI model names. Three confusing new names, in fact: GPT‑4.1, GPT‑4.1 mini, and GPT‑4.1 nano.
According to OpenAI, these models outperform GPT-4o in several key areas. But in an unusual move, GPT-4.1 will only be available through the developer API, not in the consumer ChatGPT interface where most people interact with OpenAI’s technology.
The 1 million token context window—essentially the amount of text the AI can process at once—allows these models to ingest roughly 3,000 pages of text in a single conversation. This puts OpenAI’s context windows on par with Google’s Gemini models, which have offered similar extended context capabilities for some time.
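That "3,000 pages" figure is easy to sanity-check with a back-of-envelope calculation, assuming the common rules of thumb of roughly 0.75 English words per token and roughly 250 words per printed page (these ratios are our assumptions, not OpenAI's math):

```python
# Back-of-envelope check on the "roughly 3,000 pages" figure.
# Assumptions: ~0.75 English words per token, ~250 words per page.
tokens = 1_000_000
words = tokens * 0.75          # ~750,000 words
pages = words / 250            # ~3,000 pages
print(f"{pages:,.0f} pages")   # -> 3,000 pages
```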
At the same time, the company announced it will retire the GPT-4.5 Preview model in the API—a temporary offering launched in February that one critic called a “lemon”—giving developers until July 2025 to switch to something else. However, it appears GPT-4.5 will stick around in ChatGPT for now.
So many names
If this sounds confusing, well, that’s because it is. OpenAI CEO Sam Altman acknowledged OpenAI’s habit of terrible product names in February when discussing the roadmap toward the long-anticipated (and still theoretical) GPT-5.
“We realize how complicated our model and product offerings have gotten,” Altman wrote on X at the time, referencing a ChatGPT interface already crowded with choices like GPT-4o, various specialized GPT-4o versions, GPT-4o mini, the simulated reasoning o1-pro, o3-mini, and o3-mini-high models, and GPT-4. The stated goal for GPT-5 will be consolidation, a branding move to unify o-series models and GPT-series models.
So, how does launching another distinctly numbered model, GPT-4.1, fit into that grand unification plan? It’s hard to say. Altman foreshadowed this kind of ambiguity in March 2024, telling Lex Fridman the company had major releases coming but was unsure about names: “before we talk about a GPT-5-like model called that, or not called that, or a little bit worse or a little bit better than what you’d expect…”
Elon Musk’s “relentless” attacks on OpenAI must cease, court filing says
Yesterday, OpenAI counter-sued Elon Musk, alleging that Musk’s “sham” bid to buy OpenAI was intentionally timed to maximally disrupt and potentially even frighten off investments from honest bidders.
Slamming Musk for attempting to become an “AGI dictator,” OpenAI said that if Musk’s allegedly “relentless” yearslong campaign of “harassment” isn’t stopped, Musk could end up taking over OpenAI and tanking its revenue the same way he did with Twitter.
In its filing, OpenAI argued that Musk and the other investors who joined his bid completely fabricated the $97.375 billion offer. It was allegedly not based on OpenAI’s projections or historical performance, like Musk claimed, but instead appeared to be “a comedic reference to Musk’s favorite sci-fi” novel, Iain M. Banks’ Look to Windward. Musk and others also provided “no evidence of financing to pay the nearly $100 billion purchase price,” OpenAI said.
And perhaps most damning, one of Musk’s backers, Ron Baron, appeared “flustered” when asked about the deal on CNBC, OpenAI alleged. On air, Baron admitted that he didn’t follow the deal closely and that “the point of the bid, as pitched to him (plainly by Musk) was not to buy OpenAI’s assets, but instead to obtain ‘discovery’ and get ‘behind the wall’ at OpenAI,” the AI company’s court filing alleged.
Likely poisoning potential deals most, OpenAI suggested, was the idea that Musk might take over OpenAI and damage its revenue like he did with Twitter. Just the specter of that could repel talent, OpenAI feared, since “the prospect of a Musk takeover means chaos and arbitrary employment action.”
And “still worse, the threat of a Musk takeover is a threat to the very mission of building beneficial AGI,” since xAI is allegedly “the worst offender” in terms of “inadequate safety measures,” according to one study, and X’s chatbot, Grok, has “become a leading spreader of misinformation and inflammatory political rhetoric,” OpenAI said. Even xAI representatives had to admit that users discovering that Grok consistently responds that “President Donald Trump and Musk deserve the death penalty” was a “really terrible and bad failure,” OpenAI’s filing said.
Despite Musk appearing to only be “pretending” to be interested in purchasing OpenAI—and OpenAI ultimately rejecting the offer—the company still had to cover the costs of reviewing the bid. And beyond bearing costs and confronting an artificially raised floor on the company’s valuation supposedly frightening off investors, “a more serious toll” of “Musk’s most recent ploy” would be OpenAI lacking resources to fulfill its mission to benefit humanity with AI “on terms uncorrupted by unlawful harassment and interference,” OpenAI said.
OpenAI has demanded a jury trial and is seeking an injunction to stop Musk’s alleged unfair business practices—which it claims are designed to impair competition in the nascent AI field “for the sole benefit of Musk’s xAI” and “at the expense of the public interest.”
“The risk of future, irreparable harm from Musk’s unlawful conduct is acute, and the risk that that conduct continues is high,” OpenAI alleged. “With every month that has passed, Musk has intensified and expanded the fronts of his campaign against OpenAI, and has proven himself willing to take ever more dramatic steps to seek a competitive advantage for xAI and to harm [OpenAI CEO Sam] Altman, whom, in the words of the president of the United States, Musk ‘hates.'”
OpenAI also wants Musk to cover the costs it incurred from entertaining the supposedly fake bid, as well as pay punitive damages to be determined at trial for allegedly engaging “in wrongful conduct with malice, oppression, and fraud.”
“You can’t sue your way to AGI,” an OpenAI blog post said.
In response to OpenAI’s filing, Musk’s lawyer, Marc Toberoff, provided a statement to Ars.
“Had OpenAI’s Board genuinely considered the bid, as they were obligated to do, they would have seen just how serious it was,” Toberoff said. “It’s telling that having to pay fair market value for OpenAI’s assets allegedly ‘interferes’ with their business plans. It’s apparent they prefer to negotiate with themselves on both sides of the table than engage in a bona fide transaction in the best interests of the charity and the public interest.”
Musk’s attempt to become an “AGI dictator”
According to OpenAI’s filing, “Musk has tried every tool available to harm OpenAI” ever since OpenAI refused to allow Musk to become an “AGI dictator” and fully control OpenAI by absorbing it into Tesla in 2018.
Musk allegedly “demanded sole control of the new for-profit, at least in the short term: He would be CEO, own a majority equity stake, and control a majority of the board,” OpenAI said. “He would—in his own words—’unequivocally have initial control of the company.'”
At the time, OpenAI rejected Musk’s offer, viewing it as in conflict with its mission to avoid corporate control and telling Musk:
“You stated that you don’t want to control the final AGI, but during this negotiation, you’ve shown to us that absolute control is extremely important to you. … The goal of OpenAI is to make the future good and to avoid an AGI dictatorship. … So it is a bad idea to create a structure where you could become a dictator if you chose to, especially given that we can create some other structure that avoids this possibility.”
This news did not sit well with Musk, OpenAI said.
“Musk was incensed,” OpenAI told the court. “If he could not control the contemplated for-profit entity, he would not participate in it.”
Back then, Musk departed from OpenAI somewhat “amicably,” OpenAI said, although Musk insisted it was “obvious” that OpenAI would fail without him. However, after OpenAI instead became a global AI leader, Musk quietly founded xAI, OpenAI alleged, failing to publicly announce his new company while deceptively seeking a “moratorium” on AI development, apparently to slow down rivals so that xAI could catch up.
OpenAI also alleges that this is when Musk began intensifying his attacks on OpenAI while attempting to poach its top talent and demanding access to OpenAI’s confidential, sensitive information as a former donor and director—”without ever disclosing he was building a competitor in secret.”
And the attacks have only grown more intense since then, said OpenAI, claiming that Musk planted stories in the media, wielded his influence on X, requested government probes into OpenAI, and filed multiple legal claims, including seeking an injunction to halt OpenAI’s business.
“Most explosively,” OpenAI alleged that Musk pushed attorneys general of California and Delaware “to force OpenAI, Inc., without legal basis, to auction off its assets for the benefit of Musk and his associates.”
Meanwhile, OpenAI noted, Musk has folded his social media platform X into xAI, announcing a valuation of $80 billion and gaining “a major competitive advantage” by getting “unprecedented direct access to all the user data flowing through” X. Further, Musk intends to expand his “Colossus,” which is “believed to be the world’s largest supercomputer,” “tenfold.” That could help Musk “leap ahead” of OpenAI, suggesting he has motive to delay OpenAI’s growth while he pursues that goal.
That’s why Musk “set in motion a campaign of harassment, interference, and misinformation designed to take down OpenAI and clear the field for himself,” OpenAI alleged.
Even while counter-suing, OpenAI appears careful not to poke the bear too hard. In the court filing and on X, OpenAI praised Musk’s leadership skills and the potential for xAI to dominate the AI industry, partly due to its unique access to X data. But ultimately, OpenAI seems happy to be operating independently of Musk now, asking the court to agree that “Elon’s never been about the mission” of benefiting humanity with AI, “he’s always had his own agenda.”
“Elon is undoubtedly one of the greatest entrepreneurs of our time,” OpenAI said on X. “But these antics are just history on repeat—Elon being all about Elon.”
OpenAI is entering the final stages of designing its long-rumored AI processor with the aim of decreasing the company’s dependence on Nvidia hardware, according to a Reuters report released Monday. The ChatGPT creator plans to send its chip designs to Taiwan Semiconductor Manufacturing Co. (TSMC) for fabrication within the next few months, but the chip has not yet been formally announced.
The OpenAI chip’s full capabilities, technical details, and exact timeline are still unknown, but the company reportedly intends to iterate on the design and improve it over time, giving it leverage in negotiations with chip suppliers—and potentially granting the company future independence with a chip design it controls outright.
In the past, we’ve seen other tech companies, such as Microsoft, Amazon, Google, and Meta, create their own AI acceleration chips for reasons that range from cost reduction to relieving shortages of AI chips supplied by Nvidia, which enjoys a near-monopoly on high-powered GPUs (such as the Blackwell series) for data center use.
In October 2023, we covered a report about OpenAI’s intention to create its own AI accelerator chips for similar reasons, so OpenAI’s custom chip project has been in the works for some time. In early 2024, OpenAI CEO Sam Altman also began spending considerable time traveling around the world trying to raise up to a reported $7 trillion to increase world chip fabrication capacity.
Speaking at an event in Switzerland, Anthropic CEO Dario Amodei shared his concerns about how human-level AI models and robotics capable of replacing all human labor may require a complete rethink of how humans value both labor and themselves.
“We’ve recognized that we’ve reached the point as a technological civilization where the idea, there’s huge abundance and huge economic value, but the idea that the way to distribute that value is for humans to produce economic labor, and this is where they feel their sense of self worth,” he added. “Once that idea gets invalidated, we’re all going to have to sit down and figure it out.”
The eye-catching comments, similar to comments about AGI made recently by OpenAI CEO Sam Altman, come as Anthropic negotiates a $2 billion funding round that would value the company at $60 billion. Amodei disclosed that Anthropic’s revenue multiplied tenfold in 2024.
Amodei distances himself from “AGI” term
Even with his dramatic predictions, Amodei distanced himself from “artificial general intelligence” (AGI), the term Altman favors for this advanced labor-replacing AI, calling it a marketing term in a separate CNBC interview at the same event in Switzerland.
Instead, he prefers to describe future AI systems as a “country of geniuses in a data center,” he told CNBC. Amodei wrote in an October 2024 essay that such systems would need to be “smarter than a Nobel Prize winner across most relevant fields.”
On Monday, Google announced an additional $1 billion investment in Anthropic, bringing its total commitment to $3 billion. This follows Amazon’s $8 billion investment over the past 18 months. Amazon plans to integrate Claude models into future versions of its Alexa speaker.
Video of the Stargate announcement conference at the White House.
Despite optimism from the companies involved, as CNN reports, past presidential investment announcements have yielded mixed results. In 2017, Trump and Foxconn unveiled plans for a $10 billion Wisconsin electronics factory promising 13,000 jobs. The project later scaled back to a $672 million investment with fewer than 1,500 positions. The facility now operates as a Microsoft AI data center.
The Stargate announcement wasn’t Trump’s only major AI move announced this week. It follows the newly inaugurated US president’s reversal of a 2023 Biden executive order on AI risk monitoring and regulation.
Altman speaks, Musk responds
On Tuesday, OpenAI CEO Sam Altman appeared at a White House press conference alongside President Trump, Oracle CEO Larry Ellison, and SoftBank CEO Masayoshi Son to announce Stargate.
Altman said he thinks Stargate represents “the most important project of this era,” allowing AGI to emerge in the United States. He believes that future AI technology could create hundreds of thousands of jobs. “We wouldn’t be able to do this without you, Mr. President,” Altman added.
Responding to off-camera questions from Trump about AI’s potential to spur scientific development, Altman said he believes AI will accelerate the discovery of cures for diseases like cancer and heart disease.
Screenshots of Elon Musk challenging the Stargate announcement on X.
Meanwhile on X, Trump ally and frequent Altman foe Elon Musk immediately attacked the Stargate plan, writing, “They don’t actually have the money,” and following up with a claim that we cannot yet substantiate, saying, “SoftBank has well under $10B secured. I have that on good authority.”
Musk’s criticism has complex implications given his very close ties to Trump, his history of litigating against OpenAI (which he co-founded and later left), and his own goals with his xAI company.
OpenAI has finally shared details about its plans to shake up its core business by shifting to a for-profit corporate structure.
On Thursday, OpenAI posted on its blog, confirming that in 2025, the existing for-profit arm will be transformed into a Delaware-based public benefit corporation (PBC). As a PBC, OpenAI would be required to balance its shareholders’ and stakeholders’ interests with the public benefit. To achieve that, OpenAI would offer “ordinary shares of stock” while using some profits to further its mission—”ensuring artificial general intelligence (AGI) benefits all of humanity”—to serve a social good.
To compensate for losing control over the for-profit, the nonprofit would have some shares in the PBC, but it’s currently unclear how many will be allotted. Independent financial advisors will help OpenAI reach a “fair valuation,” the blog said, while promising the new structure would “multiply” the donations that previously supported the nonprofit.
“Our plan would result in one of the best resourced nonprofits in history,” OpenAI said. (During its latest funding round, OpenAI was valued at $157 billion.)
OpenAI claimed the nonprofit’s mission would be more sustainable under the proposed changes, as the costs of AI innovation only continue to compound. The new structure would set the PBC up to control OpenAI’s operations and business while the nonprofit would “hire a leadership team and staff to pursue charitable initiatives in sectors such as health care, education, and science,” OpenAI said.
Some of OpenAI’s rivals, such as Anthropic and Elon Musk’s xAI, use a similar corporate structure, OpenAI noted.
Critics had previously pushed back on this plan, arguing that humanity may be better served if the nonprofit continues controlling the for-profit arm of OpenAI. But OpenAI argued that the old way made it hard for the Board “to directly consider the interests of those who would finance the mission and does not enable the non-profit to easily do more than control the for-profit.”
On Monday, OpenAI kicked off its annual DevDay event in San Francisco, unveiling four major API updates for developers who integrate the company’s AI models into their products. Unlike last year’s single-location event featuring a keynote by CEO Sam Altman, DevDay 2024 is more than just one day, adopting a global approach with additional events planned for London on October 30 and Singapore on November 21.
The San Francisco event, which was invitation-only and closed to press, featured on-stage speakers going through technical presentations. Perhaps the most notable new API feature is the Realtime API, now in public beta, which supports speech-to-speech conversations using six preset voices and enables developers to build features very similar to ChatGPT’s Advanced Voice Mode (AVM) into their applications.
OpenAI says that the Realtime API streamlines the process of creating voice assistants. Previously, developers had to use multiple models for speech recognition, text processing, and text-to-speech conversion. Now, they can handle the entire process with a single API call.
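To make that concrete, here is a minimal sketch of what a Realtime API session can look like over a WebSocket connection. It is a hedged example based on the public beta documentation; the endpoint, model name, beta header, and event shapes should be treated as assumptions rather than a definitive reference, and the `extra_headers` argument assumes an older release of the `websockets` library.

```python
# Minimal sketch of a Realtime API session (beta). The endpoint, model
# name, and event fields are assumptions based on the public beta docs.
import asyncio
import json
import os

import websockets  # pip install websockets

async def main():
    url = "wss://api.openai.com/v1/realtime?model=gpt-4o-realtime-preview"
    headers = {
        "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
        "OpenAI-Beta": "realtime=v1",  # beta opt-in header
    }
    async with websockets.connect(url, extra_headers=headers) as ws:
        # Ask for a spoken response plus a text transcript in one round trip.
        await ws.send(json.dumps({
            "type": "response.create",
            "response": {
                "modalities": ["audio", "text"],
                "instructions": "Greet the user in one short sentence.",
            },
        }))
        # Stream server events until the response finishes.
        async for message in ws:
            event = json.loads(message)
            print(event.get("type"))
            if event.get("type") == "response.done":
                break

asyncio.run(main())
```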
The company plans to add audio input and output capabilities to its Chat Completions API in the next few weeks, allowing developers to input text or audio and receive responses in either format.
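Since the feature had only been announced, not documented, at the time, any code is speculative, but based on how the Chat Completions interface is structured, a text-in, audio-out call could plausibly look like the sketch below. The model name, `modalities` field, and `audio` parameters are all assumptions.

```python
# Speculative sketch of audio output via the Chat Completions API; the
# model name, "modalities" field, and "audio" parameters are assumptions,
# since the feature was announced but not yet documented.
import base64

from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

completion = client.chat.completions.create(
    model="gpt-4o-audio-preview",        # hypothetical audio-capable model
    modalities=["text", "audio"],        # request both formats back
    audio={"voice": "alloy", "format": "wav"},
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)

# The audio arrives base64-encoded alongside the text response.
with open("hello.wav", "wb") as f:
    f.write(base64.b64decode(completion.choices[0].message.audio.data))
```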
Two new options for cheaper inference
OpenAI also announced two features that may help developers balance performance and cost when making AI applications. “Model distillation” offers a way for developers to fine-tune (customize) smaller, cheaper models like GPT-4o mini using outputs from more advanced models such as GPT-4o and o1-preview. This potentially allows developers to get more relevant and accurate outputs while running the cheaper model.
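In practice, that workflow amounts to collecting the larger model’s answers and using them as fine-tuning data for the smaller one. Here is a minimal sketch against OpenAI’s fine-tuning interface; the model names and the chat-format JSONL layout are assumptions based on the public API, not code from OpenAI’s announcement.

```python
# Hedged sketch of a distillation workflow: collect outputs from a larger
# "teacher" model, then fine-tune a smaller "student" model on them.
import json

from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompts = [
    "Summarize the water cycle in one sentence.",
    "Explain photosynthesis in one sentence.",
]

# 1. Generate teacher outputs with the larger model.
rows = []
for prompt in prompts:
    teacher = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    rows.append({"messages": [
        {"role": "user", "content": prompt},
        {"role": "assistant", "content": teacher.choices[0].message.content},
    ]})

# 2. Write the prompt/answer pairs as JSONL and upload them for fine-tuning.
with open("distill.jsonl", "w") as f:
    for row in rows:
        f.write(json.dumps(row) + "\n")
train_file = client.files.create(file=open("distill.jsonl", "rb"),
                                 purpose="fine-tune")

# 3. Fine-tune the cheaper student model on the teacher's outputs.
job = client.fine_tuning.jobs.create(training_file=train_file.id,
                                     model="gpt-4o-mini-2024-07-18")
print(job.id)
```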
Also, OpenAI announced “prompt caching,” a feature similar to one introduced by Anthropic for its Claude API in August. It speeds up inference (the AI model generating outputs) by reusing recently processed input tokens, which gives developers a 50 percent discount on those tokens and faster processing times for repeated prompts.
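Because the caching keys off repeated input, the practical upshot for developers is to structure prompts so the large, unchanging part comes first and the per-request part comes last. Below is a minimal sketch of that pattern; the caching itself happens automatically on OpenAI’s side, and the document filename and model name here are placeholders.

```python
# Sketch of structuring requests so prompt caching can help: keep the
# large, unchanging context at the front of the prompt and the
# per-request text at the end, so repeated calls share a cacheable prefix.
from openai import OpenAI  # pip install openai

client = OpenAI()

STATIC_CONTEXT = open("product_manual.txt").read()  # hypothetical large document

def ask(question: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            # Identical across calls -> eligible for input-token caching.
            {"role": "system", "content": STATIC_CONTEXT},
            # Only this part changes per request.
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content
```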
And last but not least, the company expanded its fine-tuning capabilities to include images (what it calls “vision fine-tuning”), allowing developers to customize GPT-4o by feeding it both custom images and text. Basically, developers can teach the multimodal version of GPT-4o to visually recognize certain things. OpenAI says the new feature opens up possibilities for improved visual search functionality, more accurate object detection for autonomous vehicles, and possibly enhanced medical image analysis.
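Conceptually, each training example pairs one or more images with the answer the model should learn to give. The JSONL row below is a hedged sketch of what such an example could look like, modeled on OpenAI’s chat fine-tuning format; the exact schema and the image URL are assumptions.

```python
# Hedged sketch of a single "vision fine-tuning" training example: one
# JSONL row pairing an image with the desired answer. The schema is an
# assumption modeled on OpenAI's chat fine-tuning format.
import json

row = {
    "messages": [
        {"role": "user", "content": [
            {"type": "text", "text": "What landmark is shown in this photo?"},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/golden-gate.jpg"}},  # hypothetical URL
        ]},
        {"role": "assistant", "content": "The Golden Gate Bridge."},
    ]
}

# Append the example to the training file uploaded for fine-tuning.
with open("vision_train.jsonl", "a") as f:
    f.write(json.dumps(row) + "\n")
```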
Where’s the Sam Altman keynote?
OpenAI CEO Sam Altman speaks during the OpenAI DevDay event on November 6, 2023, in San Francisco.
Unlike last year, DevDay isn’t being streamed live, though OpenAI plans to post content later on its YouTube channel. The event’s programming includes breakout sessions, community spotlights, and demos. But the biggest change since last year is the lack of a keynote appearance from the company’s CEO. This year, the keynote was handled by the OpenAI product team.
On last year’s inaugural DevDay, November 6, 2023, OpenAI CEO Sam Altman delivered a Steve Jobs-style live keynote to assembled developers, OpenAI employees, and the press. During his presentation, Microsoft CEO Satya Nadella made a surprise appearance, talking up the partnership between the companies.
Eleven days later, the OpenAI board fired Altman, triggering a week of turmoil that resulted in Altman’s return as CEO and a new board of directors. Just after the firing, Kara Swisher, citing insider sources, reported that Altman’s DevDay keynote and the introduction of the GPT Store had been a precipitating factor in the firing (though not the key factor) due to internal disagreements over the company’s more consumer-focused direction since the launch of ChatGPT.
With that history in mind—and the focus on developers above all else for this event—perhaps the company decided it was best to let Altman step away from the keynote and let OpenAI’s technology become the key focus of the event instead of him. We are purely speculating on that point, but OpenAI has certainly experienced its share of drama over the past month, so it may have been a prudent decision.
Despite the lack of a keynote, Altman is present at DevDay San Francisco today and is scheduled to do a closing “fireside chat” at the end (which has not yet happened as of this writing). Also, Altman made a statement about DevDay on X, noting that since last year’s DevDay, OpenAI had seen some dramatic changes (literally):
From last devday to this one:
*98% decrease in cost per token from GPT-4 to 4o mini
*50x increase in token volume across our systems
*excellent model intelligence progress
*(and a little bit of drama along the way)
In a follow-up tweet delivered in his trademark lowercase, Altman shared a forward-looking message that referenced the company’s quest for human-level AI, often called AGI: “excited to make even more progress from this devday to the next one,” he wrote. “the path to agi has never felt more clear.”
The Apple Park campus in Cupertino, California.
A few weeks back, it was reported that Apple was exploring investing in OpenAI, the company that makes ChatGPT, the GPT model, and other popular generative AI products. Now, a new report from The Wall Street Journal claims that Apple has abandoned those plans.
The article simply says Apple “fell out of the talks to join the round.” The round is expected to close in a week or so and may raise as much as $6.5 billion for the growing Silicon Valley company. Had Apple gone through with the move, it would have been a rare event—though not completely unprecedented—for Apple to invest in another company that size.
OpenAI is still expected to raise the funds it seeks from other sources. The report claims Microsoft is expected to invest around $1 billion in this round. Microsoft has already invested substantial sums in OpenAI, whose GPT models power Microsoft AI tools like Copilot and Bing Chat.
Nvidia is also a likely major investor in this round.
Apple will soon offer limited ChatGPT integration in an upcoming iOS update, though it plans to support additional models like Google’s Gemini further down the line, offering users a choice similar to how they pick a default search engine or web browser.
OpenAI has been on a successful tear with its products and models, establishing itself as a leader in the rapidly growing industry. However, it has also been beset by drama and controversy: several key leaders recently departed the company abruptly, and it is shifting from a research organization governed by a nonprofit to a for-profit company under CEO Sam Altman. Also, former Apple design lead Jony Ive is confirmed to be working on a new AI product of some kind.
But The Wall Street Journal did not specify which (if any) of these facts are reasons why Apple chose to back out of the investment.
Mira Murati, Chief Technology Officer of OpenAI, speaks during The Wall Street Journal’s WSJ Tech Live Conference in Laguna Beach, California on October 17, 2023.
On Wednesday, OpenAI Chief Technology Officer Mira Murati announced she is leaving the company in a surprise resignation shared on the social network X. Murati joined OpenAI in 2018, serving for six and a half years in various leadership roles, most recently as CTO.
“After much reflection, I have made the difficult decision to leave OpenAI,” she wrote in a letter to the company’s staff. “While I’ll express my gratitude to many individuals in the coming days, I want to start by thanking Sam and Greg for their trust in me to lead the technical organization and for their support throughout the years,” she continued, referring to OpenAI CEO Sam Altman and President Greg Brockman. “There’s never an ideal time to step away from a place one cherishes, yet this moment feels right.”
At OpenAI, Murati was in charge of overseeing the company’s technical strategy and product development, including the launch and improvement of DALL-E, Codex, Sora, and the ChatGPT platform, while also leading research and safety teams. In public appearances, Murati often spoke about ethical considerations in AI development.
Murati’s decision to leave the company comes when OpenAI finds itself at a major crossroads with a plan to alter its nonprofit structure. According to a Reuters report published today, OpenAI is working to reorganize its core business into a for-profit benefit corporation, removing control from its nonprofit board. The move, which would give CEO Sam Altman equity in the company for the first time, could potentially value OpenAI at $150 billion.
Murati stated her decision to leave was driven by a desire to “create the time and space to do my own exploration,” though she didn’t specify her future plans.
Proud of safety and research work
OpenAI CTO Mira Murati seen debuting GPT-4o during OpenAI’s Spring Update livestream on May 13, 2024.
In her departure announcement, Murati highlighted recent developments at OpenAI, including innovations in speech-to-speech technology and the release of OpenAI o1. She cited what she considers the company’s progress in safety research and the development of “more robust, aligned, and steerable” AI models.
Altman replied to Murati’s tweet directly, expressing gratitude for Murati’s contributions and her personal support during challenging times, likely referring to the tumultuous period in November 2023 when the OpenAI board of directors briefly fired Altman from the company.
“It’s hard to overstate how much Mira has meant to OpenAI, our mission, and to us all personally,” he wrote. “I feel tremendous gratitude towards her for what she has helped us build and accomplish, but I most of all feel personal gratitude towards her for the support and love during all the hard times. I am excited for what she’ll do next.”
Not the first major player to leave
An image Ilya Sutskever tweeted with his OpenAI resignation announcement. From left to right: OpenAI Chief Scientist Jakub Pachocki, President Greg Brockman (on leave), Sutskever (now former Chief Scientist), CEO Sam Altman, and soon-to-be-former CTO Mira Murati.
With Murati’s exit, Altman remains one of the few long-standing senior leaders at OpenAI, which has seen significant shuffling in its upper ranks recently. In May 2024, former Chief Scientist Ilya Sutskever left to form his own company, Safe Superintelligence, Inc. (SSI), focused on building AI systems that far surpass humans in logical capabilities. That came just six months after Sutskever’s involvement in the temporary removal of Altman as CEO.
John Schulman, an OpenAI co-founder, departed earlier in 2024 to join rival AI firm Anthropic, and in August, OpenAI President Greg Brockman announced he would be taking a temporary sabbatical until the end of the year.
The leadership shuffles have raised questions among critics about the internal dynamics at OpenAI under Altman and the state of OpenAI’s future research path, which has been aiming toward creating artificial general intelligence (AGI)—a hypothetical technology that could potentially perform human-level intellectual work.
“Question: why would key people leave an organization right before it was just about to develop AGI?” asked xAI developer Benjamin De Kraker in a post on X just after Murati’s announcement. “This is kind of like quitting NASA months before the moon landing,” he wrote in a reply. “Wouldn’t you wanna stick around and be part of it?”
Altman mentioned that more information about transition plans would be forthcoming, leaving questions about who will step into Murati’s role and how OpenAI will adapt to this latest leadership change as the company is poised to adopt a corporate structure that may consolidate more power directly under Altman. “We’ll say more about the transition plans soon, but for now, I want to take a moment to just feel thanks,” Altman wrote.
An ABC handout promotional image for “AI and the Future of Us: An Oprah Winfrey Special.”
On Thursday, ABC announced an upcoming TV special titled, “AI and the Future of Us: An Oprah Winfrey Special.” The one-hour show, set to air on September 12, aims to explore AI’s impact on daily life and will feature interviews with figures in the tech industry, like OpenAI CEO Sam Altman and Bill Gates. Soon after the announcement, some AI critics began questioning the guest list and the framing of the show in general.
“Sure is nice of Oprah to host this extended sales pitch for the generative AI industry at a moment when its fortunes are flagging and the AI bubble is threatening to burst,” tweeted author Brian Merchant, who frequently criticizes generative AI technology in op-eds, social media, and through his “Blood in the Machine” AI newsletter.
“The way the experts who are not experts are presented as such what a train wreck,” replied artist Karla Ortiz, who is a plaintiff in a lawsuit against several AI companies. “There’s still PLENTY of time to get actual experts and have a better discussion on this because yikes.”
The trailer for Oprah’s upcoming TV special on AI.
On Friday, Ortiz created a lengthy viral thread on X that detailed her potential issues with the program, writing, “This event will be the first time many people will get info on Generative AI. However it is shaping up to be a misinformed marketing event starring vested interests (some who are under a litany of lawsuits) who ignore the harms GenAi inflicts on communities NOW.”
Critics of generative AI like Ortiz question the utility of the technology, its perceived environmental impact, and what they see as blatant copyright infringement. In training AI language models, tech companies like Meta, Anthropic, and OpenAI commonly use copyrighted material gathered without license or owner permission. OpenAI claims that the practice is “fair use.”
Oprah’s guests
According to ABC, the upcoming special will feature “some of the most important and powerful people in AI,” which appears to roughly translate to “famous and publicly visible people related to tech.” Microsoft co-founder Bill Gates, who stepped down as Microsoft CEO 24 years ago, will appear on the show to explore the “AI revolution coming in science, health, and education,” ABC says, and warn of “the once-in-a-century type of impact AI may have on the job market.”
As a guest representing ChatGPT-maker OpenAI, Sam Altman will explain “how AI works in layman’s terms” and discuss “the immense personal responsibility that must be borne by the executives of AI companies.” Karla Ortiz specifically criticized Altman in her thread by saying, “There are far more qualified individuals to speak on what GenAi models are than CEOs. Especially one CEO who recently said AI models will ‘solve all physics.’ That’s an absurd statement and not worthy of your audience.”
In a nod to present-day content creation, YouTube creator Marques Brownlee will appear on the show and reportedly walk Winfrey through “mind-blowing demonstrations of AI’s capabilities.”
Brownlee’s involvement received special attention from some critics online. “Marques Brownlee should be absolutely ashamed of himself,” tweeted PR consultant and frequent AI critic Ed Zitron, who heaps scorn on generative AI in his own newsletter. “What a disgraceful thing to be associated with.”
Other guests include Tristan Harris and Aza Raskin from the Center for Humane Technology, who aim to highlight “emerging risks posed by powerful and superintelligent AI,” an existential risk topic that has its own critics. And FBI Director Christopher Wray will reveal “the terrifying ways criminals and foreign adversaries are using AI,” while author Marilynne Robinson will reflect on “AI’s threat to human values.”
Going only by the publicized guest list, it appears that Oprah does not plan to give voice to prominent non-doomer critics of AI. “This is really disappointing @Oprah and frankly a bit irresponsible to have a one-sided conversation on AI without informed counterarguments from those impacted,” tweeted TV producer Theo Priestley.
Others on the social media network shared similar criticism about a perceived lack of balance in the guest list, including Dr. Margaret Mitchell of Hugging Face. “It could be beneficial to have an AI Oprah follow-up discussion that responds to what happens in [the show] and unpacks generative AI in a more grounded way,” she said.
Oprah’s AI special will air on September 12 on ABC (and a day later on Hulu) in the US, and it will likely elicit further responses from the critics mentioned above. But perhaps that’s exactly how Oprah wants it: “It may fascinate you or scare you,” Winfrey said in a promotional video for the special. “Or, if you’re like me, it may do both. So let’s take a breath and find out more about it.”
Sam Altman, chief executive officer of OpenAI, during an interview at Bloomberg House on the opening day of the World Economic Forum (WEF) in Davos, Switzerland, on Tuesday, Jan. 16, 2024.
OpenAI is facing increasing pressure to prove it’s not hiding AI risks after whistleblowers alleged to the US Securities and Exchange Commission (SEC) that the AI company’s non-disclosure agreements had illegally silenced employees from disclosing major safety concerns to lawmakers.
In a letter to OpenAI yesterday, Senator Chuck Grassley (R-Iowa) demanded evidence that OpenAI is no longer requiring agreements that could be “stifling” its “employees from making protected disclosures to government regulators.”
Specifically, Grassley asked OpenAI to produce current employment, severance, non-disparagement, and non-disclosure agreements to reassure Congress that contracts don’t discourage disclosures. That’s critical, Grassley said, because whistleblowers who expose emerging threats will be needed to help shape effective AI policies that safeguard against existential AI risks as the technology advances.
Grassley has apparently twice requested these records without a response from OpenAI, his letter said. And so far, OpenAI has not responded to the most recent request to send documents, Grassley’s spokesperson, Clare Slattery, told The Washington Post.
“It’s not enough to simply claim you’ve made ‘updates,’” Grassley said in a statement provided to Ars. “The proof is in the pudding. Altman needs to provide records and responses to my oversight requests so Congress can accurately assess whether OpenAI is adequately protecting its employees and users.”
In addition to requesting OpenAI’s recently updated employee agreements, Grassley pushed OpenAI to be more transparent about the total number of requests it has received from employees seeking to make federal disclosures since 2023. The senator wants to know what information employees wanted to disclose to officials and whether OpenAI actually approved their requests.
Along the same lines, Grassley asked OpenAI to confirm how many investigations the SEC has opened into OpenAI since 2023.
Together, these documents would shed light on whether OpenAI employees are potentially still being silenced from making federal disclosures, what kinds of disclosures OpenAI denies, and how closely the SEC is monitoring OpenAI’s seeming efforts to hide safety risks.
“It is crucial OpenAI ensure its employees can provide protected disclosures without illegal restrictions,” Grassley wrote in his letter.
He has requested a response from OpenAI by August 15 so that “Congress may conduct objective and independent oversight on OpenAI’s safety protocols and NDAs.”
OpenAI did not immediately respond to Ars’ request for comment.
On X, Altman wrote that OpenAI has taken steps to increase transparency, including “working with the US AI Safety Institute on an agreement where we would provide early access to our next foundation model so that we can work together to push forward the science of AI evaluations.” He also confirmed that OpenAI wants “current and former employees to be able to raise concerns and feel comfortable doing so.”
“This is crucial for any company, but for us especially and an important part of our safety plan,” Altman wrote. “In May, we voided non-disparagement terms for current and former employees and provisions that gave OpenAI the right (although it was never used) to cancel vested equity. We’ve worked hard to make it right.”
In July, whistleblowers told the SEC that OpenAI should be required to produce not just current employee contracts, but all contracts containing a non-disclosure agreement, to ensure that OpenAI hasn’t had a past or ongoing practice of obscuring AI safety risks. They want all current and former employees to be notified of any contract that included an illegal NDA and for OpenAI to be fined for every illegal contract.
Ilya Sutskever physically gestures as OpenAI CEO Sam Altman looks on at Tel Aviv University on June 5, 2023.
On Wednesday, former OpenAI Chief Scientist Ilya Sutskever announced he is forming a new company called Safe Superintelligence, Inc. (SSI) with the goal of safely building “superintelligence,” which is a hypothetical form of artificial intelligence that surpasses human intelligence, possibly in the extreme.
“We will pursue safe superintelligence in a straight shot, with one focus, one goal, and one product,” wrote Sutskever on X. “We will do it through revolutionary breakthroughs produced by a small cracked team.”
Sutskever was a founding member of OpenAI and formerly served as the company’s chief scientist. Two others are joining Sutskever at SSI initially: Daniel Levy, who formerly headed the Optimization Team at OpenAI, and Daniel Gross, an AI investor who worked on machine learning projects at Apple between 2013 and 2017. The trio posted a statement on the company’s new website.
A screen capture of Safe Superintelligence’s initial formation announcement captured on June 20, 2024.
Sutskever and several of his co-workers resigned from OpenAI in May, six months after Sutskever played a key role in ousting OpenAI CEO Sam Altman, who later returned. While Sutskever did not publicly complain about OpenAI after his departure—and OpenAI executives such as Altman wished him well on his new adventures—another resigning member of OpenAI’s Superalignment team, Jan Leike, publicly complained that “over the past years, safety culture and processes [had] taken a backseat to shiny products” at OpenAI. Leike joined OpenAI competitor Anthropic later in May.
A nebulous concept
OpenAI is currently seeking to create AGI, or artificial general intelligence, which would hypothetically match human intelligence at performing a wide variety of tasks without specific training. Sutskever hopes to jump beyond that in a straight moonshot attempt, with no distractions along the way.
“This company is special in that its first product will be the safe superintelligence, and it will not do anything else up until then,” said Sutskever in an interview with Bloomberg. “It will be fully insulated from the outside pressures of having to deal with a large and complicated product and having to be stuck in a competitive rat race.”
During his former job at OpenAI, Sutskever was part of the “Superalignment” team studying how to “align” (shape the behavior of) this hypothetical form of AI, sometimes called “ASI” for “artificial super intelligence,” to be beneficial to humanity.
As you can imagine, it’s difficult to align something that does not exist, so Sutskever’s quest has met skepticism at times. On X, University of Washington computer science professor (and frequent OpenAI critic) Pedro Domingos wrote, “Ilya Sutskever’s new company is guaranteed to succeed, because superintelligence that is never achieved is guaranteed to be safe.”
Much like AGI, superintelligence is a nebulous term. Since the mechanics of human intelligence are still poorly understood—and since human intelligence is difficult to quantify or define because there is no one set type of human intelligence—identifying superintelligence when it arrives may be tricky.
Already, computers far surpass humans in many forms of information processing (such as basic math), but are they superintelligent? Many proponents of superintelligence imagine a sci-fi scenario of an “alien intelligence” with a form of sentience that operates independently of humans, and that is more or less what Sutskever hopes to achieve and control safely.
“You’re talking about a giant super data center that’s autonomously developing technology,” he told Bloomberg. “That’s crazy, right? It’s the safety of that that we want to contribute to.”