
OpenAI holds back wide release of voice-cloning tech due to misuse concerns

Voice synthesis has come a long way since 1978’s Speak & Spell toy, which once wowed people with its state-of-the-art ability to read words aloud using an electronic voice. Now, using deep-learning AI models, software can create not only realistic-sounding voices, but also convincingly imitate existing voices using small samples of audio.

Along those lines, OpenAI just announced Voice Engine, a text-to-speech AI model for creating synthetic voices based on a 15-second segment of recorded audio. It has provided audio samples of the Voice Engine in action on its website.

Once a voice is cloned, a user can input text into the Voice Engine and get an AI-generated voice result. But OpenAI is not ready to widely release its technology yet. The company initially planned to launch a pilot program for developers to sign up for the Voice Engine API earlier this month. But after further consideration of the ethical implications, the company decided to scale back its ambitions for now.

“In line with our approach to AI safety and our voluntary commitments, we are choosing to preview but not widely release this technology at this time,” the company writes. “We hope this preview of Voice Engine both underscores its potential and also motivates the need to bolster societal resilience against the challenges brought by ever more convincing generative models.”

Voice cloning tech in general is not particularly new—we’ve covered several AI voice synthesis models since 2022, and the tech is active in the open source community with packages like OpenVoice and XTTSv2. But the idea that OpenAI is inching toward letting anyone use its particular brand of voice tech is notable. And in some ways, the company’s reticence to release it fully might be the bigger story.
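
To give a sense of how low the barrier already is, here is a minimal sketch of voice cloning with the open-source Coqui TTS package's XTTS v2 model, one of the packages mentioned above. The file paths are placeholders, and exact arguments may vary between package versions.

```python
# pip install TTS  (Coqui TTS, which distributes the XTTS v2 model)
from TTS.api import TTS

# Load the multilingual XTTS v2 voice-cloning model (downloads weights on first run).
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")

# Clone a voice from a short reference clip and speak arbitrary text with it.
tts.tts_to_file(
    text="Hello, this sentence is spoken in a cloned voice.",
    speaker_wav="reference_clip.wav",  # placeholder: a short recording of the target voice
    language="en",
    file_path="cloned_output.wav",
)
```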

OpenAI says that benefits of its voice technology include providing reading assistance through natural-sounding voices, enabling global reach for creators by translating content while preserving native accents, supporting non-verbal individuals with personalized speech options, and assisting patients in recovering their own voice after speech-impairing conditions.

But it also means that anyone with 15 seconds of someone’s recorded voice could effectively clone it, and that has obvious implications for potential misuse. Even if OpenAI never widely releases its Voice Engine, the ability to clone voices has already caused trouble in society through phone scams where someone imitates a loved one’s voice and election campaign robocalls featuring cloned voices from politicians like Joe Biden.

Also, researchers and reporters have shown that voice-cloning technology can be used to break into bank accounts that use voice authentication (such as Chase’s Voice ID), which prompted Sen. Sherrod Brown (D-Ohio), the chairman of the US Senate Committee on Banking, Housing, and Urban Affairs, to send a letter to the CEOs of several major banks in May 2023 to inquire about the security measures banks are taking to counteract AI-powered risks.


Nvidia unveils Blackwell B200, the “world’s most powerful chip” designed for AI

There’s no knowing where we’re rowing —

208B transistor chip can reportedly reduce AI cost and energy consumption by up to 25x.

The GB200 “superchip” covered with a fanciful blue explosion. (Credit: Nvidia / Benj Edwards)

On Monday, Nvidia unveiled the Blackwell B200 tensor core chip—the company’s most powerful single-chip GPU, with 208 billion transistors—which Nvidia claims can reduce AI inference operating costs (such as running ChatGPT) and energy consumption by up to 25 times compared to the H100. The company also unveiled the GB200, a “superchip” that combines two B200 chips and a Grace CPU for even more performance.

The news came as part of Nvidia’s annual GTC conference, which is taking place this week at the San Jose Convention Center. Nvidia CEO Jensen Huang delivered the keynote Monday afternoon. “We need bigger GPUs,” Huang said during his keynote. The Blackwell platform will allow the training of trillion-parameter AI models that will make today’s generative AI models look rudimentary in comparison, he said. For reference, OpenAI’s GPT-3, launched in 2020, included 175 billion parameters. Parameter count is a rough indicator of AI model complexity.
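
As a rough illustration of why parameter count drives the demand for bigger GPUs, here is a back-of-envelope sketch (my own arithmetic, not Nvidia's) of the memory needed just to hold model weights at 16-bit precision, ignoring activations, optimizer state, and KV caches:

```python
def weight_memory_gb(num_params: float, bytes_per_param: int = 2) -> float:
    """Memory in gigabytes to store the weights alone (2 bytes/param = FP16/BF16)."""
    return num_params * bytes_per_param / 1e9

print(weight_memory_gb(175e9))  # GPT-3 scale: ~350 GB, already beyond any single GPU
print(weight_memory_gb(1e12))   # trillion-parameter scale: ~2,000 GB, hence multi-GPU systems
```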

Nvidia named the Blackwell architecture after David Harold Blackwell, a mathematician who specialized in game theory and statistics and was the first Black scholar inducted into the National Academy of Sciences. The platform introduces six technologies for accelerated computing, including a second-generation Transformer Engine, fifth-generation NVLink, RAS Engine, secure AI capabilities, and a decompression engine for accelerated database queries.

Press photo of the Grace Blackwell GB200 chip, which combines two B200 GPUs with a Grace CPU into one chip.

Several major organizations, such as Amazon Web Services, Dell Technologies, Google, Meta, Microsoft, OpenAI, Oracle, Tesla, and xAI, are expected to adopt the Blackwell platform, and Nvidia’s press release is replete with canned quotes from tech CEOs (key Nvidia customers) like Mark Zuckerberg and Sam Altman praising the platform.

GPUs, once only designed for gaming acceleration, are especially well suited for AI tasks because their massively parallel architecture accelerates the immense number of matrix multiplication tasks necessary to run today’s neural networks. With the dawn of new deep learning architectures in the 2010s, Nvidia found itself in an ideal position to capitalize on the AI revolution and began designing specialized GPUs just for the task of accelerating AI models.
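
A minimal PyTorch sketch of the kind of operation involved; the matrix sizes are arbitrary, and the point is only that the same matrix multiply can be dispatched to a GPU's parallel cores with a one-line change:

```python
import torch

# A single large matrix multiplication, the core workload inside neural network layers.
a = torch.randn(4096, 4096)
b = torch.randn(4096, 4096)

cpu_result = a @ b  # computed on the CPU

if torch.cuda.is_available():
    a_gpu, b_gpu = a.cuda(), b.cuda()
    gpu_result = a_gpu @ b_gpu   # same math, executed across thousands of parallel GPU cores
    torch.cuda.synchronize()     # wait for the asynchronous GPU kernel to finish
```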

Nvidia’s data center focus has made the company wildly rich and valuable, and these new chips continue the trend. Nvidia’s gaming GPU revenue ($2.9 billion in the last quarter) is dwarfed by its data center revenue ($18.4 billion), and that trend shows no signs of stopping.

A beast within a beast

Press photo of the Nvidia GB200 NVL72 data center computer system.

The aforementioned Grace Blackwell GB200 chip arrives as a key part of the new NVIDIA GB200 NVL72, a multi-node, liquid-cooled data center computer system designed specifically for AI training and inference tasks. It combines 36 GB200s (that’s 72 B200 GPUs and 36 Grace CPUs total), interconnected by fifth-generation NVLink, which links chips together to multiply performance.

A specification chart for the Nvidia GB200 NVL72 system.

“The GB200 NVL72 provides up to a 30x performance increase compared to the same number of NVIDIA H100 Tensor Core GPUs for LLM inference workloads and reduces cost and energy consumption by up to 25x,” Nvidia said.

That kind of speed-up could potentially save money and time while running today’s AI models, but it will also allow for more complex AI models to be built. Generative AI models—like the kind that power Google Gemini and AI image generators—are famously computationally hungry. Shortages of compute power have widely been cited as holding back progress and research in the AI field, and the search for more compute has led to figures like OpenAI CEO Sam Altman trying to broker deals to create new chip foundries.

While Nvidia’s claims about the Blackwell platform’s capabilities are significant, it’s worth noting that real-world performance and adoption remain to be seen as organizations begin to implement and use the platform. Competitors like Intel and AMD are also looking to grab a piece of Nvidia’s AI pie.

Nvidia says that Blackwell-based products will be available from various partners starting later this year.


Apple may hire Google to power new iPhone AI features using Gemini—report

Bake a cake as fast as you can —

With Apple’s own AI tech lagging behind, the firm looks for a fallback solution.


On Monday, Bloomberg reported that Apple is in talks to license Google’s Gemini model to power AI features like Siri in a future iPhone software update coming later in 2024, according to people familiar with the situation. Apple has also reportedly conducted similar talks with ChatGPT maker OpenAI.

The potential integration of Google Gemini into iOS 18 could bring a range of new cloud-based (off-device) AI-powered features to Apple’s smartphone, including image creation or essay writing based on simple prompts. However, the terms and branding of the agreement have not yet been finalized, and the implementation details remain unclear. The companies are unlikely to announce any deal until Apple’s annual Worldwide Developers Conference in June.

Gemini could also bring new capabilities to Apple’s widely criticized voice assistant, Siri, which trails newer AI assistants powered by large language models (LLMs) in understanding and responding to complex questions. Rumors of Apple’s own internal frustration with Siri—and potential remedies—have been kicking around for some time. In January, 9to5Mac revealed that Apple had been conducting tests with a beta version of iOS 17.4 that used OpenAI’s ChatGPT API to power Siri.
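
For context, integrations like the reported Siri test boil down to sending the user's request to a cloud LLM endpoint. Below is a minimal sketch using OpenAI's Python client; the model name and prompts are placeholders, and nothing here reflects Apple's actual implementation.

```python
# pip install openai
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Forward a spoken request (already transcribed to text) to a cloud-hosted LLM.
response = client.chat.completions.create(
    model="gpt-4",  # placeholder model name
    messages=[
        {"role": "system", "content": "You are a concise voice assistant."},
        {"role": "user", "content": "What's a good recipe for a quick dinner?"},
    ],
)
print(response.choices[0].message.content)
```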

As we have previously reported, Apple has also been developing its own AI models, including a large language model codenamed Ajax and a basic chatbot called Apple GPT. However, the company’s LLM technology is said to lag behind that of its competitors, making a partnership with Google or another AI provider a more attractive option.

Google launched Gemini, a language-based AI assistant similar to ChatGPT, in December and has updated it several times since. Many industry experts consider the larger Gemini models to be roughly as capable as OpenAI’s GPT-4 Turbo, which powers the subscription versions of ChatGPT. Until the recent emergence of Gemini Ultra and Claude 3, OpenAI’s top model held a fairly wide lead in perceived LLM capability.

The potential partnership between Apple and Google could significantly impact the AI industry, as Apple’s platform represents more than 2 billion active devices worldwide. If the agreement gets finalized, it would build upon the existing search partnership between the two companies, which has seen Google pay Apple billions of dollars annually to make its search engine the default option on iPhones and other Apple devices.

However, Bloomberg reports that the potential partnership between Apple and Google is likely to draw scrutiny from regulators, as the companies’ current search deal is already the subject of a lawsuit by the US Department of Justice. The European Union is also pressuring Apple to make it easier for consumers to change their default search engine away from Google.

With so much potential money on the line, Apple choosing Google for its cloud AI work would be a major loss for OpenAI’s effort to bring its technology into the mainstream and reach a market of billions of users. Even so, any deal with Google or OpenAI may be a temporary fix until Apple can get its own LLM-based AI technology up to speed.


OpenAI: The Board Expands

It is largely over.

The investigation into events has concluded, finding no wrongdoing anywhere.

The board has added four new board members, including Sam Altman. There will still be further additions.

Sam Altman now appears firmly back in control of OpenAI.

None of the new board members has previously been mentioned on this blog, or was known to me at all.

They are mysteries with respect to AI. As far as I can tell, all three lack technical understanding of AI and have no known prior opinions on, or engagement with, AI, AGI, or AI safety of any kind, including existential risk.

Microsoft and investors have indeed, so far, come away without a seat. The new members also, however, lack known strong bonds to Altman, so this is not obviously a board fully under his control if there were to be another crisis. They now have the gravitas the old board lacked. One could reasonably expect the new board to be concerned with ‘AI Ethics’ broadly construed in a way that could conflict with Altman, or with diversity, equity and inclusion.

One must also remember that the public is very concerned about AI existential risk when the topic is brought up, so ‘hire people with other expertise who have not looked at AI in detail yet’ does not mean the new board members will dismiss such concerns, although it could also be that they were picked because they don’t care. We will see.

Prior to the report summary and board expansion announcements, The New York Times put out an article leaking potentially key information, in ways that looked like an advance leak from at least one former board member, claiming that Mira Murati and Ilya Sutskever were both major sources of information driving the board to fire Sam Altman, while not mentioning other concerns. Mira Murati has strongly denied these claims and has the publicly expressed confidence and thanks of Sam Altman.

I continue to believe that my previous assessments of what happened were broadly accurate, with new events providing additional clarity. My assessments were centrally offered in OpenAI: The Battle of the Board, which outlines my view of what happened. Other information is also in OpenAI: Facts From a Weekend and OpenAI: Altman Returns.

This post covers recent events, completing the story arc for now. There remain unanswered questions, in particular what will ultimately happen with Ilya Sutskever, and the views and actions of the new board members. We will wait and see.

The important question, as I have said from the beginning, is: Who is the new board?

We have the original three members, plus four more. Sam Altman is one very solid vote for Sam Altman. Who are the other three?

We’re announcing three new members to our Board of Directors as a first step towards our commitment to expansion: Dr. Sue Desmond-Hellmann, former CEO of the Bill and Melinda Gates Foundation, Nicole Seligman, former EVP and General Counsel at Sony Corporation and Fidji Simo, CEO and Chair of Instacart. Additionally, Sam Altman, CEO, will rejoin the OpenAI Board of Directors. 

Sue, Nicole and Fidji have experience in leading global organizations and navigating complex regulatory environments, including backgrounds in technology, nonprofit and board governance. They will work closely with current board members Adam D’Angelo, Larry Summers and Bret Taylor as well as Sam and OpenAI’s senior management. 

Bret Taylor, Chair of the OpenAI board, stated, “I am excited to welcome Sue, Nicole, and Fidji to the OpenAI Board of Directors. Their experience and leadership will enable the Board to oversee OpenAI’s growth, and to ensure that we pursue OpenAI’s mission of ensuring artificial general intelligence benefits all of humanity.”

Dr. Sue Desmond-Hellmann is a non-profit leader and physician. Dr. Desmond-Hellmann currently serves on the Boards of Pfizer and the President’s Council of Advisors on Science and Technology. She previously was a Director at Procter & Gamble, Meta (Facebook), and the Bill & Melinda Gates Medical Research Institute. She served as the Chief Executive Officer of the Bill & Melinda Gates Foundation from 2014 to 2020. From 2009-2014 she was Professor and Chancellor of the University of California, San Francisco (UCSF), the first woman to hold the position. She also previously served as President of Product Development at Genentech, where she played a leadership role in the development of the first gene-targeted cancer drugs.

Nicole Seligman is a globally recognized corporate and civic leader and lawyer. She currently serves on three public company corporate boards – Paramount Global, MeiraGTx Holdings PLC, and Intuitive Machines, Inc. Seligman held several senior leadership positions at Sony entities, including EVP and General Counsel at Sony Corporation, where she oversaw functions including global legal and compliance matters. She also served as President of Sony Entertainment, Inc., and simultaneously served as President of Sony Corporation of America. Seligman also currently holds nonprofit leadership roles at the Schwarzman Animal Medical Center and The Doe Fund in New York City. Previously, Seligman was a partner in the litigation practice at Williams & Connolly LLP in Washington, D.C., working on complex civil and criminal matters and counseling a wide range of clients, including President William Jefferson Clinton and Hillary Clinton. She served as a law clerk to Justice Thurgood Marshall on the Supreme Court of the United States.

Fidji Simo is a consumer technology industry veteran, having spent more than 15 years leading the operations, strategy and product development for some of the world’s leading businesses. She is the Chief Executive Officer and Chair of Instacart. She also serves as a member of the Board of Directors at Shopify. Prior to joining Instacart, Simo was Vice President and Head of the Facebook App. Over the last decade at Facebook, she oversaw the Facebook App, including News Feed, Stories, Groups, Video, Marketplace, Gaming, News, Dating, Ads and more. Simo founded the Metrodora Institute, a multidisciplinary medical clinic and research foundation dedicated to the care and cure of neuroimmune axis disorders and serves as President of the Metrodora Foundation.

This tells us who they are in some senses, and nothing in other important senses.

I did some quick investigation, including asking multiple LLMs and asking Twitter, about the new board members and the implications.

It is not good news.

The new board clearly looks like it represents an attempt to pivot:

  1. Towards legitimacy, legibility and credibility. Gravitas to outsiders.

  2. Towards legal and regulatory expertise, especially via Seligman.

  3. Towards traditional corporate concerns and profit maximization.

  4. Towards broadly ‘AI Ethics,’ emphasis on social impact, perhaps DEI.

  5. Away from people who understand the technology behind AI.

  6. Away from people concerned about existential risk.

I have nothing against any of these new board members. But neither do I have anything for them. I have perhaps never previously heard their names.

We do have this tiny indirect link to work with, I suppose, although it is rather generic praise indeed:

Otherwise this was the most positive thing anyone had to say overall; no one had anything more detailed than this in any direction:

Nathan Helm-Burger: Well, they all sound competent, charitably inclined, and agentic.

Hopefully they are also canny, imaginative, cautious, forward-thinking and able to extrapolate into the future… Those attributes are harder to judge from their bios.

The pivot away from any technological domain knowledge, towards people who know other areas instead, is the most striking. There is deep expertise in non-profits and big corporations, with legal and regulatory issues, with major technologies and so on. But these people (presumably) don’t know AI, and seem rather busy for getting up to speed on what, whether or not they realize it, is the most important job they will ever have. I don’t know their views on existential risk because none of them have, as far as I know, expressed such views at all. That seems not to be a concern here at all.

Contrast this with a board that included Toner, Sutskever and Brockman along with Altman. The board will not have anyone on it who can act as a sanity check on technical claims or risk assessments from Altman; perhaps D’Angelo will be the closest thing left to that. No one will be able to evaluate claims, express concerns, and ensure everyone has the necessary facts. It is not only Microsoft that got shut out.

So this seems quite bad. In a normal situation, if trouble does not find them, they will likely let Altman do whatever he wants. However, with only Altman as a true insider, if trouble does happen then the results will be harder to predict or control. Altman has in key senses won, but from his perspective he should worry he has perhaps unleashed a rather different set of monsters.

This is the negative case in a nutshell:

Jaeson Booker: My impression is they’re a bunch of corporate shills, with no knowledge of AI, but just there to secure business ties and use political/legal leverage.

And there was also this in response to the query in question:

Day to day, Altman has a free hand, these are a bunch of busy business people.

However, that is not the important purpose of the board. The board is not there to give you prestige or connections. Or rather, it is partly for that, but that is the trap that prestige and connections lay for us.

The purpose of the board is to control the company. The purpose of the board is to decide whether to fire the CEO, and to choose future iterations of the board.

The failure to properly understand this before is part of how things got to this point. If the same mistake is repeating itself, then so be it.

The board previously intended to have a final size of nine members. Early indications are that the board is likely to further expand this year.

I would like to see at least one person with strong technical expertise other than Sam Altman, and at least one strong advocate for existential risk concerns.

Altman would no doubt like to see Brockman come back, and to secure those slots for his loyalists generally as soon as possible. A key question is whether this board lets him do that.

One also notes that they appointed three women on International Women’s Day.

Everyone is relieved to put this formality behind them, Sam Altman most of all. Even if the investigation was never meant to go anywhere, it forced everyone involved to be careful. That danger has now passed.

Washington Post: In a summary OpenAI released of the findings from an investigation by the law firm WilmerHale into Altman’s ouster, the law firm found that the company’s previous board fired Altman because of a “breakdown in the relationship and loss of trust between the prior board and Mr. Altman.” Brockman, Altman’s close deputy, was removed from OpenAI’s board when the decision to fire the CEO was announced.

The firm did not find any problems when it came to OpenAI’s product safety, finances or its statements to investors, OpenAI said. The Securities and Exchange Commission is probing whether OpenAI misled its investors.

As part of the board announcement, Altman and Taylor held a short conference call with reporters. The two sat side by side against a red brick wall as Taylor explained the law firm’s review and how it found no evidence of financial or safety wrongdoing at the company. He referred to the CEO sitting next to him as “Mr. Altman,” then joked about the formality of the term.

“I’m pleased this whole thing is over,” Altman said. He said he was sorry for how he handled parts of his relationship with a prior board member. “I could have handled that situation with more grace and care. I apologize for that.”

Indeed, here is their own description of the investigation, in full, bold is mine:

On December 8, 2023, the Special Committee retained WilmerHale to conduct a review of the events concerning the November 17, 2023 removal of Sam Altman and Greg Brockman from the OpenAI Board of Directors and Mr. Altman’s termination as CEO. WilmerHale reviewed more than 30,000 documents; conducted dozens of interviews, including of members of OpenAI’s prior Board, OpenAI executives, advisors to the prior Board, and other pertinent witnesses; and evaluated various corporate actions.

The Special Committee provided WilmerHale with the resources and authority necessary to conduct a comprehensive review. Many OpenAI employees, as well as current and former Board members, cooperated with the review process. WilmerHale briefed the Special Committee several times on the progress and conclusions of the review.

WilmerHale evaluated management and governance issues that had been brought to the prior Board’s attention, as well as additional issues that WilmerHale identified in the course of its review. WilmerHale found there was a breakdown in trust between the prior Board and Mr. Altman that precipitated the events of November 17.

WilmerHale reviewed the public post issued by the prior Board on November 17 and concluded that the statement accurately recounted the prior Board’s decision and rationales. WilmerHale found that the prior Board believed at the time that its actions would mitigate internal management challenges and did not anticipate that its actions would destabilize the Company. WilmerHale also found that the prior Board’s decision did not arise out of concerns regarding product safety or security, the pace of development, OpenAI’s finances, or its statements to investors, customers, or business partners. Instead, it was a consequence of a breakdown in the relationship and loss of trust between the prior Board and Mr. Altman. WilmerHale found the prior Board implemented its decision on an abridged timeframe, without advance notice to key stakeholders, and without a full inquiry or an opportunity for Mr. Altman to address the prior Board’s concerns. WilmerHale found that the prior Board acted within its broad discretion to terminate Mr. Altman, but also found that his conduct did not mandate removal.

After reviewing the WilmerHale findings, the Special Committee recommended to the full Board that it endorse the November 21 decision to rehire Mr. Altman and Mr. Brockman. With knowledge of the review’s findings, the Special Committee expressed its full confidence in Mr. Altman and Mr. Brockman’s ongoing leadership of OpenAI.

The Special Committee is pleased to conclude this review and looks forward to continuing with the important work of OpenAI.

And consider this tidbit from The Washington Post:

One person familiar with the investigation who had been interviewed by the firm said WilmerHale did not offer a way to confidentially share relevant information.

If Altman is doing the types of things people say he is doing, and you do not offer a confidential way to share relevant information, that tells me you are not so interested in finding wrongdoing by Sam Altman.

Taken together with the inability to offer confidentiality, a summary statement could hardly scream more loudly the message ‘we wanted this all to go away quietly and had no interest in a real investigation if we can avoid one.’ There was still the possibility that one could not be avoided, if the gun was sufficiently openly smoking. That turned out not to be the case. So instead, they are exonerating the board in theory, saying it messed up in practice, and moving on.

One detail that I bolded is that the board did not anticipate that firing Altman would destabilize OpenAI, that they thought he would not fight back. If true, then in hindsight this looks like a truly epic error on their part.

But what if it wasn’t, from their perspective at the time? Altman had a very real choice to make.

  1. Destabilize OpenAI and risk its disintegration to fight for his job, in a way that very rarely happens when CEOs are fired by their boards. Consider the removals at Uber and WeWork.

  2. Do what he said he would do at the initial board meeting and help with the transition, then move on to his next thing. Maybe raise a ton of money for chip factories or energy production, or a rival AI company. Plenty of people would have rushed to fund him, and then he’d have founder equity.

Altman decided to fight back in a way rarely seen, and in a way he did not do at YC when he was ejected there. He won, but there was real risk that OpenAI could have fallen apart. He was counting on the board both to botch the fight, and then to cave rather than let OpenAI potentially fail. And yes, he was right, but that does not mean it wasn’t a gamble, or that he was sure to make it.

I still consider this a massive error by the board, for five reasons.

  1. Character is fate. Altman has a history and a reputation of relishing such fights and excelling in them. Altman was going to fight if there was a way to fight.

  2. The stakes are so high. OpenAI potentially is fate-of-the-world level stakes.

  3. There was little lock-in. OpenAI is nothing without its people or relationship with Microsoft, in a way that is not true for most companies.

  4. Altman had no equity. He loses nothing if the company is destroyed.

  5. The board knew its position was in other ways precarious: it lacked gravitas, the trust of the employees, and the full support of its new interim CEO or a secured strong pick for a permanent replacement, and it was unwilling to justify its actions. This was a much weaker hand than normal.

Yes, some of that is of course hindsight. There were still many reasons to realize this was a unique situation, where Altman would be uniquely poised to fight.

Early indications are that we are unlikely to see the full report this year.

Prior to the release of the official investigation results and announcement of expansion of the board, the New York Times reported new information, including some that seems hard not to have come from at least one board member. Gwern then offered perspective analyzing what it meant that this article was published while the final report was not yet announced.

Gwern’s take was that Altman had previously had a serious threat to worry about with the investigation, it was not clear he would be able to retain control. He was forced to be cautious, to avoid provocations.

We discussed this a bit, and I was convinced that Altman had more reason than I realized to be worried about this at the time. Even though Summers and Taylor were doing a mostly fake investigation, it was only mostly fake, and you never know what smoking guns might turn up. Plus, Altman could not be confident it was mostly fake until late in the game, because if Summers and Taylor were doing a real investigation they would have every reason not to tip their hand. Yes, no one could talk confidentially as far as Altman knew, but who knows what deals could be struck, or who might be willing to risk it anyway?

That, Gwern reported, is over now. Altman will be fully in charge. Mira Murati (who we now know was directly involved, not simply someone initially willing to be interim CEO) and Ilya Sutskever will have their roles reduced or leave. The initial board might not formally be under Altman’s control but will become so over time.

I do not give the investigation as much credit for being a serious threat as Gwern does, but certainly it was a good reason to exercise more caution in the interim to ensure that remained true. Also Mira Murati has denied the story, and has at least publicly retained the confidence and thanks of Altman.

Here is Gwern’s full comment:

Gwern: An OA update: it’s been quiet, but the investigation is about over. And Sam Altman won.

To recap, because I believe I haven’t been commenting much on this since December (this is my last big comment, skimming my LW profile):

WilmerHale was brought in to do the investigation. The tender offer, to everyone’s relief, went off. A number of embarrassing new details about Sam Altman have surfaced: in particular, about his enormous chip fab plan with substantial interest from giants like Temasek, and how the OA VC Fund turns out to be owned by Sam Altman (his explanation was it saved some paperwork and he just forgot to ever transfer it to OA).

Ilya Sutskever remains in hiding and lawyered up. There have been increasing reports the past week or two that the WilmerHale investigation was coming to a close – and I am told that the investigators were not offering confidentiality and the investigation was narrowly scoped to the firing. (There was also some OA drama with the Musk lawfare & the OA response, but aside from offering an object lesson in how not to redact sensitive information, it’s irrelevant and unimportant.)

The news today comes from the NYT leaking information from the final report: “Key OpenAI Executive [Mira Murati] Played a Pivotal Role in Sam Altman’s Ouster”.

The main theme of the article is clarifying Murati’s role: as I speculated, she was in fact telling the Board about Altman’s behavior patterns, and it fills in that she had gone further and written it up in a memo to him, and even threatened to leave with Sutskever.

But it reveals a number of other important claims: the investigation is basically done and wrapping up. The new board apparently has been chosen. Sutskever’s lawyer has gone on the record stating that Sutskever did not approach the board about Altman (?!). And it reveals the board confronted Altman over his ownership of the OA VC Fund (in addition to all his many other compromises of interest).

So, what does that mean?

I think that what these indirectly reveal is simple: Sam Altman has won. The investigation will exonerate him, and it is probably true that it was so narrowly scoped from the beginning that it was never going to plausibly provide grounds for his ouster. What these leaks are, are a loser’s spoiler move: the last gasps of the anti-Altman faction, reduced to leaking bits from the final report to friendly media (Metz/NYT) to annoy Altman, and strike first. They got some snippets out before the Altman faction shops around highly selective excerpts to their own friendly media outlets (the usual suspects – The Information, Kara Swisher) from the final officialized report to set the official record (at which point the rest of the confidential report is sent down the memory hole). Welp, it’s been an interesting few months, but l’affaire Altman is over. RIP.

Evidence, aside from simply asking who benefits from these particular leaks at the last minute, is that Sutskever remains in hiding & his lawyer is implausibly denying he had anything to do with it, while if you read Altman on social media, you’ll notice that he’s become ever more talkative since December, particularly in the last few weeks – positively glorying in the instant memeification of ‘$7 trillion’ – as has OA PR, and we have heard no more rhetoric about what an amazing team of execs OA has and how he’s so proud to have tutored them to replace him. Because there will be no need to replace him now. The only major reasons he will have to leave is if it’s necessary as a stepping stone to something even higher (eg. running the $7t chip fab consortium, running for US President) or something like a health issue.

So, upshot: I speculate that the report will exonerate Altman (although it can’t restore his halo, as it cannot & will not address things like his firing from YC which have been forced out into public light by this whole affair) and he will be staying as CEO and may be returning to the expanded board; the board will probably include some weak uncommitted token outsiders for their diversity and independence, but have an Altman plurality and we will see gradual selective attrition/replacement in favor of Altman loyalists until he has a secure majority robust to at least 1 flip and preferably 2. Having retaken irrevocable control of OA, further EA purges should be unnecessary, and Altman will probably refocus on the other major weakness exposed by the coup: the fact that his frenemy MS controls OA’s lifeblood. (The fact that MS was such a potent weapon for Altman in the fight is a feature while he’s outside the building, but a severe bug once he’s back inside.)

People are laughing at the ‘$7 trillion’. But Altman isn’t laughing. Those GPUs are life and death for OA now. And why should he believe he can’t do it? Things have always worked out for him before…

Predictions, if being a bit more quantitative will help clarify my speculations here: Altman will still be CEO of OA on June 1st (85%); the new OA board will include Altman (60%); Ilya Sutskever and Mira Murati will leave OA or otherwise take on some sort of clearly diminished role by year-end (90%, 75%); the full unexpurgated non-summary report will not be released (85%, may be hard to judge because it’d be easy to lie about); serious chip fab/Tigris efforts will continue (75%); Microsoft’s observer seat will be upgraded to a voting seat (25%).

Eric Newcomer (usually a bit more acute than this) asks “One thing that I find weird: OpenAI comms is giving very pro Altman statements when the board/WilmerHale are still conducting the investigation. Isn’t communications supposed to work for the company, not just the CEO? The board is in charge here still, no?” NARRATOR: “The board is not in charge still.”

That continues to be the actual key question in practice. Who controls the final board, and to what extent will that board be in charge?

We now know who that board will be, suggesting the answer is that Altman may not be in firm control of the board, but the board is likely to give him a free hand to do what he wants. For all practical purposes, Altman is back in charge until something happens.

The other obvious question is: What actually happened? What did Sam Altman do (or not do)? Why did the board try to fire Altman in the first place?

Whoever leaked to the Times, and their writers, have either a story or a theory.

From the NYT article: In October, Ms. Murati approached some members of the board and expressed concerns about Mr. Altman’s leadership, the people said.

She described what some considered to be Mr. Altman’s playbook, which included manipulating executives to get what he wanted. First, Ms. Murati said Mr. Altman would tell people what they wanted to hear to charm them and support his decisions. If they did not go along with his plans or if it took too long for them to make a decision, he would then try to undermine the credibility of people who challenged him, the people said.

Ms. Murati told the board she had previously sent a private memo to Mr. Altman outlining some of her concerns with his behavior and shared some details of the memo with the board, the people said.

Around the same time in October, Dr. Sutskever approached members of the board and expressed similar issues about Mr. Altman, the people said.

Some members of the board were concerned that Ms. Murati and Dr. Sutskever would leave the company if Mr. Altman’s behavior was not addressed. They also grew concerned the company would see an exodus of talent if top lieutenants left.

There were other factors that went into the decision. Some members were concerned about the creation of the OpenAI Startup Fund, a venture fund started by Mr. Altman.

You know what does not come up in the article? Any mention of AI safety or existential risk, any mention of Effective Altruism, or any of the board members other than Ilya, including Helen Toner, or any attempt by Altman to alter the board. This is all highly conspicuous by its absence, even the absence of any note of its absence.

There is also no mention of either Murati or Sutskever describing any particular incident. The things they describe are a pattern of behavior, a style, a way of being. Any given instance of it, any individual action, is easy to overlook and impossible to much condemn. It is only in the pattern of many such incidents that things emerge.

Or perhaps one can say, this was an isolated article about one particular incident. It was not a claim that anything else was unimportant, or that this was the central thing going on. And technically I think that is correct? The implication is still clear – if I left with the impression that this was being claimed, I assume other readers did as well.

But was that the actual main issue? Could this have been not only not about safety, as I have previously said, but also mostly not about the board? Perhaps the board was mostly trying to prevent key employees from leaving, and thought losing Altman was less disruptive?

I find the above story hard to believe as a central story. The idea that the board did this primarily to not lose Murati and Sutskever and the exodus they would cause does not really make sense. If you are afraid to lose Murati and Sutskever, I mean that would no doubt suck if it happened, but it is nothing compared to risking getting rid of Altman. Even in the best case, where he does leave quietly, you are likely going to lose a bunch of other valuable people, starting with Brockman.

It only makes sense as a reason if it is one of many different provocations, a straw that breaks the camel’s back. In which case, it could certainly have been a forcing function, a reason to act now instead of later.

Another argument against this being central is that it doesn’t match the board’s explanation, or their (and Mira Murati’s at the time) lack of a further explanation.

It certainly does not match Mira Murati’s current story. A reasonable response is ‘she would spin it this way no matter what, she has to,’ but this totally rhymes with the way The New York Times is known to operate. Her story is exactly compatible with NYT operating at the boundaries of the laws of bounded distrust, and using them to paint what they think is a negative picture of a situation at a tech company (one that they are actively suing):

Mira Murati: Governance of an institution is critical for oversight, stability, and continuity. I am happy that the independent review has concluded and we can all move forward united. It has been disheartening to witness the previous board’s efforts to scapegoat me with anonymous and misleading claims in a last-ditch effort to save face in the media. Here is the message I sent to my team last night. Onward.

Hi everyone,

Some of you may have seen a NYT article about me and the old board. I find it frustrating that some people seem to want to cause chaos as we are trying to move on, but to very briefly comment on the specific claims there:

Sam and I have a strong and productive partnership and I have not been shy about sharing feedback with him directly. I never reached out to the board to give feedback about Sam. However, when individual board members reached out directly to me for feedback about Sam, I provided it - all feedback Sam already knew. That does not in any way mean that I am responsible for or supported the old board’s actions, which I still find perplexing. I fought their actions aggressively and we all worked together to bring Sam back.

Really looking forward to get the board review done and put gossip behind us.

(back to work )

I went back and forth on that question, but yes I do think it is compatible, and indeed we can construct the kind of events that allow one to technically characterize events the way NYT does, and also allow Mira Murati to say what she said and not be lying. Or, of course, either side could indeed be lying.

What we do know is that the board tried to fire Sam Altman once, for whatever combination of reasons. That did not work, and the investigation seems to not have produced any smoking guns and won’t bring him down, although the 85% from Gwern that Altman remains in charge doesn’t seem different from what I would have said when the investigation started.

OpenAI comms in general are not under the board’s control. That is clear. That is how this works. Altman gets to do what he wants. If the board does not like it, they can fire him. Except, of course, they can’t do that, not without strong justification, and ‘the comms are not balanced’ won’t cut it. So Altman gives out the comms he wants, tries to raise trillions, and so on.

I read all this as Altman gambling that he can take full effective control again and that the new board won’t do anything about it. He is probably right, at least for now, but that is also the kind of risk Altman runs and the kind of game he plays. His strategy has been shown to alienate those around him and to cause trouble, as one would expect of someone pursuing important goals aggressively and taking risks. Which indeed is what Altman should do if he believes in what he is doing; you don’t succeed at this level by playing it safe, and you have to play the style and hand you are dealt. But he will doubtless take it farther than is wise. As Gwern puts it, why shouldn’t he, when things have always worked out before? People who push envelopes like this learn to keep pushing them until things blow up; they are not scared off for long by close calls.

Altman’s way of being and default strategy is all but designed to not reveal, to him or to us, any signs of trouble unless and until things do blow up. It is not a coincidence that the firing seemed to come out of nowhere the first time. One prediction is that, if Altman does get taken out internally, which I do not expect any time soon, it will once again look like it came out of nowhere.

Another conclusion this reinforces is the need for good communication, the importance of not hiding behind your legal advisors. At the time everyone thought Mira Murati was the reluctant temporary steward of the board and as surprised as anyone, which is also a position she is standing by now. If that was not true, it was rather important to say it was not true.

Then, as we all know, whatever Murati’s initial position was, both Sutskever and Murati ultimately backed Altman. What happened after that? Sutskever remains in limbo months later.

Washington Post: “I love Ilya. I think Ilya loves OpenAI,” [Altman] said, adding that he hopes to work with the AI scientist for many years to come.

As far as we know, Mira Murati is doing fine, and focused on the work, with Sam Altman’s full support.

If those who oppose Altman get shut out, one should note that this is what you would expect. We all know the fate of those who come at the king and miss. Some think that they can then use the correct emojis, turn on their allies to let the usurper take power, and then be spared. You will never be spared for selling out like that.

So watching what happens to Murati is the strongest sign of what Altman thinks happened, as well as whether Murati was indeed ready to leave, which together is likely our best evidence of what happened with respect to Murati.

Sam Altman (on Twitter): I’m very happy to welcome our new board members: Fidji Simo, Sue Desmond-Hellmann, and Nicole Seligman, and to continue to work with Bret, Larry, and Adam.

I’m thankful to everyone on our team for being resilient (a great OpenAI skill!) and staying focused during a challenging time.

In particular, I want to thank Mira for our strong partnership and her leadership during the drama, since, and in all the quiet moments where it really counts. And Greg, who plays a special leadership role without which OpenAI would simply not exist. Being in the trenches always sucks, but it’s much better being there with the two of them. I learned a lot from this experience.

One thing I’ll say now: when I believed a former board member was harming OpenAI through some of their actions, I should have handled that situation with more grace and care. I apologize for this, and I wish I had done it differently. I assume a genuine belief in the crucial importance of getting AGI right from everyone involved.

We have important work in front of us, and we can’t wait to show you what’s next.

This is a gracious statement. 

It also kind of gives the game away in that last paragraph, with its non-apology. Which, I want to emphasize, I very much appreciate. This is not fully candid, but it is more candor than we had a right to expect, and to that extent it is the good kind of candor.

Thus, we have confirmation that Sam Altman thought Helen Toner was harming OpenAI through some of her actions, and that he ‘should have handled that situation with more grace and care.’

This seems highly compatible with what I believe happened, which was that he attempted to use an unrelated matter to get her removed from the board, including misrepresenting the views of board members to other board members to try and get this to happen, and that this then came to light. The actions Altman felt were hurting OpenAI were thus presumably distinct from the actions Altman then tried to use to remove her. 

Even if I do not have the details correct there, it seems highly implausible, given this statement, that events related to his issues with Toner were not important factors in the board’s decision to fire Altman.

There is no mention at all of Ilya Sutskever here. It would have been the right place for Altman to put up an olive branch, if he wanted reconciliation so Sutskever could focus on superalignment and keeping us safe, an assignment everyone should want him to have if he is willing. Instead, Altman continues to be silent on this matter.

Whether or not either of them leaked anything to NYT, they issued a public statement as well.

Helen Toner and Tasha McCauley: OpenAI’s mission is to ensure that artificial general intelligence benefits all of humanity. The OpenAI structure empowers the board to prioritize this mission above all else, including business interests.

Accountability is important in any company, but it is paramount when building a technology as potentially world-changing as AGI. We hope the new board does its job in governing OpenAI and holding it accountable to the mission. As we told the investigators, deception, manipulation, and resistance to thorough oversight should be unacceptable.

There are a great many people doing important work at OpenAI. We wish them and the new board success.

That is not the sentiment of two people convinced everything is fine now. This is a statement that things are very much not fine, and that the investigation did not do its job, and that they lack faith in the newly selected board, although they may be hopeful.

Alas, they continue to be unable or unwilling to share any details, so there is not much more that one can say.

Taking all this at face value, here is the simple case for why all this is terrible:

Jeffrey Ladish: I don’t trust Sam Altman to lead an AGI project. I think he’s a deeply untrustworthy individual, low in integrity and high in power seeking.

It doesn’t bring me joy to say this. I rather like Sam Altman. I like his writing, I like the way he communicates clearly, I like how he strives for what he believes is good.

But I know people who have worked with him. He lies to people, he says things to your face and says another thing behind your back. He has a reputation for this, though people are often afraid to talk about this openly. He is extremely good at what he does. He is extremely good at politics. He schemes to outmaneuver people within his own companies and projects. This is not the kind of person who can be trusted to lead a project that will shape the entire world and the entire future.

Elon Musk: !

Adnan Chaumette: What do you make of 95% of OpenAI staff fully disagreeing with you on this?

We’re talking about wealthy employees with great job prospects anywhere else, so I highly doubt there’s some financial motives for them to side with him as strongly as they did.

Jeffrey Ladish: There was a lot of pressure to sign, so I don’t think the full 95% would disagree with what I said

Also, unfortunately, equity is a huge motivator here. I know many great people at OpenAI and have a huge amount of respect for their safety work. But also, the incentives really suck, many people have a lot of money at stake

I too like Sam Altman, his writing and the way he often communicates. I would add I am a big fan of his medical and fusion efforts. He has engaged for real with the ideas that I consider most important, and even if he has a different opinion, I know he takes the concerns seriously. Most of all, I would emphasize: He strives for what he believes is good. Yes, he will doubtless sometimes fool himself, as you are always the easiest one for you to fool, but it is remarkable how many people in his position do not remotely pass these bars.

I too also have very strong concerns that we are putting a person whose highest stats are political maneuvering and deception, who is very high in power seeking, into this position. By all reports, you cannot trust what this man tells you.

I have even stronger concerns about him not having proper strong oversight, capable of reining in or firing him if necessary, and of understanding what is going on and forcing key decisions. I do not believe this is going to be that board.

There are also a number of very clear specific concerns.

Sam Altman is kind of trying to raise $7 trillion for chips and electrical power generation. This seems to go directly against OpenAI’s mission, and completely undercut the overhang argument for why AGI must be built quickly. You cannot both claim that AGI is inevitable because we have so many chips that we need to rush forward before others do, and that we urgently need to build more chips so we can rush forward to AGI. Or you can, but you’re being disingenuous somewhere, at best. Also the fact that Altman owns the OpenAI venture fund while lacking equity in OpenAI itself is at least rather suspicious.

Sam Altman seems to have lied to board members about the views of other board members in an attempt to take control of the board, and this essentially represents him (potentially) succeeding in doing that. If you lie to the board in an attempt to subvert the board, you must be fired, full stop. I still think this happened.

I also think lying to those around him is a clear pattern of behavior that will not stop, and that threatens to prevent proper raising and handling of key safety concerns in the future. I do not trust a ‘whistleblower process’ to get around this. If Sam Altman operates in a way that prevents communication, and SNAFU applies, that would be very bad for everyone even if Altman wants to proceed safely.

The choice of new board members, to the extent it was influenced by Altman, perhaps reflects an admirable lack of selecting reliable allies (we do not know enough to say; it could also reflect that no reliable allies fitting the requirements for those slots were available), but it also seems to reflect a lack of desire for effective checks and for understanding of the technical and safety problems on the path to AGI. It seems vital to have someone other than Altman on the board with deep technical expertise, and someone who can advocate for technical existential risk concerns. With a seven (or ultimately nine) member board there is room for those people without such factions having control, yet they are seemingly not present.

The 95% rate at which employees signed the letter – a letter that did not commit them to anything at all – is indicative of a combination of factors: primarily the board’s horribly botched communications, but also that signing was a free action whereas not signing was not, and the money the employees had at stake. It indicates Altman is good at politics, and that he did win over the staff versus the alternative, but it says little about the concerns here.

Here are two recent Roon quotes that seemed relevant, regardless of his intent:

Roon (Member of OpenAI technical staff): Most people are of median moral caliber and i didn’t really recognize that as troubling before.

It’s actually really hard to be a good person and rise above self interest.

In terms of capital structure or economics, nerds try to avoid thinking about this with better systems; in the free-for-all of interpersonal relations there’s no avoiding it.

Steve Jobs was a sociopath who abandoned his daughter for a while and then gaslit anyone who tried to make him take responsibility.

My disagreement is that I think it is exactly better systems of various sorts, not only legal but also cultural norms and other strategies, to mitigate this issue. There’s no avoiding it but you can mitigate, and woe to those who do not do so.

Roon (2nd thread): People’s best qualities are exactly the same as their tragic flaws that’ll destroy them.

This is often why greatness is a transitory phenomenon, especially in young people. Most alpha turns out to be levered beta. You inhabit a new extreme way of existing and reap great rewards; then the Gods punish you because being extreme is obviously risky.

People who invest only in what makes them great will become extremely fragile and blow up.

Sometimes. Other times it is more that the great thing covered up or ran ahead of the flaws, or prevented investment in fixing them. This makes me wonder whether I took on, or perhaps am still taking on, insufficient leverage and insufficient risk, at least from a social welfare perspective.

While we could be doing better than Sam Altman, I still think we could be, and likely would be, doing so much worse without him. He is very much ‘above replacement.’ I would take him in a heartbeat over the CEOs of Google and Microsoft, of Meta and Mistral. If I could replace him with Emmett Shear and keep all the other employees, I would do it, but that is not the world we live in.

An obvious test will be what happens to and with the board going forward. There are at least two appointments remaining to get to nine, even if all current members were to stay, which seems unlikely. Will we get our non-insider technical expert and our clear safety advocate slash skeptic? How many obvious allies like Brockman will be included? Will we see evidence the new board is actively engaged and involved? And so on.

What happens with Ilya Sutskever matters. The longer he remains in limbo, the worse a sign that is. Ideal would be him back heading superalignment and clearly free to speak about related issues. Short of that, it would be good to see him fully extracted.

Another key test will be whether OpenAI rushes to release a new model soon. GPT-4 has been out for a little over a year. That would normally mean there is still a lot of time left before the next release, but now Gemini and Claude are both roughly on par.

How will Sam Altman respond? 

If they rush a ‘GPT-5’ out the door within a few months, without the type of testing and evaluation they did for GPT-4, then that tells us a lot. 

Sam Altman will soon do an interview with Lex Fridman. Letting someone talk for hours is always insightful, even if, as per usual with Lex Fridman, he does not ask the hard-hitting questions; that is not his way. That link includes some questions a hostile interviewer would ask, some of which would also be good questions for Lex to ask in his own style.

What non-obvious things would I definitely ask after some thought, but with two or more orders of magnitude less thought than I’d give if I were doing the interview?

  1. I definitely want him to be asked about the seeming contradiction between the overhang argument and the chips project, and about how much of that project is chips versus electricity and other project details.

  2. I’d ask him technical details about their preparedness framework and related issues, to see how engaged he is with that and where his head is landing on such questions. This should include what scary capabilities we might see soon.

  3. I’d ask how he sees his relationship with the new board: how he plans to keep them informed, ensure they have access to employees and new projects and products, give them input on key decisions short of firing him, and address the current lack of technical expertise and safety advocacy. How will OpenAI ensure it is not effectively just another commercial business?

  4. I’d ask him about OpenAI’s lobbying especially with regard to the EU AI Act and what they or Microsoft will commit to in the future in terms of not opposing efforts, and how government can help labs be responsible and do the right things.

  5. I’d check his views on potential AI consciousness and how to handle it because it’s good to sanity check there.

  6. I’d ask what he means when he says AGI will come and things will not change much for a while, and discuss the changes in terminology here generally. What exactly is he envisioning as AGI when he says that? What type of AGI is OpenAI’s mission aiming at, and would they then stop there? How does that new world look? Why wouldn’t it lead to something far more capable quickly? Ideally you spend a lot of time here, in cooperative exploratory mode.

  7. To the extent Lex is capable, I would ask all sorts of technical questions; see Dwarkesh’s interviews with Dario Amodei and Demis Hassabis for how to do this well. I would ask about his take on Leike’s plans for Superalignment, and how to address obvious problems such as the ‘AI alignment researcher’ also being a capabilities researcher.

  8. Indeed, I would ask: Are you eager to go sit down with Dwarkesh Patel soon?

  9. I mostly wouldn’t focus on asking what happened with the firing, because I do not expect to be able to get much useful out of him there, but you can try. My view is something like, you would want to pre-negotiate on this issue. If Altman wants to face hostile questioning and is down for it, sure do it, otherwise don’t press.

  10. On Ilya, I would do my best to nail Altman down on two specific questions: Is Sutskever still employed by OpenAI? And does he have Altman’s full faith and confidence, if he chooses to stay, to continue to head up the Superalignment Taskforce?

There is of course so much more, and so much more thinking to do on such questions. A few hours can only scratch the surface, so you have to pick your battles and focus. This is an open invitation to Lex Fridman, or anyone else who gets an interview with Altman: if you want my advice or to talk about how to maximize your opportunity, I’m happy to help.

There have been and will be many moments that inform us. Let’s pay attention.

OpenAI: The Board Expands Read More »

openai-ceo-altman-wasn’t-fired-because-of-scary-new-tech,-just-internal-politics

OpenAI CEO Altman wasn’t fired because of scary new tech, just internal politics

Adventures in optics —

As Altman cements power, OpenAI announces three new board members—and a returning one.

OpenAI CEO Sam Altman speaks during the OpenAI DevDay event on November 6, 2023, in San Francisco.

On Friday afternoon Pacific Time, OpenAI announced the appointment of three new members to the company’s board of directors and released the results of an independent review of the events surrounding CEO Sam Altman’s surprise firing last November. The current board expressed its confidence in the leadership of Altman and President Greg Brockman, and Altman is rejoining the board.

The newly appointed board members are Dr. Sue Desmond-Hellmann, former CEO of the Bill and Melinda Gates Foundation; Nicole Seligman, former EVP and global general counsel of Sony; and Fidji Simo, CEO and chair of Instacart. These additions notably bring three women to the board after OpenAI met criticism last year about its restructured board composition.

The independent review, conducted by law firm WilmerHale, investigated the circumstances that led to Altman’s abrupt removal from the board and his termination as CEO on November 17, 2023. Despite rumors to the contrary, the board did not fire Altman because they got a peek at scary new AI technology and flinched. “WilmerHale… found that the prior Board’s decision did not arise out of concerns regarding product safety or security, the pace of development, OpenAI’s finances, or its statements to investors, customers, or business partners.”

Instead, the review determined that the prior board’s actions stemmed from a breakdown in trust between the board and Altman.

After reportedly interviewing dozens of people and reviewing over 30,000 documents, WilmerHale found that while the prior board acted within its purview, Altman’s termination was unwarranted. “WilmerHale found that the prior Board acted within its broad discretion to terminate Mr. Altman,” OpenAI wrote, “but also found that his conduct did not mandate removal.”

Additionally, the law firm found that the decision to fire Altman was made in undue haste: “The prior Board implemented its decision on an abridged timeframe, without advance notice to key stakeholders and without a full inquiry or an opportunity for Mr. Altman to address the prior Board’s concerns.”

Altman’s surprise firing occurred after he attempted to remove Helen Toner from OpenAI’s board due to disagreements over her criticism of OpenAI’s approach to AI safety and hype. Some board members saw his actions as deceptive and manipulative. After Altman returned to OpenAI, Toner resigned from the OpenAI board on November 29.

In a statement posted on X, Altman wrote, “i learned a lot from this experience. one think [sic] i’ll say now: when i believed a former board member was harming openai through some of their actions, i should have handled that situation with more grace and care. i apologize for this, and i wish i had done it differently.”

A tweet from Sam Altman posted on March 8, 2024.

Following the review’s findings, the Special Committee of the OpenAI Board recommended endorsing the November 21 decision to rehire Altman and Brockman. The board also announced several enhancements to its governance structure, including new corporate governance guidelines, a strengthened Conflict of Interest Policy, a whistleblower hotline, and additional board committees focused on advancing OpenAI’s mission.

After OpenAI’s announcements on Friday, resigned OpenAI board members Toner and Tasha McCauley released a joint statement on X. “Accountability is important in any company, but it is paramount when building a technology as potentially world-changing as AGI,” they wrote. “We hope the new board does its job in governing OpenAI and holding it accountable to the mission. As we told the investigators, deception, manipulation, and resistance to thorough oversight should be unacceptable.”

OpenAI CEO Altman wasn’t fired because of scary new tech, just internal politics Read More »

some-teachers-are-now-using-chatgpt-to-grade-papers

Some teachers are now using ChatGPT to grade papers

robots in disguise —

New AI tools aim to help with grading, lesson plans—but may have serious drawbacks.

An elementary-school-aged child touching a robot hand.

In a notable shift toward sanctioned use of AI in schools, some educators in grades 3–12 are now using a ChatGPT-powered grading tool called Writable, reports Axios. The tool, acquired last summer by Houghton Mifflin Harcourt, is designed to streamline the grading process, potentially offering time-saving benefits for teachers. But is it a good idea to outsource critical feedback to a machine?

Writable lets teachers submit student essays for analysis by ChatGPT, which then provides commentary and observations on the work. The AI-generated feedback goes to the teacher for review before being passed on to students, so that a human remains in the loop.
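Writable’s internals are not public; purely as an illustration of the human-in-the-loop pattern described above, here is a minimal sketch using OpenAI’s Python SDK. The model name, rubric format, and console approval step are assumptions for the example, not details of Writable’s actual pipeline.

```python
# Minimal human-in-the-loop grading sketch (illustrative only, not Writable's code).
# Assumes the official `openai` Python package (v1+) and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

def draft_feedback(essay: str, rubric: str) -> str:
    """Ask the model for rubric-aligned draft comments on a student essay."""
    response = client.chat.completions.create(
        model="gpt-4",  # assumed model; the article does not say which model Writable uses
        messages=[
            {"role": "system", "content": "You draft constructive essay feedback for a teacher to review."},
            {"role": "user", "content": f"Rubric:\n{rubric}\n\nEssay:\n{essay}\n\nDraft comments and a provisional score."},
        ],
    )
    return response.choices[0].message.content

def review_and_release(essay: str, rubric: str) -> str | None:
    """The teacher approves, edits, or discards the draft before students ever see it."""
    draft = draft_feedback(essay, rubric)
    print("AI draft feedback:\n", draft)
    if input("Send to student as-is? [y/N] ").strip().lower() == "y":
        return draft
    return None  # teacher rewrites or withholds the feedback instead
```

The point of the sketch is the ordering: the model only ever produces a draft, and nothing reaches the student without an explicit human decision.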

“Make feedback more actionable with AI suggestions delivered to teachers as the writing happens,” Writable promises on its AI website. “Target specific areas for improvement with powerful, rubric-aligned comments, and save grading time with AI-generated draft scores.” The service also provides AI-written writing-prompt suggestions: “Input any topic and instantly receive unique prompts that engage students and are tailored to your classroom needs.”

Writable can reportedly help a teacher develop a curriculum, although we have not tried the functionality ourselves. “Once in Writable you can also use AI to create curriculum units based on any novel, generate essays, multi-section assignments, multiple-choice questions, and more, all with included answer keys,” the site claims.

The reliance on AI for grading will likely have drawbacks. Automated grading might encourage some educators to take shortcuts, diminishing the value of personalized feedback. Over time, the augmentation from AI may allow teachers to be less familiar with the material they are teaching. The use of cloud-based AI tools may have privacy implications for teachers and students. Also, ChatGPT isn’t a perfect analyst. It can get things wrong and potentially confabulate (make up) false information, possibly misinterpret a student’s work, or provide erroneous information in lesson plans.

Yet, as Axios reports, proponents assert that AI grading tools like Writable may free up valuable time for teachers, enabling them to focus on more creative and impactful teaching activities. The company selling Writable promotes it as a way to empower educators, supposedly offering them the flexibility to allocate more time to direct student interaction and personalized teaching. Of course, without an in-depth critical review, all claims should be taken with a huge grain of salt.

Amid these discussions, there’s a divide among parents regarding the use of AI in evaluating students’ academic performance. A recent poll of parents revealed mixed opinions, with nearly half of the respondents open to the idea of AI-assisted grading.

As the generative AI craze permeates every space, it’s no surprise that Writable isn’t the only AI-powered grading tool on the market. Others include Crowdmark, Gradescope, and EssayGrader. McGraw Hill is reportedly developing similar technology aimed at enhancing teacher assessment and feedback.

Some teachers are now using ChatGPT to grade papers Read More »

openai-clarifies-the-meaning-of-“open”-in-its-name,-responding-to-musk-lawsuit

OpenAI clarifies the meaning of “open” in its name, responding to Musk lawsuit

The OpenAI logo as an opening to a red brick wall. (credit: Benj Edwards / Getty Images)

On Tuesday, OpenAI published a blog post titled “OpenAI and Elon Musk” in response to a lawsuit Musk filed last week. The ChatGPT maker shared several archived emails from Musk that suggest he once supported a pivot away from open source practices in the company’s quest to develop artificial general intelligence (AGI). The selected emails also imply that the “open” in “OpenAI” means that the ultimate result of its research into AGI should be open to everyone but not necessarily “open source” along the way.

In one telling exchange from January 2016 shared by the company, OpenAI Chief Scientist Ilya Sutskever wrote, “As we get closer to building AI, it will make sense to start being less open. The Open in openAI means that everyone should benefit from the fruits of AI after its built, but it’s totally OK to not share the science (even though sharing everything is definitely the right strategy in the short and possibly medium term for recruitment purposes).”

In response, Musk replied simply, “Yup.”

OpenAI clarifies the meaning of “open” in its name, responding to Musk lawsuit Read More »

the-ai-wars-heat-up-with-claude-3,-claimed-to-have-“near-human”-abilities

The AI wars heat up with Claude 3, claimed to have “near-human” abilities

The Anthropic Claude 3 logo.

On Monday, Anthropic released Claude 3, a family of three AI language models similar to those that power ChatGPT. Anthropic claims the models set new industry benchmarks across a range of cognitive tasks, even approaching “near-human” capability in some cases. It’s available now through Anthropic’s website, with the most powerful model being subscription-only. It’s also available via API for developers.

Claude 3’s three models represent increasing complexity and parameter count: Claude 3 Haiku, Claude 3 Sonnet, and Claude 3 Opus. Sonnet now powers the Claude.ai chatbot, which is free to use with an email sign-in. But as mentioned above, Opus is only available through Anthropic’s web chat interface if you pay $20 a month for “Claude Pro,” a subscription service offered through the Anthropic website. All three feature a 200,000-token context window. (The context window is the number of tokens—fragments of a word—that an AI language model can process at once.)
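For developers, access is through Anthropic’s API; a minimal sketch using the `anthropic` Python SDK might look like the following. The dated model identifier and the prompt are assumptions for illustration, and `max_tokens` caps only the generated output, which is separate from the 200,000-token context window.

```python
# Rough sketch of calling Claude 3 Opus via the Anthropic Python SDK (pip install anthropic).
# Assumes ANTHROPIC_API_KEY is set; the model identifier follows Anthropic's dated naming
# convention and may differ from what a given account exposes.
import anthropic

client = anthropic.Anthropic()

message = client.messages.create(
    model="claude-3-opus-20240229",  # assumed identifier for the Opus model
    max_tokens=1024,                 # output cap, distinct from the 200K-token context window
    messages=[
        {"role": "user", "content": "In two sentences, contrast Claude 3 Haiku, Sonnet, and Opus."}
    ],
)
print(message.content[0].text)
```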

We covered the launch of Claude in March 2023 and Claude 2 in July that same year. Each time, Anthropic fell slightly behind OpenAI’s best models in capability while surpassing them in terms of context window length. With Claude 3, Anthropic has perhaps finally caught up with OpenAI’s released models in terms of performance, although there is no consensus among experts yet—and the presentation of AI benchmarks is notoriously prone to cherry-picking.

A Claude 3 benchmark chart provided by Anthropic.

Claude 3 reportedly demonstrates advanced performance across various cognitive tasks, including reasoning, expert knowledge, mathematics, and language fluency. (Despite the lack of consensus over whether large language models “know” or “reason,” the AI research community commonly uses those terms.) The company claims that the Opus model, the most capable of the three, exhibits “near-human levels of comprehension and fluency on complex tasks.”

That’s quite a heady claim and deserves to be parsed more carefully. It’s probably true that Opus is “near-human” on some specific benchmarks, but that doesn’t mean that Opus is a general intelligence like a human (consider that pocket calculators are superhuman at math). So, it’s a purposely eye-catching claim that can be watered down with qualifications.

According to Anthropic, Claude 3 Opus beats GPT-4 on 10 AI benchmarks, including MMLU (undergraduate level knowledge), GSM8K (grade school math), HumanEval (coding), and the colorfully named HellaSwag (common knowledge). Several of the wins are very narrow, such as 86.8 percent for Opus vs. 86.4 percent for GPT-4 on a five-shot trial of MMLU, and some gaps are big, such as 84.9 percent on HumanEval over GPT-4’s 67.0 percent. But what that might mean, exactly, to you as a customer is difficult to say.

“As always, LLM benchmarks should be treated with a little bit of suspicion,” says AI researcher Simon Willison, who spoke with Ars about Claude 3. “How well a model performs on benchmarks doesn’t tell you much about how the model ‘feels’ to use. But this is still a huge deal—no other model has beaten GPT-4 on a range of widely used benchmarks like this.”

The AI wars heat up with Claude 3, claimed to have “near-human” abilities Read More »

elon-musk-sues-openai-and-sam-altman,-accusing-them-of-chasing-profits

Elon Musk sues OpenAI and Sam Altman, accusing them of chasing profits

YA Musk lawsuit —

OpenAI is now a “closed-source de facto subsidiary” of Microsoft, says lawsuit.

Elon Musk has sued OpenAI and its chief executive Sam Altman for breach of contract, alleging they have compromised the start-up’s original mission of building artificial intelligence systems for the benefit of humanity.

In the lawsuit, filed to a San Francisco court on Thursday, Musk’s lawyers wrote that OpenAI’s multibillion-dollar alliance with Microsoft had broken an agreement to make a major breakthrough in AI “freely available to the public.”

Instead, the lawsuit said, OpenAI was working on “proprietary technology to maximise profits for literally the largest company in the world.”

The legal fight escalates a long-running dispute between Musk, who has founded his own AI company, known as xAI, and OpenAI, which has received a $13 billion investment from Microsoft.

Musk, who helped co-found OpenAI in 2015, said in his legal filing he had donated $44 million to the group, and had been “induced” to make contributions by promises, “including in writing,” that it would remain a non-profit organisation.

He left OpenAI’s board in 2018 following disagreements with Altman on the direction of research. A year later, the group established the for-profit arm that Microsoft has invested into.

Microsoft’s president Brad Smith told the Financial Times this week that while the companies were “very important partners,” “Microsoft does not control OpenAI.”

Musk’s lawsuit alleges that OpenAI’s latest AI model, GPT4, released in March last year, breached the threshold for artificial general intelligence (AGI), at which computers function at or above the level of human intelligence.

The Microsoft deal only gives the tech giant a licence to OpenAI’s pre-AGI technology, the lawsuit said, and determining when this threshold is reached is key to Musk’s case.

The lawsuit seeks a court judgment over whether GPT4 should already be considered to be AGI, arguing that OpenAI’s board was “ill-equipped” to make such a determination.

The filing adds that OpenAI is also building another model, Q*, that will be even more powerful and capable than GPT4. It argues that OpenAI is committed under the terms of its founding agreement to make such technology available publicly.

“Mr. Musk has long recognised that AGI poses a grave threat to humanity—perhaps the greatest existential threat we face today,” the lawsuit says.

“To this day, OpenAI, Inc.’s website continues to profess that its charter is to ensure that AGI ‘benefits all of humanity’,” it adds. “In reality, however, OpenAI, Inc. has been transformed into a closed-source de facto subsidiary of the largest technology company in the world: Microsoft.”

OpenAI maintains it has not yet achieved AGI, despite its models’ success in language and reasoning tasks. Large language models like GPT4 still generate errors, fabrications and so-called hallucinations.

The lawsuit also seeks to “compel” OpenAI to adhere to its founding agreement to build technology that does not simply benefit individuals such as Altman and corporations such as Microsoft.

Musk’s own xAI company is a direct competitor to OpenAI and launched its first product, a chatbot named Grok, in December.

OpenAI declined to comment. Representatives for Musk have been approached for comment. Microsoft did not immediately respond to a request for comment.

The Microsoft-OpenAI alliance is being reviewed by competition watchdogs in the US, EU and UK.

The US Securities and Exchange Commission issued subpoenas to OpenAI executives in November as part of an investigation into whether Altman had misled its investors, according to people familiar with the move.

That investigation came shortly after OpenAI’s board fired Altman as chief executive only to reinstate him days later. A new board has since been instituted including former Salesforce co-chief executive Bret Taylor as chair.

There is an ongoing internal review of the former board’s allegations against Altman by independent law firm WilmerHale.

© 2024 The Financial Times Ltd. All rights reserved Not to be redistributed, copied, or modified in any way.

Elon Musk sues OpenAI and Sam Altman, accusing them of chasing profits Read More »

microsoft-partners-with-openai-rival-mistral-for-ai-models,-drawing-eu-scrutiny

Microsoft partners with OpenAI-rival Mistral for AI models, drawing EU scrutiny

The European Approach —

15M euro investment comes as Microsoft hosts Mistral’s GPT-4 alternatives on Azure.

Velib bicycles are parked in front of the headquarters of the US computer and micro-computing company Microsoft on January 25, 2023, in Issy-les-Moulineaux, France.

On Monday, Microsoft announced plans to offer AI models from Mistral through its Azure cloud computing platform, a move that came in conjunction with a 15 million euro non-equity investment in the French firm, often seen as a European rival to OpenAI. Since then, the investment deal has faced scrutiny from European Union regulators.

Microsoft’s deal with Mistral, known for its large language models akin to OpenAI’s GPT-4 (which powers the subscription versions of ChatGPT), marks a notable expansion of its AI portfolio at a time when its well-known investment in California-based OpenAI has raised regulatory eyebrows. The new deal with Mistral drew particular attention from regulators because Microsoft’s investment could convert into equity (partial ownership of Mistral as a company) during Mistral’s next funding round.

The development has intensified ongoing investigations into Microsoft’s practices, particularly related to the tech giant’s dominance in the cloud computing sector. According to Reuters, EU lawmakers have voiced concerns that Mistral’s recent lobbying for looser AI regulations might have been influenced by its relationship with Microsoft. These apprehensions are compounded by the French government’s denial of prior knowledge of the deal, despite earlier lobbying for more lenient AI laws in Europe. The situation underscores the complex interplay between national interests, corporate influence, and regulatory oversight in the rapidly evolving AI landscape.

Avoiding American influence

The EU’s reaction to the Microsoft-Mistral deal reflects broader tensions over the role of Big Tech companies in shaping the future of AI and their potential to stifle competition. Calls for a thorough investigation into Microsoft and Mistral’s partnership have been echoed across the continent, according to Reuters, with some lawmakers accusing the firms of attempting to undermine European legislative efforts aimed at ensuring a fair and competitive digital market.

The controversy also touches on the broader debate about “European champions” in the tech industry. France, along with Germany and Italy, had advocated for regulatory exemptions to protect European startups. However, the Microsoft-Mistral deal has led some, like MEP Kim van Sparrentak, to question the motives behind these exemptions, suggesting they might have inadvertently favored American Big Tech interests.

“That story seems to have been a front for American-influenced Big Tech lobby,” said Sparrentak, as quoted by Reuters. Sparrentak has been a key architect of the EU’s AI Act, which has not yet been passed. “The Act almost collapsed under the guise of no rules for ‘European champions,’ and now look. European regulators have been played.”

MEP Alexandra Geese also expressed concerns over the concentration of money and power resulting from such partnerships, calling for an investigation. Max von Thun, Europe director at the Open Markets Institute, emphasized the urgency of investigating the partnership, criticizing Mistral’s reported attempts to influence the AI Act.

Also on Monday, amid the partnership news, Mistral announced Mistral Large, a new large language model (LLM) that Mistral says “ranks directly after GPT-4 based on standard benchmarks.” Mistral has previously released several open-weights AI models that have made news for their capabilities, but Mistral Large will be a closed model only available to customers through an API.
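Mistral’s documentation is not part of this article, so purely as an illustration of what “only available to customers through an API” typically means in practice, here is a hedged sketch of a chat completion request. The endpoint path, request shape, and “mistral-large-latest” model name are assumptions for the example and may not match Mistral’s actual API.

```python
# Illustrative (and assumed) HTTP call to a hosted Mistral Large chat endpoint.
# Requires the `requests` package and a MISTRAL_API_KEY environment variable.
import os
import requests

resp = requests.post(
    "https://api.mistral.ai/v1/chat/completions",  # assumed endpoint path
    headers={"Authorization": f"Bearer {os.environ['MISTRAL_API_KEY']}"},
    json={
        "model": "mistral-large-latest",  # assumed model identifier
        "messages": [
            {"role": "user", "content": "In one sentence, what is Mistral Large?"}
        ],
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```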

Microsoft partners with OpenAI-rival Mistral for AI models, drawing EU scrutiny Read More »

openai-accuses-nyt-of-hacking-chatgpt-to-set-up-copyright-suit

OpenAI accuses NYT of hacking ChatGPT to set up copyright suit

OpenAI is now boldly claiming that The New York Times “paid someone to hack OpenAI’s products” like ChatGPT to “set up” a lawsuit against the leading AI maker.

In a court filing Monday, OpenAI alleged that “100 examples in which some version of OpenAI’s GPT-4 model supposedly generated several paragraphs of Times content as outputs in response to user prompts” do not reflect how normal people use ChatGPT.

Instead, it allegedly took The Times “tens of thousands of attempts to generate” these supposedly “highly anomalous results” by “targeting and exploiting a bug” that OpenAI claims it is now “committed to addressing.”

According to OpenAI, this activity amounts to “contrived attacks” by a “hired gun”—who allegedly hacked OpenAI models until they hallucinated fake NYT content or regurgitated training data to replicate NYT articles. NYT allegedly paid for these “attacks” to gather evidence to support The Times’ claims that OpenAI’s products imperil its journalism by allegedly regurgitating reporting and stealing The Times’ audiences.

“Contrary to the allegations in the complaint, however, ChatGPT is not in any way a substitute for a subscription to The New York Times,” OpenAI argued in a motion that seeks to dismiss the majority of The Times’ claims. “In the real world, people do not use ChatGPT or any other OpenAI product for that purpose. Nor could they. In the ordinary course, one cannot use ChatGPT to serve up Times articles at will.”

In the filing, OpenAI described The Times as enthusiastically reporting on its chatbot developments for years without raising any concerns about copyright infringement. OpenAI claimed that it disclosed that The Times’ articles were used to train its AI models in 2020, but The Times only objected once ChatGPT’s popularity exploded following its debut in 2022.

According to OpenAI, “It was only after this rapid adoption, along with reports of the value unlocked by these new technologies, that the Times claimed that OpenAI had ‘infringed its copyright[s]’ and reached out to demand ‘commercial terms.’ After months of discussions, the Times filed suit two days after Christmas, demanding ‘billions of dollars.'”

Ian Crosby, Susman Godfrey partner and lead counsel for The New York Times, told Ars that “what OpenAI bizarrely mischaracterizes as ‘hacking’ is simply using OpenAI’s products to look for evidence that they stole and reproduced The Times’s copyrighted works. And that is exactly what we found. In fact, the scale of OpenAI’s copying is much larger than the 100-plus examples set forth in the complaint.”

Crosby told Ars that OpenAI’s filing notably “doesn’t dispute—nor can they—that they copied millions of The Times’ works to build and power its commercial products without our permission.”

“Building new products is no excuse for violating copyright law, and that’s exactly what OpenAI has done on an unprecedented scale,” Crosby said.

OpenAI argued that the court should dismiss claims alleging direct copyright, contributory infringement, Digital Millennium Copyright Act violations, and misappropriation, all of which it describes as “legally infirm.” Some fail because they are time-barred—seeking damages on training data for OpenAI’s older models—OpenAI claimed. Others allegedly fail because they misunderstand fair use or are preempted by federal laws.

If OpenAI’s motion is granted, the case would be substantially narrowed.

But if the motion is not granted and The Times ultimately wins—and it might—OpenAI may be forced to wipe ChatGPT and start over.

“OpenAI, which has been secretive and has deliberately concealed how its products operate, is now asserting it’s too late to bring a claim for infringement or hold them accountable. We disagree,” Crosby told Ars. “It’s noteworthy that OpenAI doesn’t dispute that it copied Times works without permission within the statute of limitations to train its more recent and current models.”

OpenAI did not immediately respond to Ars’ request to comment.

OpenAI accuses NYT of hacking ChatGPT to set up copyright suit Read More »

tyler-perry-puts-$800-million-studio-expansion-on-hold-because-of-openai’s-sora

Tyler Perry puts $800 million studio expansion on hold because of OpenAI’s Sora

The Synthetic Screen —

Perry: Mind-blowing AI video-generation tools “will touch every corner of our industry.”

Tyler Perry in 2022.

In an interview with The Hollywood Reporter published Thursday, filmmaker Tyler Perry spoke about his concerns related to the impact of AI video synthesis on entertainment industry jobs. In particular, he revealed that he has suspended a planned $800 million expansion of his production studio after seeing what OpenAI’s recently announced AI video generator Sora can do.

“I have been watching AI very closely,” Perry said in the interview. “I was in the middle of, and have been planning for the last four years… an $800 million expansion at the studio, which would’ve increased the backlot a tremendous size—we were adding 12 more soundstages. All of that is currently and indefinitely on hold because of Sora and what I’m seeing. I had gotten word over the last year or so that this was coming, but I had no idea until I saw recently the demonstrations of what it’s able to do. It’s shocking to me.”

OpenAI, the company behind ChatGPT, revealed a preview of Sora’s capabilities last week. Sora is a text-to-video synthesis model, and it uses a neural network—previously trained on video examples—that can take written descriptions of a scene and turn them into high-definition video clips up to 60 seconds long. Sora caused shock in the tech world because it appeared to surpass other AI video generators in capability dramatically. It seems that a similar shock also rippled into adjacent professional fields. “Being told that it can do all of these things is one thing, but actually seeing the capabilities, it was mind-blowing,” Perry said in the interview.

Tyler Perry Studios, which the actor and producer acquired in 2015, is a 330-acre lot located in Atlanta and is one of the largest film production facilities in the United States. Perry, who is perhaps best known for his series of Madea films, says that technology like Sora worries him because it could make the need for building sets or traveling to locations obsolete. He cites examples of virtually shooting scenes in the snow of Colorado or on the Moon using nothing but a text prompt: “This AI can generate it like nothing.” The technology may represent a radical reduction in the costs necessary to create a film, which would likely put entertainment industry jobs in jeopardy.

“It makes me worry so much about all of the people in the business,” he told The Hollywood Reporter. “Because as I was looking at it, I immediately started thinking of everyone in the industry who would be affected by this, including actors and grip and electric and transportation and sound and editors, and looking at this, I’m thinking this will touch every corner of our industry.”

You can read the full interview at The Hollywood Reporter, which did an excellent job of covering Perry’s thoughts on a technology that may end up fundamentally disrupting Hollywood. To his mind, AI tech poses an existential risk to the entertainment industry that it can’t ignore: “There’s got to be some sort of regulations in order to protect us. If not, I just don’t see how we survive.”

Perry also looks beyond Hollywood and says that it’s not just filmmaking that needs to be on alert, and he calls for government action to help retain human employment in the age of AI. “If you look at it across the world, how it’s changing so quickly, I’m hoping that there’s a whole government approach to help everyone be able to sustain.”

Tyler Perry puts $800 million studio expansion on hold because of OpenAI’s Sora Read More »