large language models

US gov’t announces arrest of former Google engineer for alleged AI trade secret theft

Don’t trade the secrets dept. —

Linwei Ding faces four counts of trade secret theft, each with a potential 10-year prison term.

A Google sign stands in front of the building on the sidelines of the opening of the new Google Cloud data center in Hesse, Hanau, opened in October 2023.

On Wednesday, authorities arrested former Google software engineer Linwei Ding in Newark, California, on charges of stealing AI trade secrets from the company. The US Department of Justice alleges that Ding, a Chinese national, committed the theft while secretly working with two China-based companies.

According to the indictment, Ding, who was hired by Google in 2019 and had access to confidential information about the company’s data centers, began uploading hundreds of files into a personal Google Cloud account two years ago.

The trade secrets Ding allegedly copied contained “detailed information about the architecture and functionality of GPU and TPU chips and systems, the software that allows the chips to communicate and execute tasks, and the software that orchestrates thousands of chips into a supercomputer capable of executing at the cutting edge of machine learning and AI technology,” according to the indictment.

Shortly after the alleged theft began, Ding was offered the position of chief technology officer at an early-stage technology company in China that touted its use of AI technology. The company offered him a monthly salary of about $14,800, plus an annual bonus and company stock. Ding reportedly traveled to China, participated in investor meetings, and sought to raise capital for the company.

Investigators reviewed surveillance camera footage showing another employee scanning Ding’s name badge at the entrance of the Google building where Ding worked, making it appear that Ding was working from his office when he was actually traveling.

Ding also founded and served as the chief executive of a separate China-based startup company that aspired to train “large AI models powered by supercomputing chips,” according to the indictment. Prosecutors say Ding did not disclose either affiliation to Google, which described him as a junior employee. He resigned from Google on December 26 of last year.

The FBI served a search warrant at Ding’s home in January, seizing his electronic devices and later executing an additional warrant for the contents of his personal accounts. Authorities found more than 500 unique files of confidential information that Ding allegedly stole from Google. The indictment says that Ding copied the files into the Apple Notes application on his Google-issued Apple MacBook, then converted the Apple Notes into PDF files and uploaded them to an external account to evade detection.

“We have strict safeguards to prevent the theft of our confidential commercial information and trade secrets,” Google spokesperson José Castañeda told Ars Technica. “After an investigation, we found that this employee stole numerous documents, and we quickly referred the case to law enforcement. We are grateful to the FBI for helping protect our information and will continue cooperating with them closely.”

Attorney General Merrick Garland announced the case against the 38-year-old at an American Bar Association conference in San Francisco. Ding faces four counts of federal trade secret theft, each carrying a potential sentence of up to 10 years in prison.

OpenAI clarifies the meaning of “open” in its name, responding to Musk lawsuit

The OpenAI logo as an opening to a red brick wall.

Benj Edwards / Getty Images

On Tuesday, OpenAI published a blog post titled “OpenAI and Elon Musk” in response to a lawsuit Musk filed last week. The ChatGPT maker shared several archived emails from Musk that suggest he once supported a pivot away from open source practices in the company’s quest to develop artificial general intelligence (AGI). The selected emails also imply that the “open” in “OpenAI” means that the ultimate result of its research into AGI should be open to everyone but not necessarily “open source” along the way.

In one telling exchange from January 2016 shared by the company, OpenAI Chief Scientist Ilya Sutskever wrote, “As we get closer to building AI, it will make sense to start being less open. The Open in openAI means that everyone should benefit from the fruits of AI after its built, but it’s totally OK to not share the science (even though sharing everything is definitely the right strategy in the short and possibly medium term for recruitment purposes).”

In response, Musk replied simply, “Yup.”


The AI wars heat up with Claude 3, claimed to have “near-human” abilities

The Anthropic Claude 3 logo.

On Monday, Anthropic released Claude 3, a family of three AI language models similar to those that power ChatGPT. Anthropic claims the models set new industry benchmarks across a range of cognitive tasks, even approaching “near-human” capability in some cases. It’s available now through Anthropic’s website, with the most powerful model being subscription-only. It’s also available via API for developers.

Claude 3’s three models represent increasing complexity and parameter count: Claude 3 Haiku, Claude 3 Sonnet, and Claude 3 Opus. Sonnet currently powers the Claude.ai chatbot and is free to use with an email sign-in. But as mentioned above, Opus is only available through Anthropic’s web chat interface if you pay $20 a month for “Claude Pro,” a subscription service offered through the Anthropic website. All three feature a 200,000-token context window. (The context window is the number of tokens—fragments of a word—that an AI language model can process at once.)
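To make the context window limit concrete, here is a rough back-of-the-envelope check of whether a document fits in a 200,000-token window. The four-characters-per-token ratio is a common rule of thumb for English text, not Anthropic’s actual tokenizer, and the function names are our own illustration:

```python
# Rough illustration of a context-window check. Real tokenizers vary;
# ~4 characters per token is a common rule of thumb for English text.
CONTEXT_WINDOW = 200_000  # tokens, per Anthropic's Claude 3 announcement


def approx_tokens(text: str) -> int:
    """Very rough token estimate: about 4 characters per token."""
    return max(1, len(text) // 4)


def fits_in_context(text: str, window: int = CONTEXT_WINDOW) -> bool:
    """Check whether a document plausibly fits in the model's context window."""
    return approx_tokens(text) <= window
```

By this estimate, a 600,000-character document (roughly 150,000 tokens) would fit, while a 1,000,000-character one (roughly 250,000 tokens) would not.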

We covered the launch of Claude in March 2023 and Claude 2 in July that same year. Each time, Anthropic fell slightly behind OpenAI’s best models in capability while surpassing them in terms of context window length. With Claude 3, Anthropic has perhaps finally caught up with OpenAI’s released models in terms of performance, although there is no consensus among experts yet—and the presentation of AI benchmarks is notoriously prone to cherry-picking.

A Claude 3 benchmark chart provided by Anthropic.

Claude 3 reportedly demonstrates advanced performance across various cognitive tasks, including reasoning, expert knowledge, mathematics, and language fluency. (Despite the lack of consensus over whether large language models “know” or “reason,” the AI research community commonly uses those terms.) The company claims that the Opus model, the most capable of the three, exhibits “near-human levels of comprehension and fluency on complex tasks.”

That’s quite a heady claim and deserves to be parsed more carefully. It’s probably true that Opus is “near-human” on some specific benchmarks, but that doesn’t mean that Opus is a general intelligence like a human (consider that pocket calculators are superhuman at math). So, it’s a purposely eye-catching claim that can be watered down with qualifications.

According to Anthropic, Claude 3 Opus beats GPT-4 on 10 AI benchmarks, including MMLU (undergraduate level knowledge), GSM8K (grade school math), HumanEval (coding), and the colorfully named HellaSwag (common knowledge). Several of the wins are very narrow, such as Opus’ 86.8 percent vs. GPT-4’s 86.4 percent on a five-shot trial of MMLU, while some gaps are big, such as Opus’ 84.9 percent on HumanEval over GPT-4’s 67.0 percent. But what that might mean, exactly, to you as a customer is difficult to say.

“As always, LLM benchmarks should be treated with a little bit of suspicion,” says AI researcher Simon Willison, who spoke with Ars about Claude 3. “How well a model performs on benchmarks doesn’t tell you much about how the model ‘feels’ to use. But this is still a huge deal—no other model has beaten GPT-4 on a range of widely used benchmarks like this.”

AI-generated articles prompt Wikipedia to downgrade CNET’s reliability rating

The hidden costs of AI —

Futurism report highlights the reputational cost of publishing AI-generated content.

The CNET logo on a smartphone screen.

Wikipedia has downgraded tech website CNET’s reliability rating following extensive discussions among its editors regarding the impact of AI-generated content on the site’s trustworthiness, as noted in a detailed report from Futurism. The decision reflects concerns over the reliability of articles found on the tech news outlet after it began publishing AI-generated stories in 2022.

Around November 2022, CNET began publishing articles written by an AI model under the byline “CNET Money Staff.” In January 2023, Futurism brought widespread attention to the issue and discovered that the articles were full of plagiarism and mistakes. (Around that time, we covered plans to do similar automated publishing at BuzzFeed.) After the revelation, CNET management paused the experiment, but the reputational damage had already been done.

Wikipedia maintains a page called “Reliable sources/Perennial sources” that includes a chart featuring news publications and their reliability ratings as viewed from Wikipedia’s perspective. Shortly after the CNET news broke in January 2023, Wikipedia editors began a discussion thread on the Reliable Sources project page about the publication.

“CNET, usually regarded as an ordinary tech RS [reliable source], has started experimentally running AI-generated articles, which are riddled with errors,” wrote a Wikipedia editor named David Gerard. “So far the experiment is not going down well, as it shouldn’t. I haven’t found any yet, but any of these articles that make it into a Wikipedia article need to be removed.”

After other editors agreed in the discussion, they began the process of downgrading CNET’s reliability rating.

As of this writing, Wikipedia’s Perennial Sources list features three entries for CNET, one for each of three time periods: (1) before October 2020, when Wikipedia considered CNET a “generally reliable” source; (2) between October 2020 and October 2022, for which Wikipedia notes that the site was acquired by Red Ventures in October 2020, “leading to a deterioration in editorial standards,” and says there is no consensus about reliability; and (3) from November 2022 to the present, for which Wikipedia considers CNET “generally unreliable” because the site began using an AI tool “to rapidly generate articles riddled with factual inaccuracies and affiliate links.”

A screenshot of a chart featuring CNET’s reliability ratings, as found on Wikipedia’s “Perennial Sources” page.

Futurism reports that the issue with CNET’s AI-generated content also sparked a broader debate within the Wikipedia community about the reliability of sources owned by Red Ventures, such as Bankrate and CreditCards.com. Those sites published AI-generated content around the same period of time as CNET. The editors also criticized Red Ventures for not being forthcoming about where and how AI was being implemented, further eroding trust in the company’s publications. This lack of transparency was a key factor in the decision to downgrade CNET’s reliability rating.

In response to the downgrade and the controversies surrounding AI-generated content, CNET issued a statement that claims that the site maintains high editorial standards.

“CNET is the world’s largest provider of unbiased tech-focused news and advice,” a CNET spokesperson said in a statement to Futurism. “We have been trusted for nearly 30 years because of our rigorous editorial and product review standards. It is important to clarify that CNET is not actively using AI to create new content. While we have no specific plans to restart, any future initiatives would follow our public AI policy.”

This article was updated on March 1, 2024 at 9:30 am to reflect fixes in the date ranges for CNET on the Perennial Sources page.

Microsoft partners with OpenAI-rival Mistral for AI models, drawing EU scrutiny

The European Approach —

15M euro investment comes as Microsoft hosts Mistral’s GPT-4 alternatives on Azure.

Velib bicycles are parked in front of the headquarters of US computer company Microsoft on January 25, 2023, in Issy-les-Moulineaux, France.

On Monday, Microsoft announced plans to offer AI models from French firm Mistral through its Azure cloud computing platform, alongside a 15 million euro non-equity investment in the company, which is often seen as a European rival to OpenAI. Since then, the investment deal has faced scrutiny from European Union regulators.

Microsoft’s deal with Mistral, known for its large language models akin to OpenAI’s GPT-4 (which powers the subscription versions of ChatGPT), marks a notable expansion of its AI portfolio at a time when its well-known investment in California-based OpenAI has raised regulatory eyebrows. The new deal with Mistral drew particular attention from regulators because Microsoft’s investment could convert into equity (partial ownership of Mistral as a company) during Mistral’s next funding round.

The development has intensified ongoing investigations into Microsoft’s practices, particularly related to the tech giant’s dominance in the cloud computing sector. According to Reuters, EU lawmakers have voiced concerns that Mistral’s recent lobbying for looser AI regulations might have been influenced by its relationship with Microsoft. These apprehensions are compounded by the French government’s denial of prior knowledge of the deal, despite earlier lobbying for more lenient AI laws in Europe. The situation underscores the complex interplay between national interests, corporate influence, and regulatory oversight in the rapidly evolving AI landscape.

Avoiding American influence

The EU’s reaction to the Microsoft-Mistral deal reflects broader tensions over the role of Big Tech companies in shaping the future of AI and their potential to stifle competition. Calls for a thorough investigation into Microsoft and Mistral’s partnership have been echoed across the continent, according to Reuters, with some lawmakers accusing the firms of attempting to undermine European legislative efforts aimed at ensuring a fair and competitive digital market.

The controversy also touches on the broader debate about “European champions” in the tech industry. France, along with Germany and Italy, had advocated for regulatory exemptions to protect European startups. However, the Microsoft-Mistral deal has led some, like MEP Kim van Sparrentak, to question the motives behind these exemptions, suggesting they might have inadvertently favored American Big Tech interests.

“That story seems to have been a front for American-influenced Big Tech lobby,” said Sparrentak, as quoted by Reuters. Sparrentak has been a key architect of the EU’s AI Act, which has not yet been passed. “The Act almost collapsed under the guise of no rules for ‘European champions,’ and now look. European regulators have been played.”

MEP Alexandra Geese also expressed concerns over the concentration of money and power resulting from such partnerships, calling for an investigation. Max von Thun, Europe director at the Open Markets Institute, emphasized the urgency of investigating the partnership, criticizing Mistral’s reported attempts to influence the AI Act.

Also on Monday, amid the partnership news, Mistral announced Mistral Large, a new large language model (LLM) that Mistral says “ranks directly after GPT-4 based on standard benchmarks.” Mistral has previously released several open-weights AI models that have made news for their capabilities, but Mistral Large will be a closed model only available to customers through an API.

Reddit sells training data to unnamed AI company ahead of IPO

Everything has a price —

If you’ve posted on Reddit, you’re likely feeding the future of AI.

On Friday, Bloomberg reported that Reddit has signed a contract allowing an unnamed AI company to train its models on the site’s content, according to people familiar with the matter. The move comes as the social media platform nears the introduction of its initial public offering (IPO), which could happen as soon as next month.

Reddit initially revealed the deal, which is reported to be worth $60 million a year, earlier in 2024 to potential investors of an anticipated IPO, Bloomberg said. The Bloomberg source speculates that the contract could serve as a model for future agreements with other AI companies.

After an era in which AI companies used training data without expressly seeking rightsholder permission, some tech firms have more recently begun signing deals to license content for training AI models similar to GPT-4 (which powers the paid version of ChatGPT). In December, for example, OpenAI signed an agreement with German publisher Axel Springer (publisher of Politico and Business Insider) for access to its articles. OpenAI has previously struck deals with other organizations, including the Associated Press, and is reportedly in licensing talks with CNN, Fox, and Time, among others.

In April 2023, Reddit co-founder and CEO Steve Huffman told The New York Times that the company planned to charge AI firms for access to its almost two decades’ worth of human-generated content.

If the reported $60 million/year deal goes through, it’s quite possible that if you’ve ever posted on Reddit, some of that material may be used to train the next generation of AI models that create text, still pictures, and video. Even without the deal, experts have discovered in the past that Reddit has been a key source of training data for large language models and AI image generators.

While we don’t know if OpenAI is the company that signed the deal with Reddit, Bloomberg speculates that Reddit’s ability to tap into AI hype for additional revenue may boost the value of its IPO, which might be worth $5 billion. Despite drama last year, Bloomberg states that Reddit pulled in more than $800 million in revenue in 2023, growing about 20 percent over its 2022 numbers.

Advance Publications, which owns Ars Technica parent Condé Nast, is the largest shareholder of Reddit.

OpenAI experiments with giving ChatGPT a long-term conversation memory

“I remember…the Alamo” —

AI chatbot “memory” will recall facts from previous conversations when enabled.

A pixelated green illustration of a pair of hands looking through file records.

When ChatGPT looks things up, a pair of green pixelated hands look through paper records, much like this. Just kidding.

Benj Edwards / Getty Images

On Tuesday, OpenAI announced that it is experimenting with adding a form of long-term memory to ChatGPT that will allow it to remember details between conversations. You can ask ChatGPT to remember something, see what it remembers, and ask it to forget. Currently, it’s only available to a small number of ChatGPT users for testing.

So far, large language models have typically used two types of memory: one baked into the AI model during the training process (before deployment) and an in-context memory (the conversation history) that persists for the duration of your session. Usually, ChatGPT forgets what you have told it during a conversation once you start a new session.

Various projects have experimented with giving LLMs a memory that persists beyond a context window. (The context window is the hard limit on the number of tokens the LLM can process at once.) The techniques include dynamically managing context history, compressing previous history through summarization, links to vector databases that store information externally, or simply periodically injecting information into a system prompt (the instructions ChatGPT receives at the beginning of every chat).
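The simplest of the techniques above, injecting stored facts into the system prompt, can be sketched in a few lines. To be clear, the `MemoryStore` class and prompt format below are a hypothetical illustration of the general approach, not OpenAI’s actual implementation:

```python
class MemoryStore:
    """Toy persistent memory: saved facts are injected into the
    system prompt at the start of every new conversation."""

    def __init__(self) -> None:
        self.memories: list[str] = []

    def remember(self, fact: str) -> None:
        """Save a fact across sessions."""
        self.memories.append(fact)

    def forget(self, fact: str) -> None:
        """Delete a saved fact (the user-facing 'forget' control)."""
        self.memories.remove(fact)

    def build_system_prompt(self, base: str) -> str:
        """Prepend saved facts to the base instructions for a new chat."""
        if not self.memories:
            return base
        facts = "\n".join(f"- {m}" for m in self.memories)
        return f"{base}\n\nKnown facts about the user:\n{facts}"


store = MemoryStore()
store.remember("Prefers meeting notes as bullet points")
prompt = store.build_system_prompt("You are a helpful assistant.")
```

Because the facts ride along in the prompt rather than in the model’s weights, deleting a memory item immediately removes it from future conversations, which matches the “Manage Memory” behavior OpenAI describes below.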

A screenshot of ChatGPT memory controls provided by OpenAI.

OpenAI

OpenAI hasn’t explained which technique it uses here, but the implementation reminds us of Custom Instructions, a feature OpenAI introduced in July 2023 that lets users add custom additions to the ChatGPT system prompt to change its behavior.

Possible applications for the memory feature provided by OpenAI include explaining how you prefer your meeting notes to be formatted, telling it you run a coffee shop and having ChatGPT assume that’s what you’re talking about, keeping information about your toddler who loves jellyfish so it can generate relevant graphics, and remembering preferences for kindergarten lesson plan designs.

Also, OpenAI says that memories may help ChatGPT Enterprise and Team subscribers work together better since shared team memories could remember specific document formatting preferences or which programming frameworks your team uses. And OpenAI plans to bring memories to GPTs soon, with each GPT having its own siloed memory capabilities.

Memory control

Obviously, any tendency to remember information brings privacy implications. You should already know that sending information to OpenAI for processing on remote servers introduces the possibility of privacy leaks and that OpenAI trains AI models on user-provided information by default unless conversation history is disabled or you’re using an Enterprise or Team account.

Along those lines, OpenAI says that your saved memories are also subject to OpenAI training use unless you meet the criteria listed above. Still, the memory feature can be turned off completely. Additionally, the company says, “We’re taking steps to assess and mitigate biases, and steer ChatGPT away from proactively remembering sensitive information, like your health details—unless you explicitly ask it to.”

Users will also be able to control what ChatGPT remembers using a “Manage Memory” interface that lists memory items. “ChatGPT’s memories evolve with your interactions and aren’t linked to specific conversations,” OpenAI says. “Deleting a chat doesn’t erase its memories; you must delete the memory itself.”

ChatGPT’s memory features are not currently available to every ChatGPT account, so we have not experimented with them yet. Access during this testing period appears to be random among ChatGPT (free and paid) accounts for now. “We are rolling out to a small portion of ChatGPT free and Plus users this week to learn how useful it is,” OpenAI writes. “We will share plans for broader roll out soon.”

The Super Bowl’s best and wackiest AI commercials

Superb Owl News —

It’s nothing like “crypto bowl” in 2022, but AI made a notable splash during the big game.

A still image from BodyArmor’s 2024 “Field of Fake” Super Bowl commercial.

BodyArmor

Heavily hyped tech products have a history of appearing in Super Bowl commercials during football’s biggest game—including the Apple Macintosh in 1984, dot-com companies in 2000, and cryptocurrency firms in 2022. In 2024, the hot tech in town is artificial intelligence, and several companies showed AI-related ads at Super Bowl LVIII. Here’s a rundown of notable appearances that range from serious to wacky.

Microsoft Copilot

Microsoft Game Day Commercial | Copilot: Your everyday AI companion.

It’s been a year since Microsoft launched the AI assistant Microsoft Copilot (as “Bing Chat”), and Microsoft is leaning heavily into its AI-assistant technology, which is powered by large language models from OpenAI. In Copilot’s first-ever Super Bowl commercial, we see scenes of various people with defiant text overlaid on the screen: “They say I will never open my own business or get my degree. They say I will never make my movie or build something. They say I’m too old to learn something new. Too young to change the world. But I say watch me.”

Then the commercial shows Copilot creating solutions to some of these problems, with prompts like, “Generate storyboard images for the dragon scene in my script,” “Write code for my 3d open world game,” “Quiz me in organic chemistry,” and “Design a sign for my classic truck repair garage Mike’s.”

Of course, since generative AI is an unfinished technology, many of these solutions are more aspirational than practical at the moment. On Bluesky, writer Ed Zitron put Microsoft’s truck repair logo to the test and saw results that weren’t nearly as polished as those seen in the commercial. On X, others have criticized and poked fun at the “3d open world game” generation prompt, which is a complex task that would take far more than a single, simple prompt to produce useful code.

Google Pixel 8 “Guided Frame” feature

Javier in Frame | Google Pixel SB Commercial 2024.

Instead of focusing on generative aspects of AI, Google’s commercial showed off a feature called “Guided Frame” on the Pixel 8 phone that uses machine vision technology and a computer voice to help people with blindness or low vision to take photos by centering the frame on a face or multiple faces. Guided Frame debuted in 2022 in conjunction with the Google Pixel 7.

The commercial tells the story of a person named Javier, who says, “For many people with blindness or low vision, there hasn’t always been an easy way to capture daily life.” We see a simulated blurry first-person view of Javier holding a smartphone and hear a computer-synthesized voice describing what the AI model sees, directing the person to center on a face to snap various photos and selfies.

Considering the controversies that generative AI currently generates (pun intended), it’s refreshing to see a positive application of AI technology used as an accessibility feature. Relatedly, an app called Be My Eyes (powered by OpenAI’s GPT-4V) also aims to help low-vision people interact with the world.

Despicable Me 4

Despicable Me 4 – Minion Intelligence (Big Game Spot).

So far, we’ve covered a couple of attempts to show AI-powered products as positive features. Elsewhere in Super Bowl ads, companies weren’t as generous about the technology. In an ad for the film Despicable Me 4, we see two Minions creating a series of terribly disfigured AI-generated still images reminiscent of Stable Diffusion 1.4 from 2022. There are three-legged people doing yoga, a painting of Steve Carell and Will Ferrell as Elizabethan gentlemen, a handshake with too many fingers, people eating spaghetti in a weird way, and a pair of people riding dachshunds in a race.

The images are paired with an earnest voiceover that says, “Artificial intelligence is changing the way we see the world, showing us what we never thought possible, transforming the way we do business, and bringing family and friends closer together. With artificial intelligence, the future is in good hands.” When the voiceover ends, the camera pans out to show hundreds of Minions generating similarly twisted images on computers.

Speaking of image synthesis at the Super Bowl, people mistook a Christian commercial created by He Gets Us, LLC as having been AI-generated, likely due to its gaudy technicolor visuals. With the benefit of a YouTube replay and the ability to look at details, the “He washed feet” commercial doesn’t appear AI-generated to us, but it goes to show how the concept of image synthesis has begun to cast doubt on human-made creations.

Report: Sam Altman seeking trillions for AI chip fabrication from UAE, others

chips ahoy —

WSJ: Audacious $5-$7 trillion investment would aim to expand global AI chip supply.

OpenAI Chief Executive Officer Sam Altman walks on the House side of the US Capitol on January 11, 2024, in Washington, DC. (Photo by Kent Nishimura/Getty Images)

Getty Images

On Thursday, The Wall Street Journal reported that OpenAI CEO Sam Altman is in talks with investors to raise as much as $5 trillion to $7 trillion for AI chip manufacturing, according to people familiar with the matter. The funding seeks to address the scarcity of graphics processing units (GPUs) crucial for training and running large language models like those that power ChatGPT, Microsoft Copilot, and Google Gemini.

The high dollar amount reflects the huge amount of capital necessary to spin up new semiconductor manufacturing capability. “As part of the talks, Altman is pitching a partnership between OpenAI, various investors, chip makers and power providers, which together would put up money to build chip foundries that would then be run by existing chip makers,” writes the Wall Street Journal in its report. “OpenAI would agree to be a significant customer of the new factories.”

To hit these ambitious targets—which are larger than the entire semiconductor industry’s current global sales of $527 billion—Altman has reportedly met with a range of potential investors worldwide, including sovereign wealth funds and government entities (notably the United Arab Emirates), SoftBank CEO Masayoshi Son, and representatives from Taiwan Semiconductor Manufacturing Co. (TSMC).

TSMC is the world’s largest dedicated independent semiconductor foundry. It’s a critical linchpin that companies such as Nvidia, Apple, Intel, and AMD rely on to fabricate SoCs, CPUs, and GPUs for various applications.

Altman reportedly seeks to expand the global capacity for semiconductor manufacturing significantly, funding the infrastructure necessary to support the growing demand for GPUs and other AI-specific chips. GPUs are excellent at parallel computation, which makes them ideal for running AI models that heavily rely on matrix multiplication to work. However, the technology sector currently faces a significant shortage of these important components, constraining the potential for AI advancements and applications.
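The reliance on matrix multiplication is easy to see in miniature: a transformer feed-forward layer is essentially two matrix products, and every output element is an independent dot product, which is what maps so well onto a GPU’s thousands of cores. Here is a toy CPU version using NumPy, with arbitrary illustrative dimensions rather than any real model’s sizes:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy transformer feed-forward layer: two matmuls with an activation
# in between. d_model, d_ff, and seq_len are arbitrary illustrative sizes.
d_model, d_ff, seq_len = 64, 256, 8
x = rng.standard_normal((seq_len, d_model))   # input token embeddings
w1 = rng.standard_normal((d_model, d_ff))     # first weight matrix
w2 = rng.standard_normal((d_ff, d_model))     # second weight matrix

hidden = np.maximum(x @ w1, 0.0)  # (seq_len, d_ff): matmul then ReLU
out = hidden @ w2                 # (seq_len, d_model): second matmul

# Each element of `out` is an independent dot product, so hardware with
# many cores can compute them all at once instead of looping one by one.
```

Running a large model means repeating this pattern across dozens of layers and billions of weights, which is why GPU supply is the bottleneck the reported funding aims to relieve.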

In particular, the UAE’s involvement, led by Sheikh Tahnoun bin Zayed al Nahyan, a key security official and chair of numerous Abu Dhabi sovereign wealth vehicles, reflects global interest in AI’s potential and the strategic importance of semiconductor manufacturing. However, the prospect of substantial UAE investment in a key tech industry raises potential geopolitical concerns, particularly regarding the US government’s strategic priorities in semiconductor production and AI development.

The US has been cautious about allowing foreign control over the supply of microchips, given their importance to the digital economy and national security. Reflecting this, the Biden administration has undertaken efforts to bolster domestic chip manufacturing through subsidies and regulatory scrutiny of foreign investments in important technologies.

To put the $5 trillion to $7 trillion estimate in perspective, the White House just today announced a $5 billion investment in R&D to advance US-made semiconductor technologies. TSMC has already sunk $40 billion—one of the largest foreign investments in US history—into a US chip plant in Arizona. As of now, it’s unclear whether Altman has secured any commitments toward his fundraising goal.

Updated on February 9, 2024 at 8:45 PM Eastern with a quote from the WSJ that clarifies the proposed relationship between OpenAI and partners in the talks.

Report: Sam Altman seeking trillions for AI chip fabrication from UAE, others


ChatGPT’s new @-mentions bring multiple personalities into your AI convo

team of rivals —

Bring different AI roles into the same chatbot conversation history.

With so many choices, selecting the perfect GPT can be confusing.

On Tuesday, OpenAI announced a new feature in ChatGPT that allows users to pull custom personalities called “GPTs” into any ChatGPT conversation with the @ symbol. It enables a level of quasi-teamwork among expert roles within ChatGPT that was previously impractical, bringing collaboration with a team of AI agents on OpenAI’s platform one step closer to reality.

“You can now bring GPTs into any conversation in ChatGPT – simply type @ and select the GPT,” wrote OpenAI on the social media network X. “This allows you to add relevant GPTs with the full context of the conversation.”

OpenAI introduced GPTs in November as a way to create custom personalities or roles for ChatGPT to play. For example, users can build their own GPTs to focus on certain topics or certain skills. Paid ChatGPT subscribers can also freely download a host of GPTs developed by other ChatGPT users through the GPT Store.

Previously, if you wanted to share information between GPT profiles, you had to copy the text, select a new chat with the GPT, paste it, and explain the context of what the information means or what you want to do with it. Now, ChatGPT users can stay in the default ChatGPT window and bring in GPTs as needed without losing the history of the conversation.

For example, we created a “Wellness Guide” GPT that is crafted as an expert in human health conditions (of course, this being ChatGPT, always consult a human doctor if you’re having medical problems), and we created a “Canine Health Advisor” for dog-related health questions.

A screenshot of ChatGPT where we @-mentioned a human wellness advisor, then a dog advisor in the same conversation history.

Benj Edwards

We started in a default ChatGPT chat, hit the @ symbol, then typed the first few letters of “Wellness” and selected it from a list. It filled out the rest. We asked a question about food poisoning in humans, and then we switched to the canine advisor in the same way with an @ symbol and asked about the dog.

Using this feature, you could alternatively consult, say, an “ad copywriter” GPT and an “editor” GPT—ask the copywriter to write some text, then rope in the editor GPT to check it, looking at it from a different angle. Different system prompts (the instructions that define a GPT’s personality) make for significant behavior differences.

We also tried swapping between GPT profiles that write software and others designed to consult on historical tech subjects. Interestingly, ChatGPT does not differentiate between GPTs as different personalities as you switch between them. It will still say, “I did this earlier” when a different GPT is discussing a previous GPT’s output in the same conversation history. From its point of view, it’s just ChatGPT and not multiple agents.

From our vantage point, this feature seems to represent baby steps toward a future where GPTs, as independent agents, could work together as a team to fulfill more complex tasks directed by the user. Similar experiments have been done outside of OpenAI in the past (using API access), but OpenAI has so far resisted a more agentic model for ChatGPT. As we’ve seen (first with GPTs and now with this), OpenAI seems to be slowly angling toward that goal itself, but only time will tell if or when we see true agentic teamwork in a shipping service.



OpenAI and Common Sense Media partner to protect teens from AI harms and misuse

Adventures in chatbusting —

Site gave ChatGPT 3 stars and 48% privacy score: “Best used for creativity, not facts.”

Boy in Living Room Wearing Robot Mask

On Monday, OpenAI announced a partnership with the nonprofit Common Sense Media to create AI guidelines and educational materials targeted at parents, educators, and teens. It includes the curation of family-friendly GPTs in OpenAI’s GPT store. The collaboration aims to address concerns about the impacts of AI on children and teenagers.

Known for its reviews of films and TV shows aimed at parents seeking appropriate media for their kids to watch, Common Sense Media recently branched out into AI and has been reviewing AI assistants on its site.

“AI isn’t going anywhere, so it’s important that we help kids understand how to use it responsibly,” Common Sense Media wrote on X. “That’s why we’ve partnered with @OpenAI to help teens and families safely harness the potential of AI.”

OpenAI CEO Sam Altman and Common Sense Media CEO James Steyer announced the partnership onstage in San Francisco at the Common Sense Summit for America’s Kids and Families, an event that was well-covered by media members on the social media site X.

For his part, Altman offered a canned statement in the press release, saying, “AI offers incredible benefits for families and teens, and our partnership with Common Sense will further strengthen our safety work, ensuring that families and teens can use our tools with confidence.”

The announcement feels slightly non-specific in the official news release, with Steyer offering, “Our guides and curation will be designed to educate families and educators about safe, responsible use of ChatGPT, so that we can collectively avoid any unintended consequences of this emerging technology.”

The partnership seems aimed mostly at lending a patina of family-friendliness to OpenAI’s GPT Store, with the most concrete detail being the aforementioned fact that Common Sense Media will help with the “curation of family-friendly GPTs in the GPT Store based on Common Sense ratings and standards.”

Common Sense AI reviews

As mentioned above, Common Sense Media began reviewing AI assistants on its site late last year. This puts Common Sense Media in an interesting position with potential conflicts of interest regarding the new partnership with OpenAI. However, it doesn’t seem to be offering any favoritism to OpenAI so far.

For example, Common Sense Media’s review of ChatGPT calls the AI assistant “A powerful, at times risky chatbot for people 13+ that is best used for creativity, not facts.” It labels ChatGPT as being suitable for ages 13 and up (which is in OpenAI’s Terms of Service) and gives the OpenAI assistant three out of five stars. ChatGPT also scores a 48 percent privacy rating (which is oddly shown as 55 percent on another page that goes into privacy details). The review we cited was last updated on October 13, 2023, as of this writing.

For reference, Google Bard gets a three-star overall rating and a 75 percent privacy rating in its Common Sense Media review. Stable Diffusion, the image synthesis model, nets a one-star rating with the description, “Powerful image generator can unleash creativity, but is wildly unsafe and perpetuates harm.” OpenAI’s DALL-E gets two stars and a 48 percent privacy rating.

The information that Common Sense Media includes about each AI model appears relatively accurate and detailed (and the organization cited an Ars Technica article as a reference in one explanation), so the reviews feel fair, even in the face of the OpenAI partnership. Given the low scores, it seems that most AI models aren’t off to a great start, but that may change. It’s still early days in generative AI.



OpenAI updates ChatGPT-4 model with potential fix for AI “laziness” problem

Break’s over —

Also, new GPT-3.5 Turbo model, lower API prices, and other model updates.

A lazy robot (a man with a box on his head) sits on the floor beside a couch.

On Thursday, OpenAI announced updates to the AI models that power its ChatGPT assistant. Amid less noteworthy updates, OpenAI tucked in a mention of a potential fix to a widely reported “laziness” problem seen in GPT-4 Turbo since its release in November. The company also announced a new GPT-3.5 Turbo model (with lower pricing), a new embedding model, an updated moderation model, and a new way to manage API usage.

“Today, we are releasing an updated GPT-4 Turbo preview model, gpt-4-0125-preview. This model completes tasks like code generation more thoroughly than the previous preview model and is intended to reduce cases of ‘laziness’ where the model doesn’t complete a task,” writes OpenAI in its blog post.

Since the launch of GPT-4 Turbo, a large number of ChatGPT users have reported that the GPT-4 version of the AI assistant has been declining to complete tasks (especially coding tasks) with the same exhaustive depth as it did in earlier versions of GPT-4. We’ve seen this behavior ourselves while experimenting with ChatGPT over time.

OpenAI has never offered an official explanation for this change in behavior, but OpenAI employees have previously acknowledged on social media that the problem is real, and the ChatGPT X account wrote in December, “We’ve heard all your feedback about GPT4 getting lazier! we haven’t updated the model since Nov 11th, and this certainly isn’t intentional. model behavior can be unpredictable, and we’re looking into fixing it.”

We reached out to OpenAI asking if it could provide an official explanation for the laziness issue but did not receive a response by press time.

New GPT-3.5 Turbo, other updates

Elsewhere in OpenAI’s blog update, the company announced a new version of GPT-3.5 Turbo (gpt-3.5-turbo-0125), which it says will offer “various improvements including higher accuracy at responding in requested formats and a fix for a bug which caused a text encoding issue for non-English language function calls.”

And the cost of GPT-3.5 Turbo through OpenAI’s API will decrease for the third time this year “to help our customers scale.” New input token prices are 50 percent less, at $0.0005 per 1,000 input tokens, and output prices are 25 percent less, at $0.0015 per 1,000 output tokens.
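The per-token arithmetic is easy to run yourself. Here is a minimal sketch (the function name is ours, not OpenAI's) that applies the new rates quoted above:

```python
# Estimate GPT-3.5 Turbo API cost in dollars at the new rates:
# $0.0005 per 1,000 input tokens, $0.0015 per 1,000 output tokens.
def gpt35_turbo_cost(input_tokens, output_tokens):
    return input_tokens / 1000 * 0.0005 + output_tokens / 1000 * 0.0015

# e.g. a bot that processes 1 million input tokens and generates
# 1 million output tokens would cost about $2:
print(round(gpt35_turbo_cost(1_000_000, 1_000_000), 2))  # 2.0
```

At the old rates ($0.0010 input, $0.0020 output), the same workload would have cost $3, so the savings compound quickly at scale.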

Lower token prices for GPT-3.5 Turbo will make operating third-party bots significantly less expensive, but the GPT-3.5 model is generally more likely to confabulate than GPT-4 Turbo. So we might see more scenarios like Quora’s bot telling people that eggs can melt (although the instance used a now-deprecated GPT-3 model called text-davinci-003). If GPT-4 Turbo API prices drop over time, some of those hallucination issues with third parties might eventually go away.

OpenAI also announced new embedding models, text-embedding-3-small and text-embedding-3-large, which convert content into numerical sequences, aiding in machine learning tasks like clustering and retrieval. And an updated moderation model, text-moderation-007, is part of the company’s API that “allows developers to identify potentially harmful text,” according to OpenAI.
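To illustrate how those numerical sequences aid retrieval (this is a generic sketch with made-up toy vectors, not OpenAI's API): once texts are embedded as vectors, a retrieval system ranks documents by how closely their vectors point in the same direction as the query's, commonly measured with cosine similarity.

```python
import math

# Generic retrieval sketch: rank documents by cosine similarity
# between their embedding vectors and the query's embedding.
def cosine_similarity(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

query = [0.9, 0.1, 0.0]  # toy "embedding" of a query
docs = {
    "doc_a": [0.8, 0.2, 0.1],  # points roughly the same way as the query
    "doc_b": [0.0, 0.1, 0.9],  # points a different way
}
best = max(docs, key=lambda name: cosine_similarity(query, docs[name]))
print(best)  # doc_a
```

Real embedding vectors from models like text-embedding-3-small have hundreds or thousands of dimensions rather than three, but the ranking principle is the same.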

Finally, OpenAI is rolling out improvements to its developer platform, introducing new tools for managing API keys and a new dashboard for tracking API usage. Developers can now assign permissions to API keys from the API keys page, helping to clamp down on misuse of API keys (if they get into the wrong hands) that can potentially cost developers lots of money. The API dashboard allows devs to “view usage on a per feature, team, product, or project level, simply by having separate API keys for each.”

As the media world seemingly swirls around the company with controversies and think pieces about the implications of its tech, releases like these show that the dev teams at OpenAI are still rolling along as usual with updates at a fairly regular pace. Despite the company almost completely falling apart late last year, it seems that, under the hood, it’s business as usual for OpenAI.
