AI


Google will apparently offer “AI Mode” right on its main search page

Google will soon take more steps to make AI a part of search, exposing more users to its Gemini agent, according to recent reports and app teardowns.

“AI Mode,” shown at the top left of the web results page and inside the Google app, will provide an interface similar to a Gemini AI chat, according to The Information.

This tracks with a finding from Android Authority earlier this month, which noted a dedicated “AI mode” button inside an early beta of the Google app. The shortcut also appeared on Google’s Android search widget, and a conversation history button was added to the app. Going even deeper into the app, 9to5Google found references to “aim” (AI mode) and “ai_mode,” which suggest a dedicated tab in the Google app, with buttons for speaking to an AI or sending it pictures.

Google already promotes Gemini with links below its search homepage. (“5 ways Gemini can help during the Holidays” is currently showing for me.) Search results on Google can also contain an “AI Overview,” which launched with some “use glue for pizza sauce” notoriety. People averse to AI answers can avoid them with URL parameters and proxy sites (or by sticking to the “web” tab). Gemini has also been prominently added to other Google products, like Pixel phones, Gmail, and Drive/Workspace. And the search giant has been testing the ability to attach files to a web search for analysis.


Not to be outdone by OpenAI, Google releases its own “reasoning” AI model

Google DeepMind’s chief scientist, Jeff Dean, says that the model receives extra computing power, writing on X, “we see promising results when we increase inference time computation!” The model works by pausing to consider multiple related prompts before providing what it determines to be the most accurate answer.
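
Google hasn’t detailed how that extra inference-time computation works under the hood, but a toy sketch can show why spending more compute per question helps at all. The example below is purely illustrative, with a made-up “solver” standing in for model samples; it uses simple majority voting over several candidate answers, one common form of inference-time scaling, and is not a description of Gemini’s internals.

```python
# Toy illustration of "spend more compute at inference time" via majority voting.
# This is NOT how Gemini's reasoning mode works internally; it only shows why
# sampling several candidate answers and keeping the most common one can beat
# a single sample.
import random
from collections import Counter

random.seed(0)

def noisy_solver() -> int:
    """Stand-in for one model sample: right answer 60% of the time, else a near miss."""
    return 42 if random.random() < 0.6 else random.choice([41, 43, 44])

def answer(num_samples: int) -> int:
    """Draw several candidate answers and return the most common one."""
    votes = Counter(noisy_solver() for _ in range(num_samples))
    return votes.most_common(1)[0][0]

def accuracy(num_samples: int, trials: int = 2000) -> float:
    return sum(answer(num_samples) == 42 for _ in range(trials)) / trials

for n in (1, 5, 15):
    print(f"{n:>2} samples per question -> accuracy ~{accuracy(n):.2f}")
```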

Since OpenAI’s jump into the “reasoning” field in September with o1-preview and o1-mini, several companies have been rushing to achieve feature parity with their own models. For example, DeepSeek launched DeepSeek-R1-Lite-Preview in November, while Alibaba’s Qwen team released its own “reasoning” model, QwQ, later that month.

Some claim that reasoning models can help solve complex mathematical or academic problems, but these models might not be for everybody. Although they perform well on some benchmarks, questions remain about their actual usefulness and accuracy. The high computing costs needed to run reasoning models have also created rumblings about their long-term viability; that high cost is why OpenAI’s ChatGPT Pro is priced at $200 a month, for example.

Still, it appears Google is serious about pursuing this particular AI technique. Logan Kilpatrick, a Google employee who works on AI Studio, called it “the first step in our reasoning journey” in a post on X.


New physics sim trains robots 430,000 times faster than reality

The AI-generated worlds reportedly include realistic physics, camera movements, and object behaviors, all from text commands. The system then creates physically accurate ray-traced videos and data that robots can use for training.

Examples of “4D dynamical and physical” worlds that Genesis created from text prompts.

This prompt-based system lets researchers create complex robot testing environments by typing natural language commands instead of programming them by hand. “Traditionally, simulators require a huge amount of manual effort from artists: 3D assets, textures, scene layouts, etc. But every component in the workflow can be automated,” wrote Fan.

Using its engine, Genesis can also generate character motion, interactive 3D scenes, facial animation, and more, which may allow for the creation of artistic assets for creative projects. It may also lead to more realistic AI-generated games and videos in the future, since it constructs a simulated world in data rather than operating on the statistical appearance of pixels, as a video synthesis diffusion model does.

Examples of character motion generation from Genesis, using a prompt that includes, “A miniature Wukong holding a stick in his hand sprints across a table surface for 3 seconds, then jumps into the air, and swings his right arm downward during landing.”

While the generative system isn’t yet part of the currently available code on GitHub, the team plans to release it in the future.

Training tomorrow’s robots today (using Python)

Genesis remains under active development on GitHub, where the team accepts community contributions.

The platform stands out from other 3D world simulators for robotic training by using Python for both its user interface and core physics engine. Other engines use C++ or CUDA for their underlying calculations while wrapping them in Python APIs. Genesis takes a Python-first approach.

Notably, the non-proprietary nature of the Genesis platform makes high-speed robot training simulations available to any researcher for free through simple Python commands that work on regular computers with off-the-shelf hardware.
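
For a sense of what those “simple Python commands” look like, here is a minimal sketch of a Genesis simulation loop. It follows the project’s published quickstart as best we can reconstruct it; treat the exact module, morph, and asset-path names as assumptions and check the repo’s documentation before relying on them.

```python
# Minimal Genesis sketch, based on the project's quickstart; names and the
# bundled MJCF path are assumptions to verify against the GitHub docs.
import genesis as gs

gs.init(backend=gs.cpu)  # gs.gpu (if available) is where the big speedups come from

scene = gs.Scene(show_viewer=False)
scene.add_entity(gs.morphs.Plane())  # ground plane
robot = scene.add_entity(
    gs.morphs.MJCF(file="xml/franka_emika_panda/panda.xml")  # assumed bundled arm model
)
scene.build()

for _ in range(1000):
    scene.step()  # advance the physics simulation by one step
```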

Running robot simulations previously required complex programming and specialized hardware, Fan says in his post announcing Genesis, and that shouldn’t be the case. “Robotics should be a moonshot initiative owned by all of humanity,” he wrote.


Call ChatGPT from any phone with OpenAI’s new 1-800 voice service

On Wednesday, OpenAI launched a 1-800-CHATGPT (1-800-242-8478) telephone number that anyone in the US can call to talk to ChatGPT via voice chat for up to 15 minutes for free. The company also says that people outside the US can send text messages to the same number for free using WhatsApp.

Upon calling, users hear a voice say, “Hello again, it’s ChatGPT, an AI assistant. Our conversation may be reviewed for safety. How can I help you?” Callers can ask ChatGPT anything they would normally ask the AI assistant and have a live, interactive conversation.

During a livestream demo of “Calling with ChatGPT” during Day 10 of “12 Days of OpenAI,” OpenAI employees demonstrated several examples of the telephone-based voice chat in action, asking ChatGPT to identify a distinctive house in California and for help in translating a message into Spanish for a friend. For fun, they showed calls from an iPhone, a flip phone, and a vintage rotary phone.

OpenAI developers demonstrate calling 1-800-CHATGPT during a livestream on December 18, 2024. Credit: OpenAI

OpenAI says the new features came out of an internal “hack week” project that a team built just a few weeks ago. The company says its goal is to make ChatGPT more accessible to people who don’t have a smartphone or a computer handy.

During the livestream, an OpenAI employee mentioned that 15 minutes of voice chatting are free and that you can download the app and create an account to get more. While the audio chat version seems to be running a full version of GPT-4o on the back end, a developer during the livestream said the free WhatsApp text mode is using GPT-4o mini.


OpenAI’s API users get full access to the new o1 model

Updates for real-time interaction and fine-tuning

Developers who make use of OpenAI’s real-time voice APIs will also get full access to the WebRTC support announced today. This comes on top of existing WebSocket support and, according to the company, can simplify the creation of OpenAI audio interfaces for third-party applications from roughly 250 lines of code to about a dozen.

OpenAI says it will release simple WebRTC code that can be used on a plug-and-play basis in plenty of simple devices, from toy reindeer to smart glasses and cameras that want to make use of context-aware AI assistants. To help encourage those kinds of uses, OpenAI said it was reducing the cost of o1 audio tokens for API developers by 60 percent and the cost of 4o mini tokens by a full 90 percent.

Developers interested in fine-tuning their own AI models can also make use of a new method called “direct preference optimization.” In the current system of supervised fine-tuning, model makers have to provide examples of the exact input/output pairs they want, which are then used to refine the new model. With direct preference optimization, model makers can instead simply provide two separate responses and indicate that one is preferred over the other.

OpenAI says its fine-tuning process will then optimize to learn the difference between the preferred and non-preferred answers provided, automatically detecting changes in things like verbosity, formatting and style guidelines, or the helpfulness/creativity level of responses and factoring them into the new model.
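
As a rough sketch of what “provide two separate responses and indicate that one is preferred” might look like in practice, here is a hypothetical preference record written to a JSONL training file. The field names (“input,” “preferred_output,” “non_preferred_output”) are illustrative assumptions, not a schema confirmed from OpenAI’s documentation.

```python
# Hypothetical shape of one preference-tuning record; the field names are
# illustrative assumptions, not a confirmed OpenAI schema.
import json

record = {
    "input": {
        "messages": [
            {"role": "user", "content": "Summarize today's API announcements in two sentences."}
        ]
    },
    # The answer the model maker wants more of (concise and plain)...
    "preferred_output": [
        {"role": "assistant",
         "content": "The o1 model is now fully available through the API. "
                    "WebRTC support and cheaper audio tokens round out the release."}
    ],
    # ...and the answer to steer away from (rambling and over-formatted).
    "non_preferred_output": [
        {"role": "assistant",
         "content": "Great question! There were many exciting announcements today, "
                    "which I will now cover in several detailed sections..."}
    ],
}

# Preference pairs are typically collected one JSON object per line.
with open("preferences.jsonl", "a", encoding="utf-8") as f:
    f.write(json.dumps(record) + "\n")
```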

Programmers who write in Go or Java will also be able to use new SDKs for those languages to connect to the OpenAI API, OpenAI said.


Twirling body horror in gymnastics video exposes AI’s flaws


The slithy toves did gyre and gimble in the wabe

Nonsensical jabberwocky movements created by OpenAI’s Sora are typical for current AI-generated video, and here’s why.

A still image from an AI-generated video of an ever-morphing synthetic gymnast. Credit: OpenAI / Deedy

On Wednesday, a video from OpenAI’s newly launched Sora AI video generator went viral on social media, featuring a gymnast who sprouts extra limbs and briefly loses her head during what appears to be an Olympic-style floor routine.

As it turns out, the nonsensical synthesis errors in the video—what we like to call “jabberwockies”—hint at technical details about how AI video generators work and how they might get better in the future.

But before we dig into the details, let’s take a look at the video.

An AI-generated video of an impossible gymnast, created with OpenAI Sora.

In the video, we see a view of what looks like a floor gymnastics routine. The subject of the video flips and flails as new legs and arms rapidly and fluidly emerge and morph out of her twirling and transforming body. At one point, about 9 seconds in, she loses her head, and it reattaches to her body spontaneously.

“As cool as the new Sora is, gymnastics is still very much the Turing test for AI video,” wrote venture capitalist Deedy Das when he originally shared the video on X. The video inspired plenty of reaction jokes, such as this reply to a similar post on Bluesky: “hi, gymnastics expert here! this is not funny, gymnasts only do this when they’re in extreme distress.”

We reached out to Das, and he confirmed that he generated the video using Sora. He also provided the prompt, which was very long and split into four parts, generated by Anthropic’s Claude, using complex instructions like “The gymnast initiates from the back right corner, taking position with her right foot pointed behind in B-plus stance.”

“I’ve known for the last 6 months having played with text to video models that they struggle with complex physics movements like gymnastics,” Das told us in a conversation. “I had to try it [in Sora] because the character consistency seemed improved. Overall, it was an improvement because previously… the gymnast would just teleport away or change their outfit mid flip, but overall it still looks downright horrifying. We hoped AI video would learn physics by default, but that hasn’t happened yet!”

So what went wrong?

When examining how the video fails, you must first consider how Sora “knows” how to create anything that resembles a gymnastics routine. During the training phase, when the Sora model was created, OpenAI fed example videos of gymnastics routines (among many other types of videos) into a specialized neural network that associates the progression of images with text-based descriptions of them.

That type of training is a distinct phase that happens once before the model’s release. Later, when the finished model is running and you give a video-synthesis model like Sora a written prompt, it draws upon statistical associations between words and images to produce a predictive output. It’s continuously making next-frame predictions based on the last frame of the video. But Sora has another trick for attempting to preserve coherency over time. “By giving the model foresight of many frames at a time,” reads OpenAI’s Sora System Card, “we’ve solved a challenging problem of making sure a subject stays the same even when it goes out of view temporarily.”

A still image from a moment where the AI-generated gymnast loses her head. It soon reattaches to her body. Credit: OpenAI / Deedy

Maybe not quite solved yet. In this case, rapidly moving limbs prove a particular challenge when attempting to predict the next frame properly. The result is an incoherent amalgam of gymnastics footage that shows the same gymnast performing running flips and spins. Sora doesn’t know the correct order in which to assemble those movements because it’s drawing on statistical averages of wildly different body motions in its relatively limited training data of gymnastics videos, which also likely did not include limb-level precision in its descriptive metadata.

Sora doesn’t know anything about physics or how the human body should work, either. It’s drawing upon statistical associations between pixels in the videos in its training dataset to predict the next frame, with a little bit of look-ahead to keep things more consistent.
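
A deliberately crude analogy can show why that look-ahead matters (this is nothing like Sora’s actual architecture, and the numbers are invented): when a subject briefly leaves view, a predictor that only sees the last frame has to freeze or reinvent it, while one that sees frames on both sides of the gap can keep its trajectory consistent.

```python
# Toy numpy analogy, not Sora's architecture: a 1-D "subject position" across
# 10 frames, where NaN marks two frames in which the subject is hidden.
import numpy as np

observed = np.array([0.0, 1.0, 2.0, 3.0, np.nan, np.nan, 6.0, 7.0, 8.0, 9.0])

# Last-frame-only prediction: when the subject vanishes, repeat the most recent
# known frame, producing a freeze and then a jump when it reappears.
frame_by_frame = observed.copy()
for i in range(1, len(frame_by_frame)):
    if np.isnan(frame_by_frame[i]):
        frame_by_frame[i] = frame_by_frame[i - 1]

# "Foresight" prediction: with frames on both sides of the gap visible at once,
# interpolate through it, keeping the subject's trajectory consistent.
idx = np.arange(len(observed))
known = ~np.isnan(observed)
windowed = observed.copy()
windowed[~known] = np.interp(idx[~known], idx[known], observed[known])

print(frame_by_frame)  # [0. 1. 2. 3. 3. 3. 6. 7. 8. 9.] -> jump at frame 6
print(windowed)        # [0. 1. 2. 3. 4. 5. 6. 7. 8. 9.] -> smooth through the gap
```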

This problem is not unique to Sora. All AI video generators can produce wildly nonsensical results when your prompts reach too far past their training data, as we saw earlier this year when testing Runway’s Gen-3. In fact, we ran some gymnast prompts through the latest open source AI video model that may rival Sora in some ways, Hunyuan Video, and it produced similar twirling, morphing results, seen below. And we used a much simpler prompt than Das did with Sora.

An example from open source Chinese AI model Hunyuan Video with the prompt, “A young woman doing a complex floor gymnastics routine at the olympics, featuring running and flips.”

AI models based on transformer technology are fundamentally imitative in nature. They’re great at transforming one type of data into another type or morphing one style into another. What they’re not great at (yet) is producing coherent generations that are truly original. So if you happen to provide a prompt that closely matches a training video, you might get a good result. Otherwise, you may get madness.

As we wrote about image-synthesis model Stable Diffusion 3’s body horror generations earlier this year, “Basically, any time a user prompt homes in on a concept that isn’t represented well in the AI model’s training dataset, the image-synthesis model will confabulate its best interpretation of what the user is asking for. And sometimes that can be completely terrifying.”

For the engineers who make these models, success in AI video generation quickly becomes a question of how many examples (and how much training) you need before the model can generalize enough to produce convincing and coherent results. It’s also a question of metadata quality—how accurately the videos are labeled. In this case, OpenAI used an AI vision model to describe its training videos, which helped improve quality, but apparently not enough—yet.

We’re looking at an AI jabberwocky in action

In a way, the type of generation failure in the gymnast video is a form of confabulation (or hallucination, as some call it), but it’s even worse because it’s not coherent. So instead of calling it a confabulation, which is a plausible-sounding fabrication, we’re going to lean on a new term, “jabberwocky,” which Dictionary.com defines as “a playful imitation of language consisting of invented, meaningless words; nonsense; gibberish,” taken from Lewis Carroll’s nonsense poem of the same name. Imitation and nonsense, you say? Check and check.

We’ve covered jabberwockies in AI video before with people mocking Chinese video-synthesis models, a monstrously weird AI beer commercial, and even Will Smith eating spaghetti. They’re a form of misconfabulation where an AI model completely fails to produce a plausible output. This will not be the last time we see them, either.

How could AI video models get better and avoid jabberwockies?

In our coverage of Gen-3 Alpha, we called the threshold where you get a level of useful generalization in an AI model the “illusion of understanding,” where training data and training time reach a critical mass that produces good enough results to generalize across enough novel prompts.

One of the key reasons language models like OpenAI’s GPT-4 impressed users was that they finally reached a size where they had absorbed enough information to give the appearance of genuinely understanding the world. With video synthesis, achieving this same apparent level of “understanding” will require not just massive amounts of well-labeled training data but also the computational power to process it effectively.

AI boosters hope that these current models represent one of the key steps on the way to something like truly general intelligence (often called AGI) in text or, in AI video, to what OpenAI and Runway researchers call “world simulators” or “world models” that somehow encode enough physics rules about the world to produce any realistic result.

Judging by the morphing alien shoggoth gymnast, that may still be a ways off. Still, it’s early days in AI video generation, and judging by how quickly AI image-synthesis models like Midjourney progressed from crude abstract shapes into coherent imagery, it’s likely video synthesis will have a similar trajectory over time. Until then, enjoy the AI-generated jabberwocky madness.

Benj Edwards is Ars Technica’s Senior AI Reporter and founder of the site’s dedicated AI beat in 2022. He’s also a tech historian with almost two decades of experience. In his free time, he writes and records music, collects vintage computers, and enjoys nature. He lives in Raleigh, NC.


Are LLMs capable of non-verbal reasoning?

In the researchers’ COCONUT model (for Chain Of CONtinUous Thought), those kinds of hidden states are encoded as “latent thoughts” that replace the individual written steps in a logical sequence both during training and when processing a query. This avoids the need to convert to and from natural language for each step and “frees the reasoning from being within the language space,” the researchers write, leading to an optimized reasoning path that they term a “continuous thought.”

Being more breadth-minded

While doing logical processing in the latent space has some benefits for model efficiency, the more important finding is that this kind of model can “encode multiple potential next steps simultaneously.” Rather than having to pursue individual logical options fully and one by one (in a “greedy” sort of process), staying in the “latent space” allows for a kind of instant backtracking that the researchers compare to a breadth-first search through a graph.

This emergent, simultaneous processing property comes through in testing even though the model isn’t explicitly trained to do so, the researchers write. “While the model may not initially make the correct decision, it can maintain many possible options within the continuous thoughts and progressively eliminate incorrect paths through reasoning, guided by some implicit value functions,” they write.

A figure highlighting some of the ways different models can fail at certain types of logical inference. Credit: Training Large Language Models to Reason in a Continuous Latent Space

That kind of multi-path reasoning didn’t really improve COCONUT’s accuracy over traditional chain-of-thought models on relatively straightforward tests of math reasoning (GSM8K) or general reasoning (ProntoQA). But the researchers found the model did comparatively well on a randomly generated set of ProntoQA-style queries involving complex and winding sets of logical conditions (e.g., “every apple is a fruit, every fruit is food,” and so on).

For these tasks, standard chain-of-thought reasoning models would often get stuck down dead-end paths of inference or even hallucinate completely made-up rules when trying to resolve the logical chain. Previous research has also shown that the “verbalized” logical steps output by these chain-of-thought models “may actually utilize a different latent reasoning process” than the one being shared.
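
To make the breadth-first comparison concrete, here is a tiny, purely illustrative sketch; it models the researchers’ analogy, not COCONUT itself. On a ProntoQA-style rule graph, a greedy search that commits to one rule at a time can wander down a dead end, while a breadth-first search keeps every partial path alive at once, much as the paper describes continuous thoughts holding multiple possible next steps simultaneously.

```python
# Illustrative only: greedy vs. breadth-first traversal of a tiny ProntoQA-style
# rule graph. This models the paper's analogy, not the COCONUT model itself.
from collections import deque

rules = {
    "apple": ["red thing", "fruit"],     # distractor rule listed first
    "red thing": ["colorful thing"],     # dead end for this query
    "fruit": ["food"],
}

def greedy(start, goal):
    """Follow the first listed rule at each step, with no backtracking."""
    node, path = start, [start]
    while node in rules:
        node = rules[node][0]            # commits to a single option
        path.append(node)
        if node == goal:
            return path
    return None                          # wandered into a dead end

def breadth_first(start, goal):
    """Keep all partial paths alive at once, like multiple latent next steps."""
    frontier = deque([[start]])
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in rules.get(path[-1], []):
            frontier.append(path + [nxt])
    return None

print(greedy("apple", "food"))           # None: followed "red thing" and got stuck
print(breadth_first("apple", "food"))    # ['apple', 'fruit', 'food']
```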

This new research joins a growing body of research looking to understand and exploit the way large language models work at the level of their underlying neural networks. And while that kind of research hasn’t led to a huge breakthrough just yet, the researchers conclude that models pre-trained with these kinds of “continuous thoughts” from the get-go could “enable models to generalize more effectively across a wider range of reasoning scenarios.”


OpenAI introduces “Santa Mode” to ChatGPT for ho-ho-ho voice chats

On Thursday, OpenAI announced that ChatGPT users can now talk to a simulated version of Santa Claus through the app’s voice mode, using AI to bring a North Pole connection to mobile devices, desktop apps, and web browsers during the holiday season.

The company added Santa’s voice and personality as a preset option in ChatGPT’s Advanced Voice Mode. Users can access Santa by tapping a snowflake icon next to the prompt bar or through voice settings. The feature works on iOS and Android mobile apps, chatgpt.com, and OpenAI’s Windows and macOS applications. The Santa voice option will remain available to users worldwide until early January.

The conversations with Santa exist as temporary chats that won’t be saved to chat history or affect the model’s memory; OpenAI designed this limitation specifically for the holiday feature. Keep that in mind, because if you let your kids talk to Santa, the AI simulation won’t remember what they told it during previous conversations.

During a livestream for Day 6 of the company’s “12 days of OpenAI” marketing event, an OpenAI employee said that the company will reset each user’s Advanced Voice Mode usage limits one time as a gift, so that even if you’ve used up your Advanced Voice Mode time, you’ll get a chance to talk to Santa.


TCL TVs will use films made with generative AI to push targeted ads

Advertising has become a focal point of TV software. Companies that sell TV sets are increasingly interested in leveraging TV operating systems (OSes) for ads and tracking. This has led to bold new strategies, like an adtech firm launching a TV OS and ads on TV screensavers.

With new short films set to debut on its free streaming service tomorrow, TV-maker TCL is positing a new approach to monetizing TV owners and to film and TV production, one that cuts costs by relying on generative AI and targeted ads.

TCL’s five short films are part of a company initiative to get people more accustomed to movies and TV shows made with generative AI. The movies will “be promoted and featured prominently on” TCL’s free ad-supported streaming television (FAST) service, TCLtv+, TCL announced in November. TCLtv+ has hundreds of FAST channels and comes on TCL-brand TVs using various OSes, including Google TV and Roku OS.

Some of the movies have real actors; you may even recognize a few (like Kellita Smith, who played Bernie Mac’s wife, Wanda, on The Bernie Mac Show). Others feature characters made through generative AI. All the films use generative AI for special effects and/or animations and took 12 weeks to make, 404 Media, which attended a screening of the movies, reported today. AI tools used include ComfyUI, Nuke, and Runway, 404 reported. However, all of the TCL short movies were written, directed, and scored by real humans (again, including people you may be familiar with). At the screening, Chris Regina, TCL’s chief content officer for North America, told attendees that “over 50 animators, editors, effects artists, professional researchers, [and] scientists” worked on the movies.

I’ve shared the movies below for you to judge for yourself, but as a spoiler, you can imagine the quality of short films that were made to promote a service built for targeted ads and that use generative AI for fast, affordable content creation. AI-generated videos are expected to improve, but it remains to be seen whether a TV brand like TCL will commit to finding the best and most natural ways to use generative AI for video production. Currently, TCL’s movies demonstrate the limits of AI-generated video, such as odd background imagery and heavy use of narration that can distract from badly synced audio.


Google goes “agentic” with Gemini 2.0’s ambitious AI agent features

On Wednesday, Google unveiled Gemini 2.0, the next generation of its AI-model family, starting with an experimental release called Gemini 2.0 Flash. The model family can generate text, images, and speech while processing multiple types of input including text, images, audio, and video. It’s similar to multimodal AI models like GPT-4o, which powers OpenAI’s ChatGPT.

“Gemini 2.0 Flash builds on the success of 1.5 Flash, our most popular model yet for developers, with enhanced performance at similarly fast response times,” said Google in a statement. “Notably, 2.0 Flash even outperforms 1.5 Pro on key benchmarks, at twice the speed.”

Gemini 2.0 Flash—which is the smallest model of the 2.0 family in terms of parameter count—launches today through Google’s developer platforms like Gemini API, AI Studio, and Vertex AI. However, its image generation and text-to-speech features remain limited to early access partners until January 2025. Google plans to integrate the tech into products like Android Studio, Chrome DevTools, and Firebase.

The company addressed potential misuse of generated content by implementing SynthID watermarking technology on all audio and images created by Gemini 2.0 Flash. This watermark appears in supported Google products to identify AI-generated content.

Google’s newest announcements lean heavily into the concept of agentic AI systems that can take action for you. “Over the last year, we have been investing in developing more agentic models, meaning they can understand more about the world around you, think multiple steps ahead, and take action on your behalf, with your supervision,” said Google CEO Sundar Pichai in a statement. “Today we’re excited to launch our next era of models built for this new agentic era.”


AI company trolls San Francisco with billboards saying “stop hiring humans”

Artisan CEO Jaspar Carmichael-Jack defended the campaign’s messaging in an interview with SFGate. “They are somewhat dystopian, but so is AI,” he told the outlet in a text message. “The way the world works is changing.” In another message he wrote, “We wanted something that would draw eyes—you don’t draw eyes with boring messaging.”

So what does Artisan actually do? Its main product is an AI “sales agent” called Ava that supposedly automates the work of finding and messaging potential customers. The company claims it works with “no human input” and costs 96% less than hiring a human for the same role, although, given the current state of AI technology, it’s prudent to be skeptical of those claims.

Artisan also has plans to expand its AI tools beyond sales into areas like marketing, recruitment, finance, and design. Its sales agent appears to be its only existing product so far.

Meanwhile, the billboards remain visible throughout San Francisco, quietly fueling existential dread in a city that has already seen a great deal of tension since the pandemic. Some of the billboards feature additional messages, like “Hire Artisans, not humans,” and one that plays on angst over remote work: “Artisan’s Zoom cameras will never ‘not be working’ today.”


Reddit debuts AI-powered discussion search—but will users like it?

The company then went on to strike deals with major tech firms, including a $60 million agreement with Google in February 2024 and a partnership with OpenAI in May 2024 that integrated Reddit content into ChatGPT.

But Reddit users haven’t been entirely happy with the deals. In October 2024, London-based Redditors began posting false restaurant recommendations to manipulate search results and keep tourists away from their favorite spots. This coordinated effort to feed incorrect information into AI systems demonstrated how user communities might intentionally “poison” AI training data over time.

The potential for trouble

While it’s tempting to lean heavily into generative AI technology when it is trendy, the move could also represent a challenge for the company. For example, Reddit’s AI-powered summaries could draw from inaccurate information featured on the site and provide incorrect answers, or they may draw inaccurate conclusions from correct information.

We will keep an eye on Reddit’s new AI-powered search tool to see if it resists the type of confabulation that we’ve seen with Google’s AI Overview, an AI summary bot that has been a critical failure so far.

Advance Publications, which owns Ars Technica parent Condé Nast, is the largest shareholder of Reddit.
