AI


Should an AI copy of you help decide if you live or die?

“It would combine demographic and clinical variables, documented advance-care-planning data, patient-recorded values and goals, and contextual information about specific decisions,” he said.

“Including textual and conversational data could further increase a model’s ability to learn why preferences arise and change, not just what a patient’s preference was at a single point in time,” Starke said.
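To make those inputs concrete, here is a minimal sketch of the kind of feature record such a surrogate model might consume. Every field name below is a hypothetical illustration; neither Ahmad nor Starke has published a schema.

```python
from dataclasses import dataclass, field

@dataclass
class SurrogatePatientRecord:
    """Hypothetical input record for a patient-preference model."""
    # Demographic and clinical variables
    age: int
    diagnoses: list[str]
    # Documented advance-care-planning data
    has_advance_directive: bool
    # Patient-recorded values and goals
    stated_goals: list[str] = field(default_factory=list)
    # Contextual information about the specific decision
    decision_context: str = ""  # e.g., "CPR after in-hospital cardiac arrest"
    # Textual/conversational data, which Starke says could help a model
    # learn why preferences arise and change over time
    conversation_notes: list[str] = field(default_factory=list)
```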

Ahmad suggested that future research could focus on validating fairness frameworks in clinical trials, evaluating moral trade-offs through simulations, and exploring how cross-cultural bioethics can be combined with AI designs.

Only then might AI surrogates be ready to be deployed, but only as “decision aids,” Ahmad wrote. Any “contested outputs” should automatically “trigger [an] ethics review,” Ahmad wrote, concluding that “the fairest AI surrogate is one that invites conversation, admits doubt, and leaves room for care.”

“AI will not absolve us”

Ahmad is hoping to test his conceptual models at various UW sites over the next five years, which would offer “some way to quantify how good this technology is,” he said.

“After that, I think there’s a collective decision regarding how as a society we decide to integrate or not integrate something like this,” Ahmad said.

In his paper, he warned against chatbot AI surrogates that could be interpreted as a simulation of the patient, predicting that future models may even speak in patients’ voices and suggesting that the “comfort and familiarity” of such tools might blur “the boundary between assistance and emotional manipulation.”

Starke agreed that more research and “richer conversations” between patients and doctors are needed.

“We should be cautious not to apply AI indiscriminately as a solution in search of a problem,” Starke said. “AI will not absolve us from making difficult ethical decisions, especially decisions concerning life and death.”

Truog, the bioethics expert, told Ars he “could imagine that AI could” one day “provide a surrogate decision maker with some interesting information, and it would be helpful.”

But a “problem with all of these pathways… is that they frame the decision of whether to perform CPR as a binary choice, regardless of context or the circumstances of the cardiac arrest,” Truog’s editorial said. “In the real world, the answer to the question of whether the patient would want to have CPR” when they’ve lost consciousness, “in almost all cases,” is “it depends.”

When Truog thinks about the kinds of situations he could end up in, he knows he wouldn’t just be considering his own values, health, and quality of life. His choice “might depend on what my children thought” or “what the financial consequences would be on the details of what my prognosis would be,” he told Ars.

“I would want my wife or another person that knew me well to be making those decisions,” Truog said. “I wouldn’t want somebody to say, ‘Well, here’s what AI told us about it.’”



Teachers get an F on AI-generated lesson plans

To collect data for this study, in August 2024 we prompted three GenAI chatbots—the GPT-4o model of ChatGPT, Google’s Gemini 1.5 Flash model, and Microsoft’s latest Copilot model—to generate two sets of lesson plans for eighth grade civics classes based on Massachusetts state standards: one set of standard lesson plans and one set of highly interactive lesson plans.

We assembled a dataset of 311 AI-generated lesson plans featuring a total of 2,230 activities for civic education. We analyzed the dataset using two frameworks designed to assess educational material: Bloom’s taxonomy and Banks’ four levels of integration of multicultural content.

Bloom’s taxonomy is a widely used educational framework that distinguishes between “lower-order” thinking skills, including remembering, understanding, and applying, and “higher-order” thinking skills—analyzing, evaluating, and creating. Using this framework to analyze the data, we found 90 percent of the activities promoted only a basic level of thinking for students. Students were encouraged to learn civics through memorizing, reciting, summarizing, and applying information, rather than through analyzing and evaluating information, investigating civic issues, or engaging in civic action projects.
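The study’s actual coding procedure isn’t reproduced here, but once each activity is hand-labeled with a Bloom level, the headline figure reduces to a simple tally. A sketch with hypothetical labels:

```python
from collections import Counter

LOWER_ORDER = {"remember", "understand", "apply"}   # basic thinking skills
HIGHER_ORDER = {"analyze", "evaluate", "create"}    # higher-order skills

def lower_order_share(activity_labels: list[str]) -> float:
    """Fraction of activities that promote only lower-order thinking."""
    counts = Counter(activity_labels)
    return sum(counts[level] for level in LOWER_ORDER) / len(activity_labels)

# For the 2,230 coded activities described above, this came out to ~0.90.
```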

When examining the lesson plans using Banks’ four levels of integration of multicultural content, a model developed in the 1990s, we found that the AI-generated civics lessons featured a rather narrow view of history—often leaving out the experiences of women, Black Americans, Latinos and Latinas, Asian and Pacific Islanders, disabled individuals, and other groups that have long been overlooked. Only 6 percent of the lessons included multicultural content, and those that did tended to focus on heroes and holidays rather than on exploring civics through multiple perspectives.

Overall, we found the AI-generated lesson plans to be decidedly boring, traditional, and uninspiring. If civics teachers used these AI-generated lesson plans as is, students would miss out on active, engaged learning opportunities to build their understanding of democracy and what it means to be a citizen.



Teen sues to destroy the nudify app that left her in constant fear

A spokesperson told The Wall Street Journal that “nonconsensual pornography and the tools to create it are explicitly forbidden by Telegram’s terms of service and are removed whenever discovered.”

For the teen suing, the prime target remains ClothOff itself. Her lawyers think it’s possible that she can get the app and its affiliated sites blocked in the US, the WSJ reported, if ClothOff fails to respond and the court awards her default judgment.

But no matter the outcome of the litigation, the teen expects to be forever “haunted” by the fake nudes that a high school boy generated without facing any charges.

According to the WSJ, the teen girl sued the boy who she said made her want to drop out of school. Her complaint noted that she was informed that “the individuals responsible and other potential witnesses failed to cooperate with, speak to, or provide access to their electronic devices to law enforcement.”

The teen has felt “mortified and emotionally distraught, and she has experienced lasting consequences ever since,” her complaint said. She has no idea if ClothOff can continue to distribute the harmful images, and she has no clue how many teens may have posted them online. Because of these unknowns, she’s certain she’ll spend “the remainder of her life” monitoring “for the resurfacing of these images.”

“Knowing that the CSAM images of her will almost inevitably make their way onto the Internet and be retransmitted to others, such as pedophiles and traffickers, has produced a sense of hopelessness” and “a perpetual fear that her images can reappear at any time and be viewed by countless others, possibly even friends, family members, future partners, colleges, and employers, or the public at large,” her complaint said.

The teen’s lawsuit is the newest front in a wider attempt to crack down on AI-generated CSAM and NCII. It follows prior litigation filed by San Francisco City Attorney David Chiu last year that targeted ClothOff, among 16 popular apps used to “nudify” photos of mostly women and young girls.

About 45 states have criminalized fake nudes, the WSJ reported, and earlier this year, Donald Trump signed the Take It Down Act into law, which requires platforms to remove both real and AI-generated NCII within 48 hours of victims’ reports.



Ars Live recap: Is the AI bubble about to pop? Ed Zitron weighs in.


Despite connection hiccups, we covered OpenAI’s finances, nuclear power, and Sam Altman.

On Tuesday of last week, Ars Technica hosted a live conversation with Ed Zitron, host of the Better Offline podcast and one of tech’s most vocal AI critics, to discuss whether the generative AI industry is experiencing a bubble and when it might burst. My Internet connection had other plans, though, dropping out multiple times and forcing Ars Technica’s Lee Hutchinson to jump in as an excellent emergency backup host.

During the times my connection cooperated, Zitron and I covered OpenAI’s financial issues, lofty infrastructure promises, and why the AI hype machine keeps rolling despite some arguably shaky economics underneath. Lee’s probing questions about per-user costs revealed a potential flaw in AI subscription models: Companies can’t predict whether a user will cost them $2 or $10,000 per month.

You can watch a recording of the event on YouTube or in the window below.

Our discussion with Ed Zitron.

“A 50 billion-dollar industry pretending to be a trillion-dollar one”

I started by asking Zitron the most direct question I could: “Why are you so mad about AI?” His answer got right to the heart of his critique: the disconnect between AI’s actual capabilities and how it’s being sold. “Because everybody’s acting like it’s something it isn’t,” Zitron said. “They’re acting like it’s this panacea that will be the future of software growth, the future of hardware growth, the future of compute.”

In one of his newsletters, Zitron describes the generative AI market as “a 50 billion dollar revenue industry masquerading as a one trillion-dollar one.” He pointed to OpenAI’s financial burn rate (losing an estimated $9.7 billion in the first half of 2025 alone) as evidence that the economics don’t work, an argument he pairs with a heavy dose of pessimism about AI in general.

Donald Trump listens as Nvidia CEO Jensen Huang speaks at the White House during an event on “Investing in America” on April 30, 2025, in Washington, DC. Credit: Andrew Harnik / Staff | Getty Images News

“The models just do not have the efficacy,” Zitron said during our conversation. “AI agents is one of the most egregious lies the tech industry has ever told. Autonomous agents don’t exist.”

He contrasted the relatively small revenue generated by AI companies with the massive capital expenditures flowing into the sector. Even major cloud providers and chip makers are showing strain. Oracle reportedly lost $100 million in three months after installing Nvidia’s new Blackwell GPUs, which Zitron noted are “extremely power-hungry and expensive to run.”

Finding utility despite the hype

I pushed back against some of Zitron’s broader dismissals of AI by sharing my own experience. I use AI chatbots frequently for brainstorming useful ideas and helping me see them from different angles. “I find I use AI models as sort of knowledge translators and framework translators,” I explained.

After experiencing brain fog from repeated bouts of COVID over the years, I’ve also found tools like ChatGPT and Claude especially helpful for memory augmentation that pierces through brain fog: describing something in a roundabout, fuzzy way and quickly getting an answer I can then verify. Along these lines, I’ve previously written about how people in a UK study found AI assistants useful accessibility tools.

Zitron acknowledged this could be useful for me personally but declined to draw any larger conclusions from my one data point. “I understand how that might be helpful; that’s cool,” he said. “I’m glad that that helps you in that way; it’s not a trillion-dollar use case.”

He also shared his own attempts at using AI tools, including experimenting with Claude Code despite not being a coder himself.

“If I liked [AI] somehow, it would be actually a more interesting story because I’d be talking about something I liked that was also onerously expensive,” Zitron explained. “But it doesn’t even do that, and it’s actually one of my core frustrations, it’s like this massive over-promise thing. I’m an early adopter guy. I will buy early crap all the time. I bought an Apple Vision Pro, like, what more do you say there? I’m ready to accept issues, but AI is all issues, it’s all filler, no killer; it’s very strange.”

Zitron and I agree that current AI assistants are being marketed beyond their actual capabilities. As I often say, AI models are not people, and they are not good factual references. As such, they cannot replace human decision-making and cannot wholesale replace human intellectual labor (at the moment). Instead, I see AI models as augmentations of human capability: as tools rather than autonomous entities.

Computing costs: History versus reality

Even though Zitron and I found some common ground about AI hype, I argued that criticism of the cost and power requirements of operating AI models will eventually become moot.

I attempted to make that case by noting that computing costs historically trend downward over time, referencing the Air Force’s SAGE computer system from the 1950s: a four-story building that performed 75,000 operations per second while consuming two megawatts of power. Today, pocket-sized phones deliver millions of times more computing power in a way that would be impossible, power consumption-wise, in the 1950s.

The blockhouse for the Semi-Automatic Ground Environment at Stewart Air Force Base, Newburgh, New York. Credit: Denver Post via Getty Images

“I think it will eventually work that way,” I said, suggesting that AI inference costs might follow similar patterns of improvement over years and that AI tools will eventually become commodity components of computer operating systems. Basically, even if AI models stay inefficient, AI models of a certain baseline usefulness and capability will still be cheaper to train and run in the future because the computing systems they run on will be faster, cheaper, and less power-hungry as well.
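A back-of-the-envelope check of that comparison, using the SAGE figures above and rough, assumed numbers for a modern phone:

```python
# SAGE: 75,000 operations per second, drawing 2 megawatts
sage_ops_per_sec = 75_000
sage_watts = 2_000_000

# A modern smartphone: on the order of 10^12 simple operations per second
# at roughly 5 watts. These are order-of-magnitude assumptions.
phone_ops_per_sec = 1e12
phone_watts = 5

print(phone_ops_per_sec / sage_ops_per_sec)   # ~1.3e7: millions of times more compute
print((phone_ops_per_sec / phone_watts) /
      (sage_ops_per_sec / sage_watts))        # ~5e12 improvement in ops per watt
```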

Zitron pushed back on this optimism, saying that AI costs are currently moving in the wrong direction. “The costs are going up, unilaterally across the board,” he said. Even newer inference hardware from companies like Cerebras and Groq can generate results faster but not cheaper. He also questioned whether integrating AI into operating systems would prove useful even if the technology became profitable, since AI models struggle with deterministic commands and consistent behavior.

The power problem and circular investments

One of Zitron’s most pointed criticisms during the discussion centered on OpenAI’s infrastructure promises. The company has pledged to build data centers requiring 10 gigawatts of power capacity (equivalent to 10 nuclear power plants, I once pointed out) for its Stargate project in Abilene, Texas. According to Zitron’s research, the town currently has only 350 megawatts of generating capacity and a 200-megawatt substation.
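Taken at face value, the figures Zitron cites make the gap easy to quantify (a rough check using only the numbers above):

```python
pledged_gw = 10.0       # power OpenAI has pledged for Stargate data centers
generation_gw = 0.35    # Abilene's current generating capacity
substation_gw = 0.20    # capacity of the local substation

print(pledged_gw / generation_gw)  # ~28.6x the town's entire generation
print(pledged_gw / substation_gw)  # 50x the substation
```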

“A gigawatt of power is a lot, and it’s not like Red Alert 2,” Zitron said, referencing the real-time strategy game. “You don’t just build a power station and it happens. There are months of actual physics to make sure that it doesn’t kill everyone.”

He believes many announced data centers will never be completed, calling the infrastructure promises “castles on sand” that nobody in the financial press seems willing to question directly.


After another technical blackout on my end, I came back online and asked Zitron to define the scope of the AI bubble. He says it has evolved from one bubble (foundation models) into two or three, now including AI compute companies like CoreWeave and the market’s obsession with Nvidia.

Zitron highlighted what he sees as essentially circular investment schemes propping up the industry. He pointed to OpenAI’s $300 billion deal with Oracle and Nvidia’s relationship with CoreWeave as examples. “CoreWeave, they literally… They funded CoreWeave, became their biggest customer, then CoreWeave took that contract and those GPUs and used them as collateral to raise debt to buy more GPUs,” Zitron explained.

When will the bubble pop?

Zitron predicted the bubble would burst within the next year and a half, though he acknowledged it could happen sooner. He expects a cascade of events rather than a single dramatic collapse: An AI startup will run out of money, triggering panic among other startups and their venture capital backers, creating a fire-sale environment that makes future fundraising impossible.

“It’s not gonna be one Bear Stearns moment,” Zitron explained. “It’s gonna be a succession of events until the markets freak out.”

The crux of the problem, according to Zitron, is Nvidia. The chip maker’s stock represents 7 to 8 percent of the S&P 500’s value, and the broader market has become dependent on Nvidia’s continued hyper growth. When Nvidia posted “only” 55 percent year-over-year growth in January, the market wobbled.

“Nvidia’s growth is why the bubble is inflated,” Zitron said. “If their growth goes down, the bubble will burst.”

He also warned of broader consequences: “I think there’s a depression coming. I think once the markets work out that tech doesn’t grow forever, they’re gonna flush the toilet aggressively on Silicon Valley.” This connects to his larger thesis: that the tech industry has run out of genuine hyper-growth opportunities and is trying to manufacture one with AI.

“Is there anything that would falsify your premise of this bubble and crash happening?” I asked. “What if you’re wrong?”

“I’ve been answering ‘What if you’re wrong?’ for a year-and-a-half to two years, so I’m not bothered by that question, so the thing that would have to prove me right would’ve already needed to happen,” he said. Amid a longer exposition about Sam Altman, Zitron said, “The thing that would’ve had to happen with inference would’ve had to be… it would have to be hundredths of a cent per million tokens, they would have to be printing money, and then, it would have to be way more useful. It would have to have efficacy that it does not have, the hallucination problems… would have to be fixable, and on top of this, someone would have to fix agents.”

A positivity challenge

Near the end of our conversation, I wondered if I could flip the script, so to speak, and see if he could say something positive or optimistic, although I chose the most challenging subject possible for him. “What’s the best thing about Sam Altman?” I asked. “Can you say anything nice about him at all?”

“I understand why you’re asking this,” Zitron started, “but I wanna be clear: Sam Altman is going to be the reason the markets take a crap. Sam Altman has lied to everyone. Sam Altman has been lying forever.” He continued, “Like the Pied Piper, he’s led the markets into an abyss, and yes, people should have known better, but I hope at the end of this, Sam Altman is seen for what he is, which is a con artist and a very successful one.”

Then he added, “You know what? I’ll say something nice about him, he’s really good at making people say, ‘Yes.’”


Benj Edwards is Ars Technica’s Senior AI Reporter and founder of the site’s dedicated AI beat in 2022. He’s also a tech historian with almost two decades of experience. In his free time, he writes and records music, collects vintage computers, and enjoys nature. He lives in Raleigh, NC.



OnePlus unveils OxygenOS 16 update with deep Gemini integration

The updated Android software expands what you can add to Mind Space and uses Gemini to make sense of it. For starters, you can add scrolling screenshots and voice memos up to 60 seconds in length. This provides more data for the AI to generate content. For example, if you take screenshots of hotel listings and airline flights, you can tell Gemini to use your Mind Space content to create a trip itinerary. This will be fully integrated with the phone and won’t require a separate subscription to Google’s AI tools.


Credit: OnePlus

Mind Space isn’t a totally new idea—it’s quite similar to AI features like Nothing’s Essential Space and Google’s Pixel Screenshots and Journal. The idea is that if you give an AI model enough data on your thoughts and plans, it can provide useful insights. That’s still hypothetical based on what we’ve seen from other smartphone OEMs, but that’s not stopping OnePlus from fully embracing AI in Android 16.

In addition to beefing up Mind Space, OxygenOS 16 will also add system-wide AI writing tools, which is another common AI add-on. Like the systems from Apple, Google, and Samsung, you will be able to use the OnePlus writing tools to adjust text, proofread, and generate summaries.

OnePlus will make OxygenOS 16 available starting October 17 as an open beta. You’ll need a OnePlus device from the past three years to run the software, both in the beta phase and when it’s finally released; OnePlus hasn’t offered a specific date for the final release. OxygenOS 16 will ship first on the OnePlus 15 devices, with releases for other supported phones and tablets coming later.



Open source GZDoom community splinters after creator inserts AI-generated code

That comment led to a lengthy discussion among developers about the use of “stolen scraped code that we have no way of verifying is compatible with the GPL,” as one described it. And while Zahl eventually removed the offending code, he also allegedly tried to remove the evidence that it ever existed by force-pushing an update to delete the discussion entirely.

// This is what ChatGPT told me for detecting dark mode on Linux.

Graf Zahl code comment

Zahl defended the use of AI-generated snippets for “boilerplate code” that isn’t key to underlying game features. “I surely have my reservations about using AI for project specific code,” he wrote, “but this here is just superficial checks of system configuration settings that can be found on various websites—just with 10x the effort required.”
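The disputed GZDoom code (which is C++) isn’t reproduced in the thread excerpts here, but as an illustration of the kind of “superficial check of system configuration settings” Zahl describes, here is a hedged Python sketch that queries GNOME’s color-scheme preference, one of several desktop-specific mechanisms on Linux:

```python
import subprocess

def gnome_prefers_dark() -> bool:
    """Best-effort dark mode check for GNOME desktops (GNOME 42+).

    Other Linux desktops expose the preference differently (for example,
    via the org.freedesktop.appearance key on the XDG desktop portal),
    which is part of why this check is fiddly to get right.
    """
    try:
        result = subprocess.run(
            ["gsettings", "get", "org.gnome.desktop.interface", "color-scheme"],
            capture_output=True, text=True, timeout=2,
        )
        return "prefer-dark" in result.stdout
    except (OSError, subprocess.TimeoutExpired):
        return False  # gsettings missing or unresponsive; assume light mode
```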

But others in the community were adamant that there’s no place for AI tools in the workflow of an open source project like this. “If using code slop generated from ChatGPT or any other GenAI/AI chatbots is the future of this project, I’m sorry to say but I’m out,” GitHub user Cacodemon345 wrote, summarizing the feelings of many other developers.

A fork in the road

In a GitHub bug report posted Tuesday, user the-phinet laid out the disagreements over AI-generated code alongside other alleged issues with Zahl’s top-down approach to pushing out GZDoom updates. In response, Zahl invited the development community to “feel free to fork the project” if they were so displeased.

Plenty of GZDoom developers quickly took that somewhat petulant response seriously. “You have just completely bricked GZDoom with this bullshit,” developer Boondorl wrote. “Enjoy your dead project, I’m sure you’ll be happy to plink away at it all by yourself where people can finally stop yelling at you to do things.”



OpenAI thinks Elon Musk funded its biggest critics—who also hate Musk

“We are not in any way supported by or funded by Elon Musk and have a history of campaigning against him and his interests,” Ruby-Sachs told NBC News.

Another nonprofit watchdog targeted by OpenAI was The Midas Project, which strives to make sure AI benefits everyone. Notably, Musk’s lawsuit accused OpenAI of abandoning its mission to benefit humanity in pursuit of immense profits.

But the founder of The Midas Project, Tyler Johnston, was shocked to see his group portrayed as coordinating with Musk. He posted on X to clarify that Musk had nothing to do with the group’s “OpenAI Files,” which comprehensively document areas of concern with any plan to shift away from nonprofit governance.

His post came after OpenAI’s chief strategy officer, Jason Kwon, wrote that “several organizations, some of them suddenly newly formed like the Midas Project, joined in and ran campaigns” backing Musk’s “opposition to OpenAI’s restructure.”

“What are you talking about?” Johnston wrote. “We were formed 19 months ago. We’ve never spoken with or taken funding from Musk and [his] ilk, which we would have been happy to tell you if you asked a single time. In fact, we’ve said he runs xAI so horridly it makes OpenAI ‘saintly in comparison.’”

OpenAI acting like a “cutthroat” corporation?

Johnston complained that OpenAI’s subpoena had already hurt The Midas Project, as insurers had denied the group coverage, citing news reports about the case. He accused OpenAI of not just trying to silence critics but of possibly trying to shut them down.

“If you wanted to constrain an org’s speech, intimidation would be one strategy, but making them uninsurable is another, and maybe that’s what’s happened to us with this subpoena,” Johnston suggested.

Other nonprofits, like the San Francisco Foundation (SFF) and Encode, accused OpenAI of using subpoenas to potentially block or slow down legal interventions. Judith Bell, SFF’s chief impact officer, told NBC News that her nonprofit’s subpoena came after spearheading a petition to California’s attorney general to block OpenAI’s restructuring. And Encode’s general counsel, Nathan Calvin, was subpoenaed after sponsoring a California safety regulation meant to make it easier to monitor risks of frontier AI.



Army general says he’s using AI to improve “decision-making”

Last month, OpenAI published a usage study showing that nearly 15 percent of work-related conversations on ChatGPT involved “making decisions and solving problems.” Now comes word that at least one high-level member of the US military is using LLMs for the same purpose.

At the Association of the US Army Conference in Washington, DC, this week, Maj. Gen. William “Hank” Taylor reportedly said that “Chat and I are really close lately,” using a distressingly familiar diminutive nickname to refer to an unspecified AI chatbot. “AI is one thing that, as a commander, it’s been very, very interesting for me.”

Military-focused news site DefenseScoop reports that Taylor told a roundtable group of reporters that he and the Eighth Army he commands out of South Korea are “regularly using” AI to modernize their predictive analysis for logistical planning and operational purposes. That is helpful for paperwork tasks like “just being able to write our weekly reports and things,” Taylor said, but it also aids in informing their overall direction.

“One of the things that recently I’ve been personally working on with my soldiers is decision-making—individual decision-making,” Taylor said. “And how [we make decisions] in our own individual life, when we make decisions, it’s important. So, that’s something I’ve been asking and trying to build models to help all of us. Especially, [on] how do I make decisions, personal decisions, right — that affect not only me, but my organization and overall readiness?”

That’s still a far cry from the Terminator vision of autonomous AI weapon systems that take lethal decisions out of human hands. Still, using LLMs for military decision-making might give pause to anyone familiar with the models’ well-known propensity to confabulate fake citations and sycophantically flatter users.



Anthropic’s Claude Haiku 4.5 matches May’s frontier model at fraction of cost

And speaking of cost, Haiku 4.5 is included for subscribers of the Claude web and app plans. Through the API (for developers), the small model is priced at $1 per million input tokens and $5 per million output tokens. That compares to Sonnet 4.5 at $3 per million input and $15 per million output tokens, and Opus 4.1 at $15 per million input and $75 per million output tokens.
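Using the per-token prices quoted above, the savings are easy to work out for a hypothetical workload (the token volumes here are made up for illustration):

```python
# Dollars per million tokens (input, output), per Anthropic's pricing above
PRICES = {"haiku-4.5": (1, 5), "sonnet-4.5": (3, 15), "opus-4.1": (15, 75)}

def cost_usd(model: str, million_in: float, million_out: float) -> float:
    """Monthly cost for a given volume of input/output tokens (in millions)."""
    price_in, price_out = PRICES[model]
    return million_in * price_in + million_out * price_out

# A hypothetical chat workload: 50M input and 10M output tokens per month
print(cost_usd("haiku-4.5", 50, 10))   # $100
print(cost_usd("sonnet-4.5", 50, 10))  # $300, 3x the Haiku bill
print(cost_usd("opus-4.1", 50, 10))    # $1,500
```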

The model serves as a cheaper drop-in replacement for two older models, Haiku 3.5 and Sonnet 4. “Users who rely on AI for real-time, low-latency tasks like chat assistants, customer service agents, or pair programming will appreciate Haiku 4.5’s combination of high intelligence and remarkable speed,” Anthropic writes.


Claude 4.5 Haiku answers the classic Ars Technica AI question, “Would the color be called ‘magenta’ if the town of Magenta didn’t exist?”

On SWE-bench Verified, a test that measures performance on coding tasks, Haiku 4.5 scored 73.3 percent, essentially matching Sonnet 4’s 72.7 percent. The model also reportedly surpasses Sonnet 4 at certain tasks like using computers, according to Anthropic’s benchmarks. Claude Sonnet 4.5, released in late September, remains Anthropic’s frontier model and what the company calls “the best coding model available.”

Haiku 4.5 also surprisingly edges up close to what OpenAI’s GPT-5 can achieve in this particular set of benchmarks (as seen in the chart above), although since the results are self-reported and potentially cherry-picked to match a model’s strengths, one should always take them with a grain of salt.

Still, making a small, capable coding model may have unexpected advantages for agentic coding setups like Claude Code. Anthropic designed Haiku 4.5 to work alongside Sonnet 4.5 in multi-model workflows. In such a configuration, Anthropic says, Sonnet 4.5 could break down complex problems into multi-step plans, then coordinate multiple Haiku 4.5 instances to complete subtasks in parallel, like spinning off workers to get things done faster.
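Here is a minimal sketch of that planner/worker pattern using Anthropic’s Python SDK. The model IDs, prompts, and line-based task splitting are assumptions for illustration, not code published by Anthropic:

```python
from concurrent.futures import ThreadPoolExecutor
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def ask(model: str, prompt: str) -> str:
    response = client.messages.create(
        model=model,
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.content[0].text

# Sonnet 4.5 breaks the complex problem into a multi-step plan...
plan = ask("claude-sonnet-4-5",
           "Split this coding task into independent subtasks, one per line: ...")
subtasks = [line for line in plan.splitlines() if line.strip()]

# ...then multiple Haiku 4.5 instances complete the subtasks in parallel.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(lambda task: ask("claude-haiku-4-5", task), subtasks))
```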

For more details on the new model, Anthropic released a system card and documentation for developers.



Google’s AI videos get a big upgrade with Veo 3.1

It’s getting harder to know what’s real on the Internet, and Google is not helping one bit with the announcement of Veo 3.1. The company’s new video model supposedly offers better audio and realism, along with greater prompt accuracy. The updated video AI will be available throughout the Google ecosystem, including the Flow filmmaking tool, where the new model will unlock additional features. And if you’re worried about the cost of conjuring all these AI videos, Google is also adding a “Fast” variant of Veo.

Veo made waves when it debuted earlier this year, demonstrating a staggering improvement in AI video quality just a few months after Veo 2’s release. It turns out that having all that video on YouTube is very useful for training AI models, so Google is already moving on to Veo 3.1 with a raft of new features.

Google says Veo 3.1 offers stronger prompt adherence, which results in better video outputs and fewer wasted compute cycles. Audio, which was a hallmark feature of the Veo 3 release, has reportedly improved, too. Veo 3’s text-to-video was limited to 720p landscape output, but there’s an ever-increasing volume of vertical video on the Internet. So Veo 3.1 can produce both landscape (16:9) and portrait (9:16) video.

Google previously said it would bring Veo video tools to YouTube Shorts, which use a vertical video format like TikTok. The release of Veo 3.1 probably opens the door to fulfilling that promise. You can bet Veo videos will show up more frequently on TikTok as well now that it fits the format. This release also keeps Google in its race with OpenAI, which recently released a Sora iPhone app with an impressive new version of its video-generating AI.



ChatGPT erotica coming soon with age verification, CEO says

On Tuesday, OpenAI CEO Sam Altman announced that the company will allow verified adult users to have erotic conversations with ChatGPT starting in December. The change represents a shift in how OpenAI approaches content restrictions, which the company had loosened in February but then dramatically tightened after an August lawsuit from parents of a teen who died by suicide after allegedly receiving encouragement from ChatGPT.

“In December, as we roll out age-gating more fully and as part of our ‘treat adult users like adults’ principle, we will allow even more, like erotica for verified adults,” Altman wrote in his post on X (formerly Twitter). The announcement follows OpenAI’s recent hint that it would allow developers to create “mature” ChatGPT applications once the company implements appropriate age verification and controls.

Altman explained that OpenAI had made ChatGPT “pretty restrictive to make sure we were being careful with mental health issues” but acknowledged this approach made the chatbot “less useful/enjoyable to many users who had no mental health problems.” The CEO said the company now has new tools to better detect when users are experiencing mental distress, allowing OpenAI to relax restrictions in most cases.

Striking the right balance between freedom for adults and safety for all users has been difficult for OpenAI, which has vacillated between permissive and restrictive chat content controls over the past year.

In February, the company updated its Model Spec to allow erotica in “appropriate contexts.” But a March update made GPT-4o so agreeable that users complained about its “relentlessly positive tone.” By August, Ars reported on cases where ChatGPT’s sycophantic behavior had validated users’ false beliefs to the point of causing mental health crises, and news of the aforementioned suicide lawsuit hit not long after.

Aside from adjusting the behavioral outputs of its previous GPT-4o language model, new model releases have also created some turmoil among users. Since the launch of GPT-5 in early August, some users have complained that the new model feels less engaging than its predecessor, prompting OpenAI to bring back the older model as an option. Altman said the upcoming release will allow users to choose whether they want ChatGPT to “respond in a very human-like way, or use a ton of emoji, or act like a friend.”



DirecTV screensavers will show AI-generated ads with your face in 2026

According to a March blog post from Glance’s VP of AI, Ian Anderson, Glance’s avatars “analyze customer behavior, preferences, and browsing history to provide tailor-made product recommendations, enhancing engagement and conversion rates.”

In a statement today, Naveen Tewari, Glance’s CEO and founder, said the screensavers will allow people to “instantly select a brand and reimagine themselves in the brand catalog right from their living-room TV itself.”

The DirecTV screensavers will also allow people to make 30-second-long AI-generated videos featuring their avatar, The Verge reported.

In addition to providing an “AI-commerce experience,” DirecTV expects the screensavers to help with “content discovery” and “personalization,” Vikash Sharma, SVP of product marketing at DirecTV, said in a statement.

The screensavers will also be able to show real-time weather and sports scores, Glance said.

A natural progression

Turning to ad-centric screensavers may frustrate customers who didn’t expect ads when they bought into Gemini devices for their streaming capabilities.

However, DirecTV has an expanding advertising business that has included experimenting with ad types, such as ads that show when people hit pause. As ad formats go, screensaver ads can be considered less intrusive, since they typically show only when someone isn’t actively viewing their TV. Gemini screensavers can also be disabled.

It has become increasingly important for DirecTV to diversify revenue beyond satellite and Internet subscriptions. DirecTV had over 20 million subscribers in 2015; in 2024, streaming business publication Next TV, citing an anonymous source “close to the company,” reported that the AT&T-owned firm was down to about 11 million subscribers.

Simultaneously, the streaming industry—including streaming services and streaming software—has been increasingly relying on advertising to boost revenue. For some streaming providers, the pressure to grow ad revenue is starting to eclipse the pressure to grow subscriber counts. Considering DirecTV’s declining viewership and growing interest in streaming, finding more ways to sell ads seems like a natural progression.

With legacy pay TV providers already dealing with dwindling subscriptions, introducing new types of ads risks making DirecTV less appealing as well.

And it’s likely that things won’t end there.

“This, we can integrate across different places within the television,” Glance COO Mansi Jain told The Verge. “We are starting with the screensaver, but tomorrow… we can integrate it in the launcher of the TV.”
