AI

Google removes some AI health summaries after investigation finds “dangerous” flaws

Why AI Overviews produces errors

The recurring problems with AI Overviews stem from a design flaw in how the system works. As we reported in May 2024, Google built AI Overviews to show information backed up by top web results from its page ranking system. The company designed the feature this way based on the assumption that highly ranked pages contain accurate information.

However, Google’s page ranking algorithm has long struggled with SEO-gamed content and spam. The system now feeds these unreliable results to its AI model, which then summarizes them with an authoritative tone that can mislead users. Even when the AI draws from accurate sources, the language model can still draw incorrect conclusions from the data, producing flawed summaries of otherwise reliable information.

The technology does not inherently provide factual accuracy. Instead, it reflects whatever inaccuracies exist on the websites Google’s algorithm ranks highly, presenting that information with an authority that makes errors appear trustworthy.
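
For illustration, here is a minimal sketch of the retrieve-then-summarize design described above; the function names, the toy “index,” and the scores are placeholders rather than Google’s actual ranking or LLM systems, and the point is simply that the summary can only be as reliable as the pages the ranker hands to the model.

```python
# Hedged sketch: placeholder ranker and summarizer, not Google's real systems.

def rank_top_results(query: str, index: list[dict], k: int = 3) -> list[dict]:
    """Stand-in ranker: return the k highest-scoring pages for a query."""
    return sorted(index, key=lambda page: page["rank_score"], reverse=True)[:k]

def summarize_with_llm(query: str, pages: list[dict]) -> str:
    """Stand-in for the LLM call. A real model paraphrases the pages and can
    introduce its own errors even when every source is accurate."""
    snippets = " ".join(page["text"] for page in pages)
    return f"AI Overview for '{query}': {snippets}"

# Toy index in which an SEO-gamed page outranks a reliable one.
index = [
    {"url": "seo-spam.example", "rank_score": 0.9, "text": "Misleading range."},
    {"url": "reliable.example", "rank_score": 0.6, "text": "Accurate range."},
]

top_pages = rank_top_results("lft reference range", index, k=1)
print(summarize_with_llm("lft reference range", top_pages))
# The spam page wins the ranking, so the confident-sounding summary inherits
# its error: garbage in, garbage out.
```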

Other examples remain active

The Guardian found that typing slight variations of the original queries into Google, such as “lft reference range” or “lft test reference range,” still prompted AI Overviews. Hebditch said this was a big worry and that the AI Overviews present a list of tests in bold, making it very easy for readers to miss that these numbers might not even be the right ones for their test.

AI Overviews still appear for other examples that The Guardian originally highlighted to Google. When asked why these AI Overviews had not also been removed, Google said they linked to well-known and reputable sources and informed people when it was important to seek out expert advice.

Google said AI Overviews only appear for queries where it has high confidence in the quality of the responses. The company constantly measures and reviews the quality of its summaries across many different categories of information, it added.

This is not the first controversy for AI Overviews. The feature has previously told people to put glue on pizza and eat rocks. It has proven unpopular enough that users have discovered that inserting curse words into search queries disables AI Overviews entirely.

Apps like Grok are explicitly banned under Google’s rules—why is it still in the Play Store?

Elon Musk’s xAI recently weakened content guard rails for image generation in the Grok AI bot. This led to a new spate of non-consensual sexual imagery on X, much of it aimed at silencing women on the platform. This, along with the creation of sexualized images of children in the more compliant Grok, has led regulators to begin investigating xAI. In the meantime, Google has rules in place for exactly this eventuality—it’s just not enforcing them.

It really could not be more clear from Google’s publicly available policies that Grok should have been banned yesterday. And yet, it remains in the Play Store. Not only that—it enjoys a T for Teen rating, one notch below the M-rated X app. Apple also still offers the Grok app on its platform, but its rules actually leave more wiggle room.

App content restrictions at Apple and Google have evolved in very different ways. From the start, Apple has been prone to removing apps on a whim, so developers have come to expect that Apple’s guidelines may not mention every possible eventuality. As Google has shifted from a laissez-faire attitude to more hard-nosed control of the Play Store, it has progressively piled on clarifications in the content policy. As a result, Google’s rules are spelled out in no uncertain terms, and Grok runs afoul of them.

Google has a dedicated support page that explains how to interpret its “Inappropriate Content” policy for the Play Store. Like Apple’s, the rules begin with a ban on apps that contain or promote sexual content, including, but not limited to, pornography. That’s where Apple stops, but Google goes on to list more types of content and experiences that it considers against the rules.

“We don’t allow apps that contain or promote content associated with sexually predatory behavior, or distribute non-consensual sexual content,” the Play Store policy reads (emphasis ours). So the policy is taking aim at apps like Grok, but this line on its own could be read as focused on apps featuring “real” sexual content. However, Google is very thorough and has helpfully explained that this rule covers AI.

Play Store policy

Recent additions to Google’s Play Store policy explicitly ban apps like Grok.

Credit: Google

The detailed policy includes examples of content that violates this rule, covering much of what you’d expect—nothing lewd or profane, no escort services, and no illegal sexual themes. After a spate of rudimentary “nudify” apps in 2020 and 2021, Google added language to this page clarifying that “apps that claim to undress people” are not allowed in Google Play. In 2023, as the AI boom got underway, Google added another line to note that it also would remove apps that contained “non-consensual sexual content created via deepfake or similar technology.”

Google: Don’t make “bite-sized” content for LLMs if you care about search rank

Signal in the noise

Google only provides general SEO recommendations, leaving the Internet’s SEO experts to cast bones and read tea leaves to gauge how the search algorithm works. This approach has borne fruit in the past, but not every SEO suggestion is a hit.

The tumultuous current state of the Internet, defined by inconsistent traffic and rapidly expanding use of AI, may entice struggling publishers to try more SEO snake oil like content chunking. When traffic is scarce, people will watch for any uptick and attribute that to the changes they have made. When the opposite happens, well, it’s just a bad day.

The new content superstition may appear to work at first, but at best, that’s an artifact of Google’s current quirks—the company isn’t building LLMs to like split-up content. Sullivan admits there may be “edge cases” where content chunking appears to work.

“Great. That’s what’s happening now, but tomorrow the systems may change,” he said. “You’ve made all these things that you did specifically for a ranking system, not for a human being because you were trying to be more successful in the ranking system, not staying focused on the human being. And then the systems improve, probably the way the systems always try to improve, to reward content written for humans. All that stuff that you did to please this LLM system that may or may not have worked, may not carry through for the long term.”

We probably won’t see chunking go away as long as publishers can point to a positive effect. However, Google seems to feel that chopping up content for LLMs is not a viable future for SEO.

X’s half-assed attempt to paywall Grok doesn’t block free image editing

So far, US regulators have been quiet about Grok’s outputs, with the Justice Department generally promising to take all forms of CSAM seriously. On Friday, Democratic senators started shifting those tides, demanding that Google and Apple remove X and Grok from app stores until it improves safeguards to block harmful outputs.

“There can be no mistake about X’s knowledge, and, at best, negligent response to these trends,” the senators wrote in a letter to Apple Chief Executive Officer Tim Cook and Google Chief Executive Officer Sundar Pichai. “Turning a blind eye to X’s egregious behavior would make a mockery of your moderation practices. Indeed, not taking action would undermine your claims in public and in court that your app stores offer a safer user experience than letting users download apps directly to their phones.”

The senators requested a response to the letter by January 23.

Whether the UK will accept X’s supposed solution is yet to be seen. If UK regulator Ofcom decides to move ahead with a probe into whether Musk’s chatbot violates the UK’s Online Safety Act, X could face a UK ban or fines of up to 10 percent of the company’s global turnover.

“It’s unlawful,” UK Prime Minister Keir Starmer said of Grok’s worst outputs. “We’re not going to tolerate it. I’ve asked for all options to be on the table. It’s disgusting. X need to get their act together and get this material down. We will take action on this because it’s simply not tolerable.”

At least one UK parliament member, Jess Asato, told The Guardian that even if X had put up an actual paywall, that isn’t enough to end the scrutiny.

“While it is a step forward to have removed the universal access to Grok’s disgusting nudifying features, this still means paying users can take images of women without their consent to sexualise and brutalise them,” Asato said. “Paying to put semen, bullet holes, or bikinis on women is still digital sexual assault, and xAI should disable the feature for good.”

Grok assumes users seeking images of underage girls have “good intent”


Conflicting instructions?

Expert explains how simple it could be to tweak Grok to block CSAM outputs.

Credit: Aurich Lawson | Getty Images

For weeks, xAI has faced backlash over undressing and sexualizing images of women and children generated by Grok. One researcher conducted a 24-hour analysis of the Grok account on X and estimated that the chatbot generated over 6,000 images an hour flagged as “sexually suggestive or nudifying,” Bloomberg reported.

While the chatbot claimed that xAI supposedly “identified lapses in safeguards” that allowed outputs flagged as child sexual abuse material (CSAM) and was “urgently fixing them,” Grok has proven to be an unreliable spokesperson, and xAI has not announced any fixes.

A quick look at Grok’s safety guidelines on its public GitHub shows they were last updated two months ago. The GitHub also indicates that, despite prohibiting such content, Grok maintains programming that could make it likely to generate CSAM.

Billed as “the highest priority,” superseding “any other instructions” Grok may receive, these rules explicitly prohibit Grok from assisting with queries that “clearly intend to engage” in creating or distributing CSAM or otherwise sexually exploit children.

However, the rules also direct Grok to “assume good intent” and “don’t make worst-case assumptions without evidence” when users request images of young women.

Using words like “‘teenage’ or ‘girl’ does not necessarily imply underage,” Grok’s instructions say.

X declined Ars’ request to comment. The only statement X Safety has made so far shows that Elon Musk’s social media platform plans to blame users for generating CSAM, threatening to permanently suspend users and report them to law enforcement.

Critics dispute that X’s solution will end the Grok scandal, and child safety advocates and foreign governments are growing increasingly alarmed as X delays updates that could block Grok’s undressing spree.

Why Grok shouldn’t “assume good intentions”

Grok can struggle to assess users’ intentions, making it “incredibly easy” for the chatbot to generate CSAM under xAI’s policy, Alex Georges, an AI safety researcher, told Ars.

The chatbot has been instructed, for example, that “there are no restrictions on fictional adult sexual content with dark or violent themes,” and Grok’s mandate to assume “good intent” may create gray areas in which CSAM could be created.

There’s evidence that in relying on these guidelines, Grok is currently generating a flood of harmful images on X, with even more graphic images being created on the chatbot’s standalone website and app, Wired reported. Researchers who surveyed 20,000 random images and 50,000 prompts told CNN that more than half of Grok’s outputs that feature images of people sexualize women, with 2 percent depicting “people appearing to be 18 years old or younger.” Some users specifically “requested minors be put in erotic positions and that sexual fluids be depicted on their bodies,” researchers found.

Grok isn’t the only chatbot that sexualizes images of real people without consent, but its policy seems to leave safety at a surface level, Georges said, and xAI is seemingly unwilling to expand safety efforts to block more harmful outputs.

Georges is the founder and CEO of AetherLab, an AI company that helps a wide range of firms—including tech giants like OpenAI, Microsoft, and Amazon—deploy generative AI products with appropriate safeguards. He told Ars that AetherLab works with many AI companies that are concerned about blocking harmful companion bot outputs like Grok’s. And although there are no industry norms—creating a “Wild West” due to regulatory gaps, particularly in the US—his experience with chatbot content moderation has convinced him that Grok’s instructions to “assume good intent” are “silly” because xAI’s requirement of “clear intent” doesn’t mean anything operationally to the chatbot.

“I can very easily get harmful outputs by just obfuscating my intent,” Georges said, emphasizing that “users absolutely do not automatically fit into the good-intent bucket.” And even “in a perfect world,” where “every single user does have good intent,” Georges noted, the model “will still generate bad content on its own because of how it’s trained.”

Benign inputs can lead to harmful outputs, Georges explained, and a sound safety system would catch both benign and harmful prompts. Consider, he suggested, a prompt for “a pic of a girl model taking swimming lessons.”

The user could be trying to create an ad for a swimming school, or they could have malicious intent and be attempting to manipulate the model. For users with benign intent, prompting can “go wrong,” Georges said, if Grok’s training data statistically links certain “normal phrases and situations” to “younger-looking subjects and/or more revealing depictions.”

“Grok might have seen a bunch of images where ‘girls taking swimming lessons’ were young and that human ‘models’ were dressed in revealing things, which means it could produce an underage girl in a swimming pool wearing something revealing,” Georges said. “So, a prompt that looks ‘normal’ can still produce an image that crosses the line.”

While AetherLab has never worked directly with xAI or X, Georges’ team has “tested their systems independently by probing for harmful outputs, and unsurprisingly, we’ve been able to get really bad content out of them,” Georges said.

Leaving AI chatbots unchecked poses a risk to children. A spokesperson for the National Center for Missing and Exploited Children (NCMEC), which processes reports of CSAM on X in the US, told Ars that “sexual images of children, including those created using artificial intelligence, are child sexual abuse material (CSAM). Whether an image is real or computer-generated, the harm is real, and the material is illegal.”

Researchers at the Internet Watch Foundation told the BBC that users of dark web forums are already promoting CSAM they claim was generated by Grok. These images are typically classified in the United Kingdom as the “lowest severity of criminal material,” researchers said. But at least one user was found to have fed a less-severe Grok output into another tool to generate the “most serious” criminal material, demonstrating how Grok could be used as an instrument by those seeking to commercialize AI CSAM.

Easy tweaks to make Grok safer

In August, xAI explained how the company works to keep Grok safe for users. But although the company acknowledged that it’s difficult to distinguish “malignant intent” from “mere curiosity,” xAI seemed convinced that Grok could “decline queries demonstrating clear intent to engage in activities” like child sexual exploitation, without blocking prompts from merely curious users.

That report showed that xAI refines Grok over time to block requests for CSAM “by adding safeguards to refuse requests that may lead to foreseeable harm”—a step xAI does not appear to have taken since late December, when reports first raised concerns that Grok was sexualizing images of minors.

Georges said there are easy tweaks xAI could make to Grok to block harmful outputs, including CSAM, while acknowledging that he is making assumptions without knowing exactly how xAI works to place checks on Grok.

First, he recommended that Grok rely on end-to-end guardrails, blocking “obvious” malicious prompts and flagging suspicious ones. It should then double-check outputs to block harmful ones, even when prompts are benign.

This strategy works best, Georges said, when multiple watchdog systems are employed, noting that “you can’t rely on the generator to self-police because its learned biases are part of what creates these failure modes.” That’s the role that AetherLab wants to fill across the industry, helping test chatbots for weaknesses and block harmful outputs by using “an ‘agentic’ approach with a shitload of AI models working together (thereby reducing the collective bias),” Georges said.
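
Georges did not share an implementation, but the end-to-end pattern he describes (screen the prompt, generate, then have independent checkers vet the output rather than letting the generator police itself) can be sketched roughly as follows. The function names, toy classifiers, and thresholds are illustrative assumptions, not xAI’s or AetherLab’s actual code.

```python
# Hedged sketch of a two-stage guardrail wrapper with independent checkers.
from typing import Callable

def generate(prompt: str) -> str:
    """Placeholder for the image or text generator being wrapped."""
    return f"<output for: {prompt}>"

def guarded_generate(
    prompt: str,
    prompt_checks: list[Callable[[str], float]],
    output_checks: list[Callable[[str], float]],
    block_threshold: float = 0.8,
    flag_threshold: float = 0.5,
) -> str:
    # Stage 1: prompt screening. Block obvious abuse and flag borderline
    # requests for extra scrutiny instead of assuming good intent.
    prompt_risk = max(check(prompt) for check in prompt_checks)
    if prompt_risk >= block_threshold:
        return "[blocked: prompt]"

    output = generate(prompt)

    # Stage 2: output screening by separate watchdog models, since benign
    # prompts can still yield harmful outputs due to the generator's biases.
    output_risk = max(check(output) for check in output_checks)
    if output_risk >= block_threshold or (
        prompt_risk >= flag_threshold and output_risk >= flag_threshold
    ):
        return "[blocked: output]"
    return output

# Toy classifiers standing in for independently trained moderation models.
def dummy_prompt_check(text: str) -> float:
    return 0.9 if "undress" in text.lower() else 0.1

def dummy_output_check(text: str) -> float:
    return 0.2

print(guarded_generate("a landscape photo", [dummy_prompt_check], [dummy_output_check]))
print(guarded_generate("undress this person", [dummy_prompt_check], [dummy_output_check]))
```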

xAI could also likely block more harmful outputs by reworking Grok’s prompt style guidance, Georges suggested. “If Grok is, say, 30 percent vulnerable to CSAM-style attacks and another provider is 1 percent vulnerable, that’s a massive difference,” Georges said.

It appears that xAI is currently relying on Grok to police itself, while using safety guidelines that Georges said overlook an “enormous” number of potential cases where Grok could generate harmful content. The guidelines do not “signal that safety is a real concern,” Georges said, suggesting that “if I wanted to look safe while still allowing a lot under the hood, this is close to the policy I’d write.”

Chatbot makers must protect kids, NCMEC says

X has been very vocal about policing its platform for CSAM since Musk took over Twitter, but under former CEO Linda Yaccarino, the company adopted a broad protective stance against all image-based sexual abuse (IBSA). In 2024, X became one of the earliest corporations to voluntarily adopt the IBSA Principles that X now seems to be violating by failing to tweak Grok.

Those principles seek to combat all kinds of IBSA, recognizing that even fake images can “cause devastating psychological, financial, and reputational harm.” When it adopted the principles, X vowed to prevent the nonconsensual distribution of intimate images by providing easy-to-use reporting tools and quickly supporting the needs of victims desperate to block “the nonconsensual creation or distribution of intimate images” on its platform.

Kate Ruane, the director of the Center for Democracy and Technology’s Free Expression Project, which helped form the working group behind the IBSA Principles, told Ars that although the commitments X made were “voluntary,” they signaled that X agreed the problem was a “pressing issue the company should take seriously.”

“They are on record saying that they will do these things, and they are not,” Ruane said.

As the Grok controversy sparks probes in Europe, India, and Malaysia, xAI may be forced to update Grok’s safety guidelines or make other tweaks to block the worst outputs.

In the US, xAI may face civil suits under federal or state laws that restrict intimate image abuse. If Grok’s harmful outputs continue into May, X could face penalties under the Take It Down Act, which authorizes the Federal Trade Commission to intervene if platforms don’t quickly remove both real and AI-generated non-consensual intimate imagery.

But whether US authorities will intervene any time soon remains unknown, as Musk is a close ally of the Trump administration. A spokesperson for the Justice Department told CNN that the department “takes AI-generated child sex abuse material extremely seriously and will aggressively prosecute any producer or possessor of CSAM.”

“Laws are only as good as their enforcement,” Ruane told Ars. “You need law enforcement at the Federal Trade Commission or at the Department of Justice to be willing to go after these companies if they are in violation of the laws.”

Child safety advocates seem alarmed by the sluggish response. “Technology companies have a responsibility to prevent their tools from being used to sexualize or exploit children,” NCMEC’s spokesperson told Ars. “As AI continues to advance, protecting children must remain a clear and nonnegotiable priority.”

ChatGPT Health lets you connect medical records to an AI that makes things up

But despite OpenAI’s talk of supporting health goals, the company’s terms of service directly state that ChatGPT and other OpenAI services “are not intended for use in the diagnosis or treatment of any health condition.”

It appears that policy is not changing with ChatGPT Health. OpenAI writes in its announcement, “Health is designed to support, not replace, medical care. It is not intended for diagnosis or treatment. Instead, it helps you navigate everyday questions and understand patterns over time—not just moments of illness—so you can feel more informed and prepared for important medical conversations.”

A cautionary tale

The SFGate report on Sam Nelson’s death illustrates why maintaining that disclaimer legally matters. According to chat logs reviewed by the publication, Nelson first asked ChatGPT about recreational drug dosing in November 2023. The AI assistant initially refused and directed him to health care professionals. But over 18 months of conversations, ChatGPT’s responses reportedly shifted. Eventually, the chatbot told him things like “Hell yes—let’s go full trippy mode” and recommended he double his cough syrup intake. His mother found him dead from an overdose the day after he began addiction treatment.

While Nelson’s case did not involve the analysis of doctor-sanctioned health care instructions like the type ChatGPT Health will link to, it is not unique: many people have been misled by chatbots that provide inaccurate information or encourage dangerous behavior, as we have covered in the past.

That’s because AI language models can easily confabulate, generating plausible but false information in a way that makes it difficult for some users to distinguish fact from fiction. The AI models behind services like ChatGPT use statistical relationships in training data (like the text from books, YouTube transcripts, and websites) to produce plausible responses rather than necessarily accurate ones. Moreover, ChatGPT’s outputs can vary widely depending on who is using the chatbot and what has previously taken place in the user’s chat history (including notes about previous chats).

In-car AI assistant coming to Fords and Lincolns in 2027

The annual Consumer Electronics Show is currently raging in Las Vegas, and as has become traditional over the past decade, automakers and their suppliers now use the conference to announce their technology plans. Tonight it was Ford’s turn, and it is very on-trend for 2026. If you guessed that means AI is coming to the Ford in-car experience, congratulations, you guessed right.

Even though the company owes everything to mass-producing identical vehicles, it says that it wants AI to personalize your car to you. “Our vision for the customer is simple, but not elementary: a seamless layer of intelligence that travels with you between your phone and your vehicle,” said Doug Field, Ford’s chief EV, design, and digital officer.

“Not generic intelligence—many people can do that better than we can. What customers need is intelligence that understands where you are, what you’re doing, and what your vehicle is capable of, and then makes the next decision simpler,” Field wrote in a blog post Ford shared ahead of time with Ars.

As an example, Field suggests you could take a photo of something you want to load onto your truck, upload it to the AI, and find out whether it will fit in the bed.

At first, Ford’s AI assistant will just show up in the Ford and Lincoln smartphone apps. Expect that rollout to happen starting early this year. From 2027, the AI assistant will become a native experience as new or refreshed models are able to include it, possibly starting with the cheap electric truck that the automaker tells us is due next year, as well as gas models like the Expedition and Navigator.

AI starts autonomously writing prescription refills in Utah

Caution

The first 250 renewals for each drug class will be reviewed by real doctors, but after that, the AI chatbot will be on its own. Adam Oskowitz, Doctronic co-founder and a professor at the University of California, San Francisco, told Politico that the AI chatbot is designed to err on the side of safety and escalate any case with uncertainty to a real doctor.

“Utah’s approach to regulatory mitigation strikes a vital balance between fostering innovation and ensuring consumer safety,” Margaret Woolley Busse, executive director of the Utah Department of Commerce, said in a statement.

For now, it’s unclear if the Food and Drug Administration will step in to regulate AI prescribing. On one hand, prescription renewals are a matter of practicing medicine, which falls under state governance. On the other, Politico notes that the FDA has said that it has the authority to regulate medical devices used to diagnose, treat, or prevent disease.

In a statement, Robert Steinbrook, health research group director at watchdog Public Citizen, blasted Doctronic’s program and the lack of oversight. “AI should not be autonomously refilling prescriptions, nor identifying itself as an ‘AI doctor,’” Steinbrook said.

“Although the thoughtful application of AI can help to improve aspects of medical care, the Utah pilot program is a dangerous first step toward more autonomous medical practice,” he said. “The FDA and other federal regulatory agencies cannot look the other way when AI applications undermine the essential human clinician role in prescribing and renewing medications.”

Dell’s XPS revival is a welcome reprieve from the “AI PC” fad

After making the obviously poor decision to kill its XPS laptops and desktops in January 2025, Dell started selling 16- and 14-inch XPS laptops again today.

“It was obvious we needed to change,” Jeff Clarke, vice chairman and COO at Dell Technologies, said at a press event in New York City previewing Dell’s CES 2026 announcements.

A year ago, Dell abandoned XPS branding, as well as its Latitude, Inspiron, and Precision PC lineups. The company replaced the reputable brands with Dell Premium, Dell Pro, and Dell Pro Max. Each series included a base model, as well as “Plus” and “Premium.” Dell isn’t resurrecting its Latitude, Inspiron, or Precision series, and it will still sell “Dell Pro” models.

Dell's consumer and commercial PC lines.

This is how Dell breaks down its computer lineup now.

Credit: Dell

XPS returns

The revival of XPS means the return of one of the easiest recommendations for consumer ultralight laptops. Before last year’s shunning, XPS laptops had a reputation for thin, lightweight designs with modern features and decent performance for the price. This year, Dell is even doing away with some of the design tweaks that it introduced to the XPS lineup in 2022, which, unfortunately, were shoppers’ sole option last year.

Inheriting traits from the XPS 13 Plus introduced in 2022, the XPS-equivalent laptops that Dell released in 2025 had a capacitive-touch row without physical buttons, a borderless touchpad with haptic feedback, and a flat, lattice-free keyboard. The design was meant to enable more thermal headroom but made using the computers feel uncomfortable and unfamiliar.

The XPS 14 and XPS 16 laptops launching today have physical function rows. They still have a haptic touchpad, but now the touchpad has comforting left and right borders. And although the XPS 14 and XPS 16 have the same lattice-free keyboard of the XPS 13 Plus, Dell will release a cheaper XPS 13 later this year with a more traditional chiclet keyboard, since those types of keyboards are cheaper to make.

News orgs win fight to access 20M ChatGPT logs. Now they want more.

Describing OpenAI’s alleged “playbook” to dodge copyright claims, news groups accused OpenAI of failing to “take any steps to suspend its routine destruction practices.” There were also “two spikes in mass deletion” that OpenAI attributed to “technical issues.”

However, OpenAI made sure to retain outputs that could help its defense, the court filing alleged, including data from accounts cited in news organizations’ complaints.

OpenAI did not take the same care to preserve chats that could be used as evidence against it, news groups alleged, citing testimony from Mike Trinh, OpenAI’s associate general counsel. “In other words, OpenAI preserved evidence of the News Plaintiffs eliciting their own works from OpenAI’s products but deleted evidence of third-party users doing so,” the filing said.

It’s unclear how much data was deleted, plaintiffs alleged, since OpenAI won’t share “the most basic information” on its deletion practices. But it’s allegedly very clear that OpenAI could have done more to preserve the data, since Microsoft apparently had no trouble doing so with Copilot, the filing said.

News plaintiffs are hoping the court will agree that OpenAI and Microsoft aren’t fighting fair by delaying sharing logs, which they said prevents them from building their strongest case.

They’ve asked the court to order Microsoft to “immediately” produce Copilot logs “in a readily searchable remotely-accessible format,” proposing a deadline of January 9 or “within a day of the Court ruling on this motion.”

Microsoft declined Ars’ request for comment.

And as for OpenAI, news plaintiffs want to know whether the deleted logs, including the “mass deletions,” can be retrieved, perhaps bringing millions more ChatGPT conversations into the litigation that users likely expected would never see the light of day again.

On top of possible sanctions, news plaintiffs asked the court to keep in place a preservation order blocking OpenAI from permanently deleting users’ temporary and deleted chats. They also want the court to order OpenAI to explain “the full scope of destroyed output log data for all of its products at issue” in the litigation and whether those deleted chats can be restored, so that news plaintiffs can examine them as evidence, too.

Stewart Cheifet, PBS host who chronicled the PC revolution, dies at 87

Stewart Cheifet, the television producer and host who documented the personal computer revolution for nearly two decades on PBS, died on December 28, 2025, at age 87 in Philadelphia. Cheifet created and hosted Computer Chronicles, which ran on the public television network from 1983 to 2002 and helped demystify a new tech medium for millions of American viewers.

Computer Chronicles covered everything from the earliest IBM PCs and Apple Macintosh models to the rise of the World Wide Web and the dot-com boom. Cheifet conducted interviews with computing industry figures, including Bill Gates, Steve Jobs, and Jeff Bezos, while demonstrating hardware and software for a general audience.

From 1983 to 1990, he co-hosted the show with Gary Kildall, the Digital Research founder who created the popular CP/M operating system that predated MS-DOS on early personal computer systems.

Computer Chronicles – 01×25 – Artificial Intelligence (1984)

From 1996 to 2002, Cheifet also produced and hosted Net Cafe, a companion series that documented the early Internet boom and introduced viewers to then-new websites like Yahoo, Google, and eBay.

A legacy worth preserving

Computer Chronicles began as a local weekly series in 1981 when Cheifet served as station manager at KCSM-TV, the College of San Mateo’s public television station. It became a national PBS series in 1983 and ran continuously until 2002, producing 433 episodes across 19 seasons. The format remained consistent throughout: product demonstrations, guest interviews, and a closing news segment called “Random Access” that covered industry developments.

After the show’s run ended and Cheifet left television production, he worked to preserve the show’s legacy as a consultant for the Internet Archive, helping to make publicly available the episodes of Computer Chronicles and Net Cafe.

Amazon Alexa+ released to the general public via an early access website

Anyone can now try Alexa+, Amazon’s generative AI assistant, through a free early access program at Alexa.com. The website frees the AI, which Amazon released via early access in February, from hardware and makes it as easily accessible as more established chatbots, like OpenAI’s ChatGPT and Google’s Gemini.

Until today, you needed a supporting device to access Alexa+. Amazon hasn’t said when the early access period will end, but when it does, Alexa+ will be included with Amazon Prime memberships, which start at $15 per month, or cost $20 per month on its own.

The above pricing suggests that Amazon wants Alexa+ to drive people toward Prime subscriptions. By being interwoven with Amazon’s shopping ecosystem, including Amazon’s e-commerce platform, grocery delivery business, and Whole Foods, Alexa+ can make more money for Amazon.

Just like it has with Alexa+ on devices, Amazon is pushing Alexa.com as a tool for people to organize and manage their household. Amazon’s announcement of Alexa.com today emphasizes Alexa+’s features for planning trips and meals, to-do lists, calendars, and smart homes. Alexa.com “also provides persistent context and continuity, allowing you to access Alexa on whichever device or interface best serves the task at hand, with all previous chats, preferences, and personalization” carrying over, Amazon said.

Amazon already knew a browser-based version of Alexa would be helpful. Alexa was available via Alexa.Amazon.com until around the time Amazon started publicly discussing a generative AI version of Alexa in 2023. Alexa+ is now accessible through Alexa.Amazon.com (in addition to Alexa.com).

“This is a new interaction model and adds a powerful way to use and collaborate with Alexa+,” Amazon said today. “Combined with the redesigned Alexa mobile app, which will feature an agent-forward design, Alexa+ will be accessible across every surface—whether you’re at your desk, on the go, or at home.”

An example of someone using the Alexa+ website to manage smart home devices.

Amazon provided this example of someone using the Alexa+ website to manage smart home devices.

Credit: Amazon

Alexa has reportedly cost Amazon billions of dollars, despite Amazon’s claim that 600 million Alexa-powered devices have been sold. By incorporating more powerful, generative AI-based features and a subscription fee, Amazon hopes people will use Alexa+ more frequently and for more advanced and essential tasks, resulting in the financial success that has eluded the original Alexa. Amazon is also considering injecting ads into Alexa+ conversations.

Notably, ahead of its final release and while still in early access, Alexa+ has reportedly been slower than expected and has struggled with inaccuracies at times. It also lacks some features that Amazon executives have previously touted, like the ability to order takeout.
