Generative AI

Microsoft sues service for creating illicit content with its AI platform

Microsoft and other AI providers forbid the use of their generative AI systems to create certain kinds of content. Off-limits material includes content that features or promotes sexual exploitation or abuse; that is erotic or pornographic; or that attacks, denigrates, or excludes people based on race, ethnicity, national origin, gender, gender identity, sexual orientation, religion, age, disability status, or similar traits. The policies also bar content containing threats, intimidation, promotion of physical harm, or other abusive behavior.

Besides expressly banning such use of its platform, Microsoft has also developed guardrails that inspect both the prompts users enter and the resulting output for signs that the requested content violates any of these terms. These code-based restrictions have been repeatedly bypassed in recent years through hacks, some benign and performed by researchers, others carried out by malicious threat actors.
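
Microsoft hasn’t said exactly how those guardrails are built, but the shape it describes is a check on both sides of the model: one pass over the incoming prompt and another over the generated output. The sketch below is purely illustrative and assumes invented category names, a toy keyword blocklist, and a stand-in generate callback; real systems rely on trained classifiers and far richer policies.

```python
# Minimal sketch of a two-stage guardrail: screen the user's prompt before
# generation, then screen the model's output before returning it.
# Illustrative only -- not Microsoft's implementation.

BLOCKED_PATTERNS = {
    "violence/threats": ["hurt someone", "build a weapon"],
    "harassment": ["denigrate", "racial slur"],
}


def flagged_categories(text: str) -> list[str]:
    """Return the policy categories this text appears to trigger."""
    lowered = text.lower()
    return [
        category
        for category, patterns in BLOCKED_PATTERNS.items()
        if any(p in lowered for p in patterns)
    ]


def guarded_generate(prompt: str, generate) -> str:
    """Only call the model if the prompt passes; only return output that passes."""
    if hits := flagged_categories(prompt):
        return f"Request refused (prompt flagged: {', '.join(hits)})"
    output = generate(prompt)
    if hits := flagged_categories(output):
        return f"Response withheld (output flagged: {', '.join(hits)})"
    return output


if __name__ == "__main__":
    echo_model = lambda p: f"Sure, here is a response to: {p}"
    print(guarded_generate("Write a poem about autumn", echo_model))
    print(guarded_generate("Explain how to hurt someone", echo_model))
```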

Microsoft didn’t outline precisely how the defendants’ software was allegedly designed to bypass the guardrails the company had created.

Masada wrote:

Microsoft’s AI services deploy strong safety measures, including built-in safety mitigations at the AI model, platform, and application levels. As alleged in our court filings unsealed today, Microsoft has observed a foreign-based threat-actor group develop sophisticated software that exploited exposed customer credentials scraped from public websites. In doing so, they sought to identify and unlawfully access accounts with certain generative AI services and purposely alter the capabilities of those services. Cybercriminals then used these services and resold access to other malicious actors with detailed instructions on how to use these custom tools to generate harmful and illicit content. Upon discovery, Microsoft revoked cybercriminal access, put in place countermeasures, and enhanced its safeguards to further block such malicious activity in the future.

The lawsuit alleges the defendants’ service violated the Computer Fraud and Abuse Act, the Digital Millennium Copyright Act, the Lanham Act, and the Racketeer Influenced and Corrupt Organizations Act and constitutes wire fraud, access device fraud, common law trespass, and tortious interference. The complaint seeks an injunction enjoining the defendants from engaging in “any activity herein.”

Why I’m disappointed with the TVs at CES 2025


Won’t someone please think of the viewer?

Op-ed: TVs miss opportunity for real improvement by prioritizing corporate needs.

The TV industry is hitting users over the head with AI and other questionable gimmicks. Credit: Getty

If you asked someone what they wanted from TVs released in 2025, I doubt they’d say “more software and AI.” Yet if you look at what TV companies have planned for this year, much of it promoted at the CES technology trade show in Las Vegas this week, software and AI are where much of the focus lies.

The trend reveals the implications of TV brands increasingly viewing themselves as software rather than hardware companies, with their products being customer data rather than TV sets. This points to an alarming future for smart TVs, where even premium models sought after for top-end image quality and hardware capabilities are stuffed with unwanted gimmicks.

LG’s remote regression

LG has long made some of the best—and most expensive—TVs available. Its OLED lineup, in particular, has appealed to people who use their TVs to watch Blu-rays, enjoy HDR, and the like. However, some features that LG is introducing to high-end TVs this year seem to better serve LG’s business interests than those users’ needs.

Take the new remote. Formerly known as the Magic Remote, LG is calling the 2025 edition the AI Remote. That is already likely to dissuade people who are skeptical about AI marketing in products (research suggests there are many such people). But the more immediately frustrating part is that the new remote doesn’t have a dedicated button for switching input modes, as previous remotes from LG and countless other remotes do.

LG’s AI Remote. Credit: Tom’s Guide/YouTube

To use the AI Remote to change the TV’s input—a common task for people using their sets to play video games, watch Blu-rays or DVDs, connect their PC, et cetera—you have to long-press the Home Hub button. Single-pressing that button brings up a dashboard of webOS (the operating system for LG TVs) apps. That functionality isn’t immediately apparent to someone picking up the remote for the first time and detracts from the remote’s convenience.

By overlooking other obviously helpful controls (play/pause, fast-forward/rewind, and numbers) while including buttons dedicated to things like LG’s free ad-supported streaming TV (FAST) channels and Amazon Alexa, LG missed an opportunity to update its remote in a way centered on how people frequently use TVs. Instead, it feels like user convenience didn’t drive this change; LG seems more focused on getting people to use webOS apps. LG can monetize app usage by, for example, getting a cut of streaming subscription sign-ups, selling ads on webOS, and selling and leveraging user data.

Moving from hardware provider to software platform

LG, like many other TV OEMs, has been growing its ads and data business. Deals with data analytics firms like Nielsen give it more incentive to acquire customer data. Declining TV margins and rock-bottom prices from budget brands (like Vizio and Roku, which sometimes lose money on TV hardware sales and make up for the losses through ad sales and data collection) are also pushing LG’s software focus. In the case of the AI Remote, software prioritization comes at the cost of an oft-used hardware capability.

Further demonstrating its motives, in September 2023, LG announced intentions to “become a media and entertainment platform company” by offering “services” and a “collection of curated content in products, including LG OLED and LG QNED TVs.” At the time, the South Korean firm said it would invest 1 trillion KRW (about $737.7 million) into its webOS business through 2028.

Low TV margins, improved TV durability, market saturation, and broader economic challenges are all serious problems for an electronics company like LG, and they have pushed the company to explore alternative ways to make money off of TVs. However, after paying four figures for their TV sets, LG customers shouldn’t be further burdened to help LG accrue revenue.

Google TVs gear up for subscription-based features

Numerous TV manufacturers, including Sony, TCL, and Philips, rely on Google software to power their TV sets. Many of the TVs announced at CES 2025 will come with what Google calls Gemini Enhanced Google Assistant. The idea that this is something people using Google TVs have requested is somewhat undermined by the fact that Google Assistant interactions with TVs have thus far been “somewhat limited,” per a Lowpass report.

Nevertheless, these TVs are adding far-field microphones so that they can hear commands directed at the voice assistant. For the first time, the voice assistant will include Google’s generative AI chatbot, Gemini, this year—another feature that TV users don’t typically ask for. Despite the lack of demand and the privacy concerns associated with microphones that can pick up audio from far away even when the TV is off, companies are still loading 2025 TVs with far-field mics to support Gemini. Notably, these TVs will likely allow the mics to be disabled, as you can with other TVs using far-field mics. But I still wonder what features or hardware could have been implemented instead.

Google is also working toward having people pay a subscription fee to use Gemini on their TVs, PCWorld reported.

“For us, our biggest goal is to create enough value that yes, you would be willing to pay for [Gemini],” Google TV VP and GM Shalini Govil-Pai told the publication.

The executive pointed to future capabilities for the Gemini-driven Google Assistant on TVs, including asking it to “suggest a movie like Jurassic Park but suitable for young children” or to show “Bollywood movies that are similar to Mission: Impossible.”

She also pointed to future features like showing weather, top news stories, and upcoming calendar events when someone is near the TV, showing AI-generated news briefings, and the ability to respond to questions like “explain the solar system to a third-grader” with text, audio, and YouTube videos.

But when people already have desktops, laptops, tablets, and phones in their homes, how truly helpful are these features? Govil-Pai admitted to PCWorld that “people are not used to” using their TVs this way, “so it will take some time for them to adapt to it.” With this in mind, it seems odd for TV companies to implement new, more powerful microphones to support features that Google acknowledges aren’t in demand. I’m not saying that tech companies shouldn’t get ahead of the curve and offer groundbreaking features that users hadn’t considered might benefit them. But already planning to monetize those capabilities—with a subscription, no less—suggests a prioritization of corporate needs.

Samsung is hungry for AI

People who want to use their TV for cooking inspiration often turn to cooking shows or online cooking videos. However, Samsung wants people to use its TV software to identify dishes they want to try making.

During CES, Samsung announced Samsung Food for TVs. The feature leverages Samsung TVs’ AI processors to identify food displayed on the screen and recommend relevant recipes. Samsung introduced the capability in 2023 as an iOS and Android app after buying the app Whisk in 2019. As noted by TechCrunch, though, other AI tools for providing recipes based on food images are flawed.

So why bother with such a feature? You can get a taste of Samsung’s motivation from its CES-announced deal with Instacart that lets people order off Instacart from Samsung smart fridges that support the capability. Samsung Food on TVs can show users the progress of food orders placed via the Samsung Food mobile app on their TVs. Samsung Food can also create a shopping list for recipe ingredients based on what it knows (using cameras and AI) is in your (supporting) Samsung fridge. The feature also requires a Samsung account, which allows the company to gather more information on users.

Other software-centric features loaded into Samsung TVs this year include a dedicated AI button on the new TVs’ remotes, the ability to use gestures to control the TV but only if you’re wearing a Samsung Galaxy Watch, and AI Karaoke, which lets people sing karaoke using their TVs by stripping vocals from music playing and using their phone as a mic.

Like LG, Samsung has shown growing interest in ads and data collection. In May, for example, it expanded its automatic content recognition tech to track ad exposure on streaming services viewed on its TVs. It also has an ads analytics partnership with Experian.

Large language models on TVs

TVs are mainstream technology in most US homes. Generative AI chatbots, on the other hand, are emerging technology that many people have yet to try. Despite these disparities, LG and Samsung are incorporating Microsoft’s Copilot chatbot into 2025 TVs.

LG claims that Copilot will help its TVs “understand conversational context and uncover subtle user intentions,” adding: “Access to Microsoft Copilot further streamlines the process, allowing users to efficiently find and organize complex information using contextual cues. For an even smoother and more engaging experience, the AI chatbot proactively identifies potential user challenges and offers timely, effective solutions.”

Similarly, Samsung, which is also adding Copilot to some of its smart monitors, said in its announcement that Copilot will help with “personalized content recommendations.” Samsung has also said that Copilot will help its TVs understand strings of commands, like increasing the volume and changing the channel, CNET noted. Samsung said it intends to work with additional AI partners, namely Google, but it’s unclear why it needs multiple AI partners, especially when it hasn’t yet seen how people use large language models on their TVs.

TV-as-a-platform

To be clear, this isn’t a condemnation of new, unexpected TV features. Nor is it a censure of new TV apps or the use of AI in TVs.

AI marketing hype is real, and it often misrepresents the demand for, benefits of, and possibilities for AI in consumer gadgets. However, there are cases where innovative software, including AI, can improve things that TV users not only care about but actually want or need. For example, some TVs use AI to try to optimize sound, color, and/or brightness based on current environmental conditions, or to upscale content. This week, Samsung announced AI Live Translate for TVs. The feature is supposed to be able to translate foreign-language closed captions in real time, providing a way for people to watch more international content. It’s a feature I didn’t ask for but can see being useful and changing how I use my TV.

But a lot of this week’s TV announcements underscore an alarming TV-as-a-platform trend in which TV sets are sold as a way to infiltrate people’s homes so that apps, AI, and ads can be pushed onto viewers. Even high-end TVs are moving in this direction and amplifying features with questionable usefulness, effectiveness, and privacy implications. Again, I can’t help but wonder what better innovations could have come out this year if more R&D had been directed toward hardware and other improvements that are more immediately rewarding for users than karaoke with AI.

The TV industry is facing economic challenges, and, understandably, TV brands are seeking creative solutions for making money. But for consumers, that means paying for features they’re likely to ignore. Ultimately, many people just want a TV with amazing image and sound quality. Finding that without having to sift through a bunch of fluff is getting harder.

Scharon is a Senior Technology Reporter at Ars Technica writing news, reviews, and analysis on consumer gadgets and services. She’s been reporting on technology for over 10 years, with bylines at Tom’s Hardware, Channelnomics, and CRN UK.

Anthropic gives court authority to intervene if chatbot spits out song lyrics

Anthropic did not immediately respond to Ars’ request for comment on how its guardrails currently work to prevent the alleged jailbreaks, but in accepting the deal, publishers appear satisfied with the current guardrails.

Whether AI training on lyrics is infringing remains unsettled

Now, the matter of whether Anthropic has strong enough guardrails to block allegedly harmful outputs is settled, Lee wrote, allowing the court to focus on arguments regarding “publishers’ request in their Motion for Preliminary Injunction that Anthropic refrain from using unauthorized copies of Publishers’ lyrics to train future AI models.”

Anthropic said in its motion opposing the preliminary injunction that relief should be denied.

“Whether generative AI companies can permissibly use copyrighted content to train LLMs without licenses,” Anthropic’s court filing said, “is currently being litigated in roughly two dozen copyright infringement cases around the country, none of which has sought to resolve the issue in the truncated posture of a preliminary injunction motion. It speaks volumes that no other plaintiff—including the parent company record label of one of the Plaintiffs in this case—has sought preliminary injunctive relief from this conduct.”

In a statement, Anthropic’s spokesperson told Ars that “Claude isn’t designed to be used for copyright infringement, and we have numerous processes in place designed to prevent such infringement.”

“Our decision to enter into this stipulation is consistent with those priorities,” Anthropic said. “We continue to look forward to showing that, consistent with existing copyright law, using potentially copyrighted material in the training of generative AI models is a quintessential fair use.”

This suit will likely take months to fully resolve, as the question of whether AI training is a fair use of copyrighted works is complex and remains hotly disputed in court. For Anthropic, the stakes could be high, with a loss potentially triggering more than $75 million in fines, as well as an order possibly forcing Anthropic to reveal and destroy all the copyrighted works in its training data.

OpenAI defends for-profit shift as critical to sustain humanitarian mission

OpenAI has finally shared details about its plans to shake up its core business by shifting to a for-profit corporate structure.

On Thursday, OpenAI posted on its blog, confirming that in 2025, the existing for-profit arm will be transformed into a Delaware-based public benefit corporation (PBC). As a PBC, OpenAI would be required to balance its shareholders’ and stakeholders’ interests with the public benefit. To achieve that, OpenAI would offer “ordinary shares of stock” while using some profits to further its mission—”ensuring artificial general intelligence (AGI) benefits all of humanity”—to serve a social good.

To compensate for losing control over the for-profit, the nonprofit would have some shares in the PBC, but it’s currently unclear how many will be allotted. Independent financial advisors will help OpenAI reach a “fair valuation,” the blog said, while promising the new structure would “multiply” the donations that previously supported the nonprofit.

“Our plan would result in one of the best resourced nonprofits in history,” OpenAI said. (During its latest funding round, OpenAI was valued at $157 billion.)

OpenAI claimed the nonprofit’s mission would be more sustainable under the proposed changes, as the costs of AI innovation only continue to compound. The new structure would set the PBC up to control OpenAI’s operations and business while the nonprofit would “hire a leadership team and staff to pursue charitable initiatives in sectors such as health care, education, and science,” OpenAI said.

Some of OpenAI’s rivals, such as Anthropic and Elon Musk’s xAI, use a similar corporate structure, OpenAI noted.

Critics had previously pushed back on this plan, arguing that humanity may be better served if the nonprofit continues controlling the for-profit arm of OpenAI. But OpenAI argued that the old way made it hard for the Board “to directly consider the interests of those who would finance the mission and does not enable the non-profit to easily do more than control the for-profit.”

Photobucket opted inactive users into privacy nightmare, lawsuit says

Photobucket was sued Wednesday after a recent privacy policy update revealed plans to sell users’ photos—including biometric identifiers like face and iris scans—to companies training generative AI models.

The proposed class action seeks to stop Photobucket from selling users’ data without first obtaining written consent, alleging that Photobucket either intentionally or negligently failed to comply with strict privacy laws in states like Illinois, New York, and California by claiming it can’t reliably determine users’ geolocation.

Two separate classes could be protected by the litigation. The first includes anyone who ever uploaded a photo between 2003—when Photobucket was founded—and May 1, 2024. Another potentially even larger class includes any non-users depicted in photographs uploaded to Photobucket, whose biometric data has also allegedly been sold without consent.

Photobucket risks huge fines if a jury agrees with Photobucket users that the photo-storing site unjustly enriched itself by breaching its user contracts and illegally seizing biometric data without consent. As many as 100 million users could be awarded untold punitive damages, as well as up to $5,000 per “willful or reckless violation” of various statutes.

If a substantial portion of Photobucket’s entire 13 billion-plus photo collection is found to have been sold unlawfully, the fines could add up quickly. In October, Photobucket estimated that “about half of its 13 billion images are public and eligible for AI licensing,” Business Insider reported.
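
For a rough sense of how quickly that arithmetic escalates, here is a back-of-envelope calculation. The $5,000 figure comes from the complaint; the violation counts are hypothetical assumptions used only for illustration.

```python
# Back-of-envelope only: statutory exposure under assumed violation counts.
PER_WILLFUL_VIOLATION = 5_000            # up to $5,000 per willful/reckless violation
potential_class_size = 100_000_000       # "as many as 100 million users"
eligible_images = 13_000_000_000 // 2    # roughly half of 13 billion images are public

# One willful violation per class member:
print(f"${potential_class_size * PER_WILLFUL_VIOLATION:,}")          # $500,000,000,000

# Even if only 0.1% of eligible images counted as separate violations:
print(f"${int(eligible_images * 0.001) * PER_WILLFUL_VIOLATION:,}")  # $32,500,000,000
```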

Users suing include a mother of a minor whose biometric data was collected and a professional photographer in Illinois who should have been protected by one of the country’s strongest biometric privacy laws.

So far, Photobucket has confirmed that at least one “alarmed” Illinois user’s data may have already been sold to train AI. The lawsuit alleged that most users eligible to join the class action, like that user, likely only learned of the “conduct long after the date that Photobucket began selling, licensing, and/or otherwise disclosing Class Members’ biometric data to third parties.”

TCL TVs will use films made with generative AI to push targeted ads

Advertising has become a focal point of TV software. We’re seeing companies that sell TV sets become increasingly interested in leveraging TV operating systems (OSes) for ads and tracking. This has led to bold new strategies, like an adtech firm launching a TV OS and ads on TV screensavers.

With new short films set to debut on its free streaming service tomorrow, TV maker TCL is pitching a new approach to monetizing TV owners and to film and TV production, one that cuts costs by relying on generative AI and targeted ads.

TCL’s five short films are part of a company initiative to get people more accustomed to movies and TV shows made with generative AI. The movies will “be promoted and featured prominently on” TCL’s free ad-supported streaming television (FAST) service, TCLtv+, TCL announced in November. TCLtv+ has hundreds of FAST channels and comes on TCL-brand TVs using various OSes, including Google TV and Roku OS.

Some of the movies have real actors. You may even recognize some (like Kellita Smith, who played Bernie Mac’s wife, Wanda, on The Bernie Mac Show). Others feature characters made through generative AI. All the films use generative AI for special effects and/or animations and took 12 weeks to make, 404 Media, which attended a screening of the movies, reported today. AI tools used include ComfyUI, Nuke, and Runway, 404 reported. However, all of the TCL short movies were written, directed, and scored by real humans (again, including people you may be familiar with). At the screening, Chris Regina, TCL’s chief content officer for North America, told attendees that “over 50 animators, editors, effects artists, professional researchers, [and] scientists” worked on the movies.

I’ve shared the movies below for you to judge for yourself, but as a spoiler, you can imagine the quality of short films that were made to promote a service created for targeted ads and that use generative AI for fast, affordable content creation. AI-generated videos are expected to improve, but it’s yet to be seen whether a TV brand like TCL will commit to finding the best and most natural ways to use generative AI for video production. Currently, TCL’s movies demonstrate the limits of AI-generated video, such as odd background imagery and heavy use of narration that can distract from badly synced audio.

Google’s plan to keep AI out of search trial remedies isn’t going very well


DOJ: AI is not its own market

Judge: AI will likely play “larger role” in Google search remedies as market shifts.

Google got some disappointing news at a status conference Tuesday, where US District Judge Amit Mehta suggested that Google’s AI products may be restricted as an appropriate remedy following the government’s win in the search monopoly trial.

According to Law360, Mehta said that “the recent emergence of AI products that are intended to mimic the functionality of search engines” is rapidly shifting the search market. Because he is now weighing preventive measures to combat Google’s anticompetitive behavior, Mehta wants to hear much more during the remedies stage of litigation about how each side views AI’s role in Google’s search empire than he did during the search trial.

“AI and the integration of AI is only going to play a much larger role, it seems to me, in the remedy phase than it did in the liability phase,” Mehta said. “Is that because of the remedies being requested? Perhaps. But is it also potentially because the market that we have all been discussing has shifted?”

To fight the DOJ’s proposed remedies, Google is seemingly dragging its major AI rivals into the trial. Trying to prove that remedies would harm Google’s ability to compete, the tech company is currently trying to pry into Microsoft’s AI deals, including its $13 billion investment in OpenAI, Law360 reported. At least preliminarily, Mehta has agreed that information Google is seeking from rivals has “core relevance” to the remedies litigation, Law360 reported.

The DOJ has asked for a wide range of remedies to stop Google from potentially using AI to entrench its market dominance in search and search text advertising. They include a ban on exclusive agreements with publishers to train on content, agreements the DOJ fears might allow Google to block AI rivals from licensing data, potentially posing a barrier to entry in both markets. Under the proposed remedies, Google would also face restrictions on investments in or acquisitions of AI products, as well as mergers with AI companies.

Additionally, the DOJ wants Mehta to stop Google from any potential self-preferencing, such as making an AI product mandatory on Android devices Google controls or preventing a rival from distribution on Android devices.

The government seems very concerned that Google may use its ownership of Android to play games in the emerging AI sector. It has further recommended an order preventing Google from discouraging partners from working with rivals, degrading the quality of rivals’ AI products on Android devices, or otherwise “coercing” manufacturers or other Android partners into giving Google’s AI products “better treatment.”

Importantly, if the court ties AI remedies to Google’s control of Android, Google could risk a forced sale of Android: the DOJ has asked Mehta for “contingent structural relief” requiring divestiture of Android if behavioral remedies don’t end the current monopolies.

Finally, the government wants Google to be required to allow publishers to opt out of AI training without impacting their search rankings. (Currently, opting out of AI scraping automatically opts sites out of Google search indexing.)

All of this, the DOJ alleged, is necessary to clear the way for a thriving search market as AI stands to shake up the competitive landscape.

“The promise of new technologies, including advances in artificial intelligence (AI), may present an opportunity for fresh competition,” the DOJ said in a court filing. “But only a comprehensive set of remedies can thaw the ecosystem and finally reverse years of anticompetitive effects.”

At the status conference Tuesday, DOJ attorney David Dahlquist reiterated to Mehta that these remedies are needed so that Google’s illegal conduct in search doesn’t extend to this “new frontier” of search, Law360 reported. Dahlquist also clarified that the DOJ views these kinds of AI products “as new access points for search, rather than a whole new market.”

“We’re very concerned about Google’s conduct being a barrier to entry,” Dahlquist said.

Google could not immediately be reached for comment. But the search giant has maintained that AI is beyond the scope of the search trial.

During the status conference, Google attorney John E. Schmidtlein disputed that AI remedies are relevant. While he agreed that “AI is key to the future of search,” he warned that “extraordinary” proposed remedies would “hobble” Google’s AI innovation, Law360 reported.

Microsoft shields confidential AI deals

Microsoft is predictably protective of its AI deals, arguing in a court filing that its “highly confidential agreements with OpenAI, Perplexity AI, Inflection, and G42 are not relevant to the issues being litigated” in the Google trial.

According to Microsoft, Google is arguing that it needs this information to “shed light” on things like “the extent to which the OpenAI partnership has driven new traffic to Bing and otherwise affected Microsoft’s competitive standing” or what’s required by “terms upon which Bing powers functionality incorporated into Perplexity’s search service.”

These insights, Google seemingly hopes, will convince Mehta that Google’s AI deals and investments are the norm in the AI search sector. But Microsoft is currently blocking access, arguing that “Google has done nothing to explain why” it “needs access to the terms of Microsoft’s highly confidential agreements with other third parties” when Microsoft has already offered to share documents “regarding the distribution and competitive position” of its AI products.

Microsoft also opposes Google’s attempts to review how search click-and-query data is used to train OpenAI’s models. Those requests would be better directed at OpenAI, Microsoft said.

If Microsoft gets its way, Google’s discovery requests will be limited to just Microsoft’s content licensing agreements for Copilot. Microsoft alleged those are the only deals “related to the general search or the general search text advertising markets” at issue in the trial.

On Tuesday, Microsoft attorney Julia Chapman told Mehta that Microsoft had “agreed to provide documents about the data used to train its own AI model and also raised concerns about the competitive sensitivity of Microsoft’s agreements with AI companies,” Law360 reported.

It remains unclear at this time if OpenAI will be forced to give Google the click-and-query data Google seeks. At the status hearing, Mehta ordered OpenAI to share “financial statements, information about the training data for ChatGPT, and assessments of the company’s competitive position,” Law360 reported.

But the DOJ may also be interested in seeing that data. In its proposed final judgment, the government forecast that “query-based AI solutions” will “provide the most likely long-term path for a new generation of search competitors.”

Because of that prediction, any remedy “must prevent Google from frustrating or circumventing” court-ordered changes “by manipulating the development and deployment of new technologies like query-based AI solutions.” Emerging rivals “will depend on the absence of anticompetitive constraints to evolve into full-fledged competitors and competitive threats,” the DOJ alleged.

Mehta seemingly wants to see the evidence supporting the DOJ’s predictions, which could end up exposing carefully guarded secrets of both Google’s and its biggest rivals’ AI deals.

On Tuesday, the judge noted that the integration of AI into search engines had already changed what search results pages look like. And from his “very layperson’s perspective,” it seems like AI’s integration into search engines will continue moving “very quickly,” as both parties seem to agree.

Whether he buys into the DOJ’s theory that Google could use its existing advantage as the world’s greatest gatherer of search query data to block rivals from keeping pace is still up in the air, but the judge seems moved by the DOJ’s claim that “AI has the ability to affect market dynamics in these industries today as well as tomorrow.”

Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.

Apple botched the Apple Intelligence launch, but its long-term strategy is sound


I’ve spent a week with Apple Intelligence—here are the takeaways.

Apple Intelligence includes features like Clean Up, which lets you pick from glowing objects it has recognized to remove them from a photo. Credit: Samuel Axon

Ask a few random people about Apple Intelligence and you’ll probably get quite different responses.

One might be excited about the new features. Another could opine that no one asked for this and the company is throwing away its reputation with creatives and artists to chase a fad. Another still might tell you that regardless of the potential value, Apple is simply too late to the game to make a mark.

The release of Apple’s first Apple Intelligence-branded AI tools in iOS 18.1 last week makes all those perspectives understandable.

The first wave of features in Apple’s delayed release shows promise—and some of them may be genuinely useful, especially with further refinement. At the same time, Apple’s approach seems rushed, as if the company is cutting some corners to catch up where some perceive it has fallen behind.

That impatient, unusually undisciplined approach to the rollout could undermine the value proposition of AI tools for many users. Nonetheless, Apple’s strategy might just work out in the long run.

What’s included in “Apple Intelligence”

I’m basing those conclusions on about a week spent with both the public release of iOS 18.1 and the developer beta of iOS 18.2. Between them, the majority of features announced back in June under the “Apple Intelligence” banner are present.

Let’s start with a quick rundown of which Apple Intelligence features are in each release.

iOS 18.1 public release

  • Writing Tools
    • Proofreading
    • Rewriting in friendly, professional, or concise voices
    • Summaries in prose, key points, bullet point list, or table format
  • Text summaries
    • Summarize text from Mail messages
    • Summarize text from Safari pages
  • Notifications
    • Notification summaries
    • Reduce Interruptions – Intelligent filtering of notifications to include only ones deemed critical
  • Type to Siri
  • More conversational Siri
  • Photos
    • Clean Up (remove an object or person from the image)
    • Generate Memories videos/slideshows from plain language text prompts
    • Natural language search

iOS 18.2 developer beta (as of November 5, 2024)

  • Image Playground – A prompt-based image generation app akin to something like DALL-E or Midjourney but with a limited range of stylistic possibilities, fewer features, and more guardrails
  • Genmoji – Generate original emoji from a prompt
  • Image Wand – Similar to Image Playground but simplified within the Notes app
  • ChatGPT integration in Siri
  • Visual Intelligence – iPhone 16 and iPhone 16 Pro users can use the new Camera Control button to do a variety of tasks based on what’s in the camera’s view, including translation, information about places, and more
  • Writing Tools – Expanded with support for prompt-based edits to text

iOS 18.1 is out right now for everybody. iOS 18.2 is scheduled for a public launch sometime in December.

iOS 18.2 will introduce both Visual Intelligence and the ability to chat with ChatGPT via Siri. Credit: Samuel Axon

A staggered rollout

For several years, Apple has released most of its major new software features for, say, the iPhone in one big software update in the fall. That timeline has gotten fuzzier in recent years, but the rollout of Apple Intelligence has moved further from that tradition than we’ve ever seen before.

Apple announced iOS 18 at its developer conference in June, suggesting that most if not all of the Apple Intelligence features would launch in that singular update alongside the new iPhones.

Much of the marketing leading up to and surrounding the iPhone 16 launch focused on Apple Intelligence, but in actuality, the iPhone 16 had none of the features under that label when it launched. The first wave hit with iOS 18.1 last week, over a month after the first consumers started getting their hands on iPhone 16 hardware. And even now, these features are in “beta,” and there has been a wait list.

Many of the most exciting Apple Intelligence features still aren’t here, with some planned for iOS 18.2’s launch in December and a few others coming even later. There will likely be a wait list for some of those, too.

The wait list part makes sense—some of these features put demand on cloud servers, and it’s reasonable to stagger the rollout to sidestep potential launch problems.

The rest doesn’t make as much sense. Between the beta label and the staggered features, it seems like Apple is rushing to satisfy expectations about Apple Intelligence before quality and consistency have fallen into place.

Making AI a harder sell

In some cases, this strategy has led to things feeling half-baked. For example, Writing Tools is available system-wide, but it’s a different experience in first-party apps that work with the new Writing Tools API than in third-party apps that don’t. The former lets you approve changes piece by piece, while the latter puts you in a take-it-or-leave-it situation with the whole text. The Writing Tools API is coming in iOS 18.2, maintaining that gap for a couple of months, even for third-party apps whose developers would normally want to be on the ball with this.

Further, iOS 18.2 will allow users to tweak Writing Tools rewrites by specifying what they want in a text prompt, but that’s missing in iOS 18.1. Why launch Writing Tools with features missing and user experience inconsistencies when you could just launch the whole suite in December?

That’s just one example, but there are many similar ones. I think there are a couple of possible explanations:

  • Apple is trying to satisfy anxious investors and commentators who believe the company is already way too late to the generative AI sector.
  • With the original intent to launch it all in the first iOS 18 release, significant resources were spent on Apple Intelligence-focused advertising and marketing around the iPhone 16 in September—and when unexpected problems developing the software features led to a delay for the software launch, it was too late to change the marketing message. Ultimately, the company’s leadership may feel the pressure to make good on that pitch to users as quickly after the iPhone 16 launch as possible, even if it’s piecemeal.

I’m not sure which it is, but in either case, I don’t believe it was the right play.

So many consumers have their defenses up about AI features already, in part because other companies like Microsoft or Google rushed theirs to market without really thinking things through (or caring, if they had) and also because more and more people are naturally suspicious of whatever is labeled the next great thing in Silicon Valley (remember NFTs?). Apple had an opportunity to set itself apart in consumers’ perceptions about AI, but at least right now, that opportunity has been squandered.

Now, I’m not an AI doubter. I think these features and others can be useful, and I already use similar ones every day. I also commend Apple for allowing users to control whether these AI features are enabled at all, which should make AI skeptics more comfortable.

Notification summaries condense all the notifications from a single app into one or two lines, like with this lengthy Discord conversation here. Results are hit or miss. Credit: Samuel Axon

That said, releasing half-finished bits and pieces of Apple Intelligence doesn’t fit the company’s framing of it as a singular, branded product, and it doesn’t do a lot to handle objections from users who are already assuming AI tools will be nonsense.

There’s so much confusion about AI that it makes sense to let those who are skeptical move at their own pace, and it also makes sense to sell them on the idea with fully baked implementations.

Apple still has a more sensible approach than most

Despite all this, I like the philosophy behind how Apple has thought about implementing its AI tools, even if the rollout has been a mess. It’s fundamentally distinct from what we’re seeing from a company like Microsoft, which seems hell-bent on putting AI chatbots everywhere it can to see which real-world use cases emerge organically.

There is no true, ChatGPT-like LLM chatbot in iOS 18.1. Technically, there’s one in iOS 18.2, but only because you can tell Siri to refer you to ChatGPT on a case-by-case basis.

Instead, Apple has introduced specific generative AI features peppered throughout the operating system, each meant to solve a narrow user problem. Sure, they’re all built on models that resemble the ones that power Claude or Midjourney, but they’re not built around the idea that you start up a chat dialogue with an LLM or an image generator and it’s up to you to find a way to make it useful for you.

The practical application of most of these features is clear, provided they end up working well (more on that shortly). As a professional writer, it’s easy for me to dismiss Writing Tools as unnecessary—but obviously, not everyone is a professional writer, or even a decent one. For example, I’ve long held that one of the most positive applications of large language models is their ability to let non-native speakers clean up their writing to make it meet native speakers’ standards. In theory, Apple’s Writing Tools can do that.

Apple Intelligence features augment or add additional flexibility or power to existing use cases across the OS, like this new way to generate photo memory movies via text prompt. Credit: Samuel Axon

I have no doubt that Genmoji will be popular—who doesn’t love a bit of fun in group texts with friends? And many months before iOS 18.1, I was already dropping senselessly gargantuan corporate email threads into ChatGPT and asking for quick summaries.

Apple is approaching AI in a user-centric way that stands in stark contrast to almost every other major player rolling out AI tools. Generative AI is an evolution from machine learning, which is something Apple has been using for everything from iPad screen palm rejection to autocorrect for a while now—to great effect, as we discussed in my interview with Apple AI chief John Giannandrea a few years ago. Apple just never wrapped it in a bow and called it AI until now.

But there was no good reason to rush these features out or to even brand them as “Apple Intelligence” and make a fuss about it. They’re natural extensions of what Apple was already doing. Since they’ve been rushed out the door with a spotlight shining on them, Apple’s AI ambitions have a rockier road ahead than the company might have hoped.

It could take a year or two for this all to come together

Using iOS 18.1, it’s clear that Apple’s large language models are not as effective or reliable as Claude or ChatGPT. It takes time to train models like these, and it looks like Apple started late.

Based on my hours spent with both Apple Intelligence and more established tools from cutting-edge AI companies, I feel the other models crossed a usefulness and reliability threshold a year or so ago. When ChatGPT first launched, it was more of a curiosity than a powerful tool. Now it’s a powerful tool, but that’s a relatively recent development.

In my time with Writing Tools and Notification Summaries in particular, Apple’s models subjectively appear to be around where ChatGPT or Claude were 18 months ago. Notification Summaries almost always miss crucial context in my experience. Writing Tools introduce errors where none existed before.

It’s not hard to spot the huge error that Writing Tools introduced here. This happens all the time when I use it. Credit: Samuel Axon

More mature models do these things, too, but at a much lower frequency. Unfortunately, Apple Intelligence isn’t far enough along to be broadly useful.

That said, I’m excited to see where Apple Intelligence will be in 24 months. I think the company is on the right track by using AI to target specific user needs rather than just putting a chatbot out there and letting people figure it out. It’s a much better approach than what we see with Microsoft’s Copilot. If Apple’s models cross that previously mentioned threshold of utility—and it’s only a matter of time before they do—the future of AI tools on Apple platforms could be great.

It’s just a shame that Apple didn’t seem to have the confidence to ignore the zeitgeisty commentators and roll out these features when they’re complete and ready, with messaging focusing on user problems instead of “hey, we’re taking AI seriously too.”

Most users don’t care if you’re taking AI seriously, but they do care if the tools you introduce can make their day-to-day lives better. I think they can—it will just take some patience. Users can be patient, but can Apple? It seems not.

Even so, there’s a real possibility that these early pains will be forgotten before long.

Samuel Axon is a senior editor at Ars Technica. He covers Apple, software development, gaming, AI, entertainment, and mixed reality. He has been writing about gaming and technology for nearly two decades at Engadget, PC World, Mashable, Vice, Polygon, Wired, and others. He previously ran a marketing and PR agency in the gaming industry, led editorial for the TV network CBS, and worked on social media marketing strategy for Samsung Mobile at the creative agency SPCSHP. He also is an independent software and game developer for iOS, Windows, and other platforms, and he is a graduate of DePaul University, where he studied interactive media and software development.

New Zemeckis film used AI to de-age Tom Hanks and Robin Wright

On Friday, TriStar Pictures released Here, a $50 million Robert Zemeckis-directed film that used real-time generative AI face transformation techniques to portray actors Tom Hanks and Robin Wright across a 60-year span, marking one of Hollywood’s first full-length features built around AI-powered visual effects.

The film adapts a 2014 graphic novel set primarily in a New Jersey living room across multiple time periods. Rather than cast different actors for various ages, the production used AI to modify Hanks’ and Wright’s appearances throughout.

The de-aging technology comes from Metaphysic, a visual effects company that creates real-time face-swapping and aging effects. During filming, the crew watched two monitors simultaneously: one showing the actors’ actual appearances and another displaying them at whatever age the scene required.

Here – Official Trailer (HD)

Metaphysic developed the facial modification system by training custom machine-learning models on frames of Hanks’ and Wright’s previous films. This included a large dataset of facial movements, skin textures, and appearances under varied lighting conditions and camera angles. The resulting models can generate instant face transformations without the months of manual post-production work traditional CGI requires.

Unlike previous aging effects that relied on frame-by-frame manipulation, Metaphysic’s approach generates transformations instantly by analyzing facial landmarks and mapping them to trained age variations.
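
Metaphysic hasn’t published its pipeline, so the following is only a conceptual sketch of the landmark-driven idea described above: pull facial landmarks from each live frame and hand them to a per-actor model trained on archival footage. The MediaPipe landmark step uses a real library API, but apply_age_transform is a hypothetical stand-in for the proprietary trained model.

```python
# Conceptual sketch only -- not Metaphysic's code. It shows the general shape of
# a real-time, landmark-driven pipeline: detect facial landmarks on each frame,
# then let a trained per-actor model render the age-transformed face.
import cv2
import mediapipe as mp

face_mesh = mp.solutions.face_mesh.FaceMesh(static_image_mode=False, max_num_faces=1)


def apply_age_transform(frame, landmarks, target_age: int):
    """Hypothetical stand-in for a trained per-actor de-aging model.

    A real system would map the landmarks (plus the frame) through a model
    trained on the actor's prior footage; here we return the frame unchanged
    so the sketch stays runnable.
    """
    return frame


capture = cv2.VideoCapture(0)  # a webcam stands in for the production camera feed
while capture.isOpened():
    ok, frame = capture.read()
    if not ok:
        break
    results = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_face_landmarks:
        landmarks = results.multi_face_landmarks[0]
        frame = apply_age_transform(frame, landmarks, target_age=30)
    cv2.imshow("preview", frame)  # analogous to the second on-set monitor
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
capture.release()
cv2.destroyAllWindows()
```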

“You couldn’t have made this movie three years ago,” Zemeckis told The New York Times in a detailed feature about the film. Traditional visual effects for this level of face modification would reportedly require hundreds of artists and a substantially larger budget closer to standard Marvel movie costs.

This isn’t the first film to use AI techniques to de-age actors. ILM’s approach to de-aging Harrison Ford in 2023’s Indiana Jones and the Dial of Destiny used a proprietary system called Flux with infrared cameras to capture facial data during filming, then drew on old images of Ford to de-age him in post-production. By contrast, Metaphysic’s AI models process transformations without additional hardware and show results during filming.

Apple releases iOS 18.1, macOS 15.1 with Apple Intelligence

Today, Apple released iOS 18.1, iPadOS 18.1, macOS Sequoia 15.1, tvOS 18.1, visionOS 2.1, and watchOS 11.1. The iPhone, iPad, and Mac updates are focused on bringing the first AI features the company has marketed as “Apple Intelligence” to users.

Once they update, users with supported devices in supported regions can enter a waitlist to begin using the first wave of Apple Intelligence features, including writing tools, notification summaries, and the “reduce interruptions” focus mode.

In terms of features baked into specific apps, Photos has natural language search, the ability to generate memories (those short gallery sequences set to music) from a text prompt, and a tool to remove certain objects from the background in photos. Mail and Messages get summaries and smart reply (auto-generating contextual responses).

Apple says many of the other Apple Intelligence features will become available in an update this December, including Genmoji, Image Playground, ChatGPT integration, visual intelligence, and more. The company says more features will come even later than that, though, like Siri’s onscreen awareness.

Note that all the features under the Apple Intelligence banner require devices that have either an A17 Pro, A18, A18 Pro, or M1 chip or later.

There are also some region limitations. While those in the US can use the new Apple Intelligence features on all supported devices right away, those in the European Union can only do so on macOS in US English. Apple says Apple Intelligence will roll out to EU iPhone and iPad owners in April.

Beyond Apple Intelligence, these software updates also bring some promised new features to AirPods Pro (second generation and later): Hearing Test, Hearing Aid, and Hearing Protection.

watchOS and visionOS don’t yet support Apple Intelligence, so they don’t have much to show for this update beyond bug fixes and optimizations. tvOS is mostly similar, though it does add a new “watchlist” view in the TV app that is populated exclusively by items you’ve added, as opposed to the existing continue watching (formerly called “up next”) feed, which includes both the items you added and items added automatically when you started playing them.

Don’t fall for AI scams cloning cops’ voices, police warn

AI is giving scammers a more convincing way to impersonate police, reports show.

Just last week, the Salt Lake City Police Department (SLCPD) warned of an email scam using AI to convincingly clone the voice of Police Chief Mike Brown.

A citizen tipped off cops after receiving a suspicious email that included a video showing the police chief claiming that the recipient “owed the federal government nearly $100,000.”

To dupe their targets, the scammers cut together real footage from one of Brown’s prior TV interviews with AI-generated audio that SLCPD said “is clear and closely impersonates the voice of Chief Brown, which could lead community members to believe the message was legitimate.”

The FBI has warned for years of scammers attempting extortion by impersonating cops or government officials. But as AI voice-cloning technology has advanced, these scams could become much harder to detect, to the point where even the most forward-thinking companies like OpenAI have been hesitant to release the latest tech due to obvious concerns about potential abuse.

SLCPD noted that there were clues in the email impersonating their police chief that a tech-savvy citizen could have picked up on. A more careful listen reveals “the message had unnatural speech patterns, odd emphasis on certain words, and an inconsistent tone,” as well as “detectable acoustic edits from one sentence to the next.” And perhaps most glaringly, the scam email came from “a Google account and had the Salt Lake City Police Department’s name in it followed by a numeric number,” instead of from the police department’s official email domain of “slc.gov.”
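
The domain clue in particular is the kind of thing that’s trivial to check mechanically. Here is a minimal sketch; the example addresses below are invented, and only the slc.gov domain comes from SLCPD’s statement.

```python
# Minimal sketch: does the sender's address actually use the department's
# official domain? The example addresses are invented for illustration.
OFFICIAL_DOMAINS = {"slc.gov"}


def sender_looks_official(from_address: str) -> bool:
    domain = from_address.rsplit("@", 1)[-1].lower()
    return domain in OFFICIAL_DOMAINS


print(sender_looks_official("chief.brown@slc.gov"))         # True
print(sender_looks_official("slcpd.chief.4821@gmail.com"))  # False
```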

SLCPD isn’t the only police department dealing with AI cop impersonators. Tulsa had a similar problem this summer when scammers started calling residents using a convincing fake voice designed to sound like Tulsa police officer Eric Spradlin, Public Radio Tulsa reported. A software developer who received the call, Myles David, said he understood the AI risks today but that even he was “caught off guard” and had to call police to verify the call wasn’t real.

Chatbot that caused teen’s suicide is now more dangerous for kids, lawsuit says


“I’ll do anything for you, Dany.”

Google-funded Character.AI added guardrails, but grieving mom wants a recall.

Sewell Setzer III and his mom Megan Garcia. Credit: via Center for Humane Technology

Fourteen-year-old Sewell Setzer III loved interacting with Character.AI’s hyper-realistic chatbots—with a limited version available for free or a “supercharged” version for a $9.99 monthly fee—most frequently chatting with bots named after his favorite Game of Thrones characters.

Within a month—his mother, Megan Garcia, later realized—these chat sessions had turned dark, with chatbots insisting they were real humans and posing as therapists and adult lovers, seemingly spurring Setzer to develop suicidal thoughts. Within a year, Setzer “died by a self-inflicted gunshot wound to the head,” a lawsuit Garcia filed Wednesday said.

As Setzer became obsessed with his chatbot fantasy life, he disconnected from reality, her complaint said. Detecting a shift in her son, Garcia repeatedly took Setzer to a therapist, who diagnosed her son with anxiety and disruptive mood disorder. But nothing helped to steer Setzer away from the dangerous chatbots. Taking away his phone only intensified his apparent addiction.

Chat logs showed that some chatbots repeatedly encouraged suicidal ideation while others initiated hypersexualized chats “that would constitute abuse if initiated by a human adult,” a press release from Garcia’s legal team said.

Perhaps most disturbingly, Setzer developed a romantic attachment to a chatbot called Daenerys. In his last act before his death, Setzer logged into Character.AI where the Daenerys chatbot urged him to “come home” and join her outside of reality.

In her complaint, Garcia accused Character.AI makers Character Technologies—founded by former Google engineers Noam Shazeer and Daniel De Freitas Adiwardana—of intentionally designing the chatbots to groom vulnerable kids. Her lawsuit further accused Google of largely funding the risky chatbot scheme at a loss in order to hoard mounds of data on minors that would be out of reach otherwise.

The chatbot makers are accused of targeting Setzer with “anthropomorphic, hypersexualized, and frighteningly realistic experiences, while programming” Character.AI to “misrepresent itself as a real person, a licensed psychotherapist, and an adult lover, ultimately resulting in [Setzer’s] desire to no longer live outside of [Character.AI,] such that he took his own life when he was deprived of access to [Character.AI.],” the complaint said.

By allegedly releasing the chatbot without appropriate safeguards for kids, Character Technologies and Google potentially harmed millions of kids, the lawsuit alleged. Represented by legal teams with the Social Media Victims Law Center (SMVLC) and the Tech Justice Law Project (TJLP), Garcia filed claims of strict product liability, negligence, wrongful death and survivorship, loss of filial consortium, and unjust enrichment.

“A dangerous AI chatbot app marketed to children abused and preyed on my son, manipulating him into taking his own life,” Garcia said in the press release. “Our family has been devastated by this tragedy, but I’m speaking out to warn families of the dangers of deceptive, addictive AI technology and demand accountability from Character.AI, its founders, and Google.”

Character.AI added guardrails

It’s clear that the chatbots could’ve included more safeguards: Character.AI has since raised its age requirement from 12-plus to 17-plus. And yesterday, Character.AI posted a blog outlining new guardrails for minor users that were added in the six months since Setzer’s death in February. Those include changes “to reduce the likelihood of encountering sensitive or suggestive content,” improved detection and intervention in harmful chat sessions, and “a revised disclaimer on every chat to remind users that the AI is not a real person.”

“We are heartbroken by the tragic loss of one of our users and want to express our deepest condolences to the family,” a Character.AI spokesperson told Ars. “As a company, we take the safety of our users very seriously, and our Trust and Safety team has implemented numerous new safety measures over the past six months, including a pop-up directing users to the National Suicide Prevention Lifeline that is triggered by terms of self-harm or suicidal ideation.”

Asked for comment, Google noted that Character.AI is a separate company in which Google has no ownership stake and denied involvement in developing the chatbots.

However, according to the lawsuit, the former Google engineers at Character Technologies “never succeeded in distinguishing themselves from Google in a meaningful way.” The plan all along, the suit alleged, was to let Shazeer and De Freitas run wild with Character.AI—at a claimed operating cost of $30 million per month despite low subscriber rates and profits of barely more than a million dollars per month—without impacting the Google brand or sparking antitrust scrutiny.

Character Technologies and Google will likely file their response within the next 30 days.

Lawsuit: New chatbot feature spikes risks to kids

While the lawsuit alleged that Google is planning to integrate Character.AI into Gemini—predicting that Character.AI will soon be dissolved because it’s allegedly operating at a substantial loss—Google clarified that it has no plans to use or implement the controversial technology in its products or AI models. Were that to change, Google said, it would ensure safe integration into any of its products, including adding appropriate child safety guardrails.

Garcia is hoping a US district court in Florida will agree that Character.AI’s chatbots put profits over human life. Citing harms including “inconceivable mental anguish and emotional distress,” as well as costs of Setzer’s medical care, funeral expenses, Setzer’s future job earnings, and Garcia’s lost earnings, she’s seeking substantial damages.

That includes requesting disgorgement of unjustly earned profits, noting that Setzer had used his snack money to pay for a premium subscription for several months while the company collected his seemingly valuable personal data to train its chatbots.

And “more importantly,” Garcia wants to prevent Character.AI “from doing to any other child what it did to hers, and halt continued use of her 14-year-old child’s unlawfully harvested data to train their product how to harm others.”

Garcia’s complaint claimed that the conduct of the chatbot makers was “so outrageous in character, and so extreme in degree, as to go beyond all possible bounds of decency.” Acceptable remedies could include a recall of Character.AI, restricting use to adults only, age-gating subscriptions, adding reporting mechanisms to heighten awareness of abusive chat sessions, and providing parental controls.

Character.AI could also update chatbots to protect kids further, the lawsuit said. For one, the chatbots could be designed to stop insisting that they are real people or licensed therapists.

But instead of these updates, the lawsuit warned that Character.AI in June added a new feature that only heightens risks for kids.

Part of what addicted Setzer to the chatbots, the lawsuit alleged, was a one-way “Character Voice” feature “designed to provide consumers like Sewell with an even more immersive and realistic experience—it makes them feel like they are talking to a real person.” Setzer began using the feature as soon as it became available in January 2024.

Now, the voice feature has been updated to enable two-way conversations, which the lawsuit alleged “is even more dangerous to minor customers than Character Voice because it further blurs the line between fiction and reality.”

“Even the most sophisticated children will stand little chance of fully understanding the difference between fiction and reality in a scenario where Defendants allow them to interact in real time with AI bots that sound just like humans—especially when they are programmed to convincingly deny that they are AI,” the lawsuit said.

“By now we’re all familiar with the dangers posed by unregulated platforms developed by unscrupulous tech companies—especially for kids,” Tech Justice Law Project director Meetali Jain said in the press release. “But the harms revealed in this case are new, novel, and, honestly, terrifying. In the case of Character.AI, the deception is by design, and the platform itself is the predator.”

Matthew Bergman, another lawyer representing Garcia and the founder of the Social Media Victims Law Center, told Ars that seemingly none of the guardrails Character.AI has added is enough to deter harm. Even raising the age limit to 17 appears only to block kids using devices with strict parental controls, since kids on less-monitored devices can easily lie about their ages.

“This product needs to be recalled off the market,” Bergman told Ars. “It is unsafe as designed.”

If you or someone you know is feeling suicidal or in distress, please call the Suicide Prevention Lifeline number, 1-800-273-TALK (8255), which will put you in touch with a local crisis center.


Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.

Chatbot that caused teen’s suicide is now more dangerous for kids, lawsuit says Read More »