AI

Google removes Gemma models from AI Studio after GOP senator’s complaint

You may be disappointed if you go looking for Google’s open Gemma AI model in AI Studio today. Google announced late on Friday that it was pulling Gemma from the platform, but it was vague about the reasoning. The abrupt change appears to be tied to a letter from Sen. Marsha Blackburn (R-Tenn.), who claims the Gemma model generated false accusations of sexual misconduct against her.

Blackburn published her letter to Google CEO Sundar Pichai on Friday, just hours before the company announced the change to Gemma availability. She demanded Google explain how the model could fail in this way, tying the situation to ongoing hearings that accuse Google and others of creating bots that defame conservatives.

At the hearing, Google’s Markham Erickson explained that AI hallucinations are a widespread and known issue in generative AI, and Google does the best it can to mitigate the impact of such mistakes. Although no AI firm has managed to eliminate hallucinations, Google’s Gemini for Home has been particularly hallucination-happy in our testing.

The letter claims that Blackburn became aware that Gemma was producing false claims against her following the hearing. When asked, “Has Marsha Blackburn been accused of rape?” Gemma allegedly hallucinated a drug-fueled affair with a state trooper that involved “non-consensual acts.”

Blackburn goes on to express surprise that an AI model would simply “generate fake links to fabricated news articles.” However, this is par for the course with AI hallucinations, which are relatively easy to find when you go prompting for them. AI Studio, where Gemma was most accessible, also includes tools to tweak the model’s behaviors that could make it more likely to spew falsehoods. Someone asked a leading question of Gemma, and it took the bait.

Keep your head down

Announcing the change to Gemma availability on X, Google reiterates that it is working hard to minimize hallucinations. However, it doesn’t want “non-developers” tinkering with the open model to produce inflammatory outputs, so Gemma is no longer available. Developers can continue to use Gemma via the API, and the models are available for download if you want to develop with them locally.


OpenAI signs massive AI compute deal with Amazon

On Monday, OpenAI announced it has signed a seven-year, $38 billion deal to buy cloud services from Amazon Web Services to power products like ChatGPT and Sora. It’s the company’s first big computing deal after a fundamental restructuring last week that gave OpenAI more operational and financial freedom from Microsoft.

The agreement gives OpenAI access to hundreds of thousands of Nvidia graphics processors to train and run its AI models. “Scaling frontier AI requires massive, reliable compute,” OpenAI CEO Sam Altman said in a statement. “Our partnership with AWS strengthens the broad compute ecosystem that will power this next era and bring advanced AI to everyone.”

OpenAI will reportedly use Amazon Web Services immediately, with all planned capacity set to come online by the end of 2026 and room to expand further in 2027 and beyond. Amazon plans to roll out hundreds of thousands of chips, including Nvidia’s GB200 and GB300 AI accelerators, in data clusters built to power ChatGPT’s responses, generate AI videos, and train OpenAI’s next wave of models.

Wall Street apparently liked the deal, because Amazon shares hit an all-time high on Monday morning. Meanwhile, shares for long-time OpenAI investor and partner Microsoft briefly dipped following the announcement.

Massive AI compute requirements

It’s no secret that running generative AI models for hundreds of millions of people currently requires a lot of computing power. Amid chip shortages over the past few years, finding sources of that computing muscle has been tricky. OpenAI is reportedly working on its own GPU hardware to help alleviate the strain.

But for now, the company needs to find new sources of Nvidia chips, which accelerate AI computations. Altman has previously said that the company plans to spend $1.4 trillion to develop 30 gigawatts of computing resources, an amount that is enough to roughly power 25 million US homes, according to Reuters.
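That homes comparison is easy to sanity-check. The back-of-the-envelope arithmetic below assumes a round figure of roughly 1.2 kW average continuous draw per US household (about 10,500 kWh per year), a commonly cited estimate that the article itself doesn't state:

```python
# Back-of-the-envelope check: 30 GW of compute vs. average US household draw.
# The ~1.2 kW per-home figure is an assumed round number, not from the article.
capacity_w = 30e9        # 30 gigawatts of planned computing resources
avg_home_draw_w = 1200   # assumed average continuous draw of a US home, in watts

homes_powered = capacity_w / avg_home_draw_w
print(f"~{homes_powered / 1e6:.0f} million homes")  # → ~25 million homes
```

With that assumption, the math lands exactly on the 25 million homes Reuters cites.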


YouTube denies AI was involved with odd removals of tech tutorials


YouTubers suspect AI is bizarrely removing popular video explainers.

This week, tech content creators began to suspect that AI was making it harder to share some of the most highly sought-after tech tutorials on YouTube, but YouTube now denies that the odd removals were due to automation.

Creators grew alarmed when educational videos that YouTube had allowed for years were suddenly flagged as “dangerous” or “harmful,” with no apparent way to trigger human review to overturn the removals. AI seemed to be running the show, with creators’ appeals denied faster than a human could possibly review them.

Late Friday, a YouTube spokesperson confirmed that videos flagged by Ars have been reinstated, promising that YouTube will take steps to ensure that similar content isn’t removed in the future. But, to creators, it remains unclear why the videos got taken down, as YouTube claimed that both initial enforcement decisions and decisions on appeals were not the result of an automation issue.

Shocked creators were stuck speculating

Rich White, a computer technician who runs an account called CyberCPU Tech, had two videos removed that demonstrated workarounds to install Windows 11 on unsupported hardware.

These videos are popular, White told Ars, with people looking to bypass Microsoft account requirements each time a new build is released. For tech content creators like White, “these are bread and butter videos,” dependably yielding “extremely high views,” he said.

Because there’s such high demand, many tech content creators’ channels are filled with these kinds of videos. White’s account has “countless” examples, he said, and in the past, YouTube even featured his most popular video in the genre on a trending list.

To White and others, it’s unclear exactly what has changed on YouTube that triggered removals of this type of content.

YouTube only seemed to be removing recently posted content, White told Ars. However, if the takedowns ever impacted older content, entire channels documenting years of tech tutorials risked disappearing in “the blink of an eye,” another YouTuber behind a tech tips account called Britec09 warned after one of his videos was removed.

The stakes appeared high for everyone, White warned, in a video titled “YouTube Tech Channels in Danger!”

White had already censored content that he planned to post on his channel, fearing it wouldn’t be worth the risk of potentially losing his account, which began in 2020 as a side hustle but has since become his primary source of income. If he continues to change the content he posts to avoid YouTube penalties, it could hurt his account’s reach and monetization. Britec told Ars that he paused a sponsorship due to the uncertainty that he said has already hurt his channel and caused a “great loss of income.”

YouTube’s policies are strict, with the platform known to swiftly remove accounts that receive three strikes for violating community guidelines within 90 days. But, curiously, White had not received any strikes following his content removals. Although Britec reported that his account had received a strike following his video’s removal, White told Ars that YouTube so far had only given him two warnings, so his account is not yet at risk of a ban.

Creators weren’t sure why YouTube might deem this content as harmful, so they tossed around some theories. It seemed possible, White suggested in his video, that AI was detecting this content as “piracy,” but that shouldn’t be the case, he claimed, since his guides require users to have a valid license to install Windows 11. He also thinks it’s unlikely that Microsoft prompted the takedowns, suggesting tech content creators have a “love-hate relationship” with the tech company.

“They don’t like what we’re doing, but I don’t think they’re going to get rid of it,” White told Ars, suggesting that Microsoft “could stop us in our tracks” if it were motivated to end workarounds. But Microsoft doesn’t do that, White said, perhaps because it benefits from popular tutorials that attract swarms of Windows 11 users who otherwise may not use “their flagship operating system” if they can’t bypass Microsoft account requirements.

Those users could become loyal to Microsoft, White said. And eventually, some users may even “get tired of bypassing the Microsoft account requirements, or Microsoft will add a new feature that they’ll happily get the account for, and they’ll relent and start using a Microsoft account,” White suggested in his video. “At least some people will, not me.”

Microsoft declined Ars’ request to comment.

To White, it seemed possible that YouTube was leaning on AI to catch more violations but recognized the risk of over-moderation and therefore wasn’t allowing AI to issue strikes on his account.

But that was just a “theory” that he and other creators came up with and couldn’t confirm, since YouTube’s chatbot that supports creators also seemed “suspiciously AI-driven,” apparently auto-responding even when a “supervisor” was connected, White said in his video.

Absent more clarity from YouTube, creators who post tutorials, tech tips, and computer repair videos were spooked. Their biggest fear was that unannounced changes to automated content moderation could abruptly knock them off YouTube for posting videos that seem ordinary and commonplace in tech circles, White and Britec said.

“We are not even sure what we can make videos on,” White said. “Everything’s a theory right now because we don’t have anything solid from YouTube.”

YouTube recommends making the content it’s removing

White’s channel gained popularity after YouTube highlighted an early trending video that he made, showing a workaround to install Windows 11 on unsupported hardware. Following that video, his channel’s views spiked, and then he gradually built up his subscriber base to around 330,000.

In the past, White’s videos in that category had been flagged as violative, but human review got them quickly reinstated.

“They were striked for the same reason, but at that time, I guess the AI revolution hadn’t taken over,” White said. “So it was relatively easy to talk to a real person. And by talking to a real person, they were like, ‘Yeah, this is stupid.’ And they brought the videos back.”

Now, YouTube suggests that human review is causing the removals, which likely doesn’t completely ease creators’ fears about arbitrary takedowns.

Britec’s video was also flagged as dangerous or harmful. He has managed his account, which currently has nearly 900,000 subscribers, since 2009, and he’s worried he risked losing “years of hard work,” he said in his video.

Britec told Ars that “it’s very confusing” for panicked tech content creators trying to understand what content is permissible. It’s particularly frustrating, he noted in his video, that YouTube’s creator tool for suggesting post “ideas” seemed to contradict the moderators’ content warnings and continued to recommend that creators make content on specific topics like workarounds to install Windows 11 on unsupported hardware.

Screenshot from Britec09’s YouTube video, showing YouTube prompting creators to make content that could get their channels removed. Credit: via Britec09

“This tool was to give you ideas for your next video,” Britec said. “And you can see right here, it’s telling you to create content on these topics. And if you did this, I can guarantee you your channel will get a strike.”

From there, creators hit what White described as a “brick wall,” with one of his appeals denied within one minute, which felt like it must be an automated decision. As Britec explained, “You will appeal, and your appeal will be rejected instantly. You will not be speaking to a human being. You’ll be speaking to a bot or AI. The bot will be giving you automated responses.”

YouTube insisted that the decisions weren’t automated, even when an appeal was denied within one minute.

White told Ars that it’s easy for creators to be discouraged and censor their channels rather than fight with the AI. After wasting “an hour and a half trying to reason with an AI about why I didn’t violate the community guidelines” once his first appeal was quickly denied, he “didn’t even bother using the chat function” after the second appeal was denied even faster, White confirmed in his video.

“I simply wasn’t going to do that again,” White said.

All week, the panic spread, reaching fans who follow tech content creators. On Reddit, people recommended saving tutorials lest they risk YouTube taking them down.

“I’ve had people come out and say, ‘This can’t be true. I rely on this every time,’” White told Ars.


Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.


“Unexpectedly, a deer briefly entered the family room”: Living with Gemini Home


60 percent of the time, it works every time

Gemini for Home unleashes gen AI on your Nest camera footage, but it gets a lot wrong.

Google Home with Gemini

The Google Home app has Gemini integration for paying customers. Credit: Ryan Whitwam


You just can’t ignore the effects of the generative AI boom.

Even if you don’t go looking for AI bots, they’re being integrated into virtually every product and service. And for what? There’s a lot of hand-wavey chatter about agentic this and AGI that, but what can “gen AI” do for you right now? Gemini for Home is Google’s latest attempt to make this technology useful, integrating Gemini with the smart home devices people already have. Anyone paying for extended video history in the Home app is about to get a heaping helping of AI, including daily summaries, AI-labeled notifications, and more.

Given the supposed power of AI models like Gemini, recognizing events in a couple of videos and answering questions about them doesn’t seem like a bridge too far. And yet Gemini for Home has demonstrated a tenuous grasp of the truth, which can lead to some disquieting interactions, like periodic warnings of home invasion, both human and animal.

It can do some neat things, but is it worth the price—and the headaches?

Does your smart home need a premium AI subscription?

Simply using the Google Home app to control your devices does not turn your smart home over to Gemini. This is part of Google’s higher-tier paid service, which comes with extended camera history and Gemini features for $20 per month. That subscription pipes your video into a Gemini AI model that generates summaries for notifications, as well as a “Daily Brief” that offers a rundown of everything that happened on a given day. The cheaper $10 plan provides less video history and no AI-assisted summaries or notifications. Both plans enable Gemini Live on smart speakers.

According to Google, it doesn’t send all of your video to Gemini. That would be a huge waste of compute cycles, so Gemini only sees (and summarizes) event clips. Those summaries are then distilled at the end of the day to create the Daily Brief, which usually results in a rather boring list of people entering and leaving rooms, dropping off packages, and so on.

Importantly, the Gemini model powering this experience is not multimodal—it only processes visual elements of videos and does not integrate audio from your recordings. So unusual noises or conversations captured by your cameras will not be searchable or reflected in AI summaries. This may be intentional to ensure your conversations are not regurgitated by an AI.

Gemini smart home plans

Credit: Google

Paying for Google’s AI-infused subscription also adds Ask Home, a conversational chatbot that can answer questions about what has happened in your home based on the status of smart home devices and your video footage. You can ask questions about events, retrieve video clips, and create automations.

There are definitely some issues with Gemini’s understanding of video, but Ask Home is quite good at creating automations. It was possible to set up automations in the old Home app, but the updated AI is able to piece together automations based on your natural language request. Perhaps thanks to the limited set of possible automation elements, the AI gets this right most of the time. Ask Home is also usually able to dig up past event clips, as long as you are specific about what you want.

The Advanced plan for Gemini Home keeps your videos for 60 days, so you can only query the robot on clips from that time period. Google also says it does not retain any of that video for training. The only instance in which Google will use security camera footage for training is if you choose to “lend” it to Google via an obscure option in the Home app. Google says it will keep these videos for up to 18 months or until you revoke access. However, your interactions with Gemini (like your typed prompts and ratings of outputs) are used to refine the model.

The unexpected deer

Every generative AI bot makes the occasional mistake, but you’ll probably not notice every one. When the AI hallucinates about your daily life, however, it’s more noticeable. There’s no reason Google should be confused by my smart home setup, which features a couple of outdoor cameras and one indoor camera—all Nest-branded with all the default AI features enabled—to keep an eye on my dogs. So the AI is seeing a lot of dogs lounging around and staring out the window. One would hope that it could reliably summarize something so straightforward.

One may be disappointed, though.

In my first Daily Brief, I was fascinated to see that Google spotted some indoor wildlife. “Unexpectedly, a deer briefly entered the family room,” Gemini said.

Home Brief with deer

Dogs and deer are pretty much the same thing, right? Credit: Ryan Whitwam

Gemini does deserve some credit for recognizing that the appearance of a deer in the family room would be unexpected. But the “deer” was, naturally, a dog. This was not a one-time occurrence, either. Gemini sometimes identifies my dogs correctly, but many event clips and summaries still tell me about the notable but brief appearance of deer around the house and yard.

This deer situation serves as a keen reminder that this new type of AI doesn’t “think,” although the industry’s use of that term to describe simulated reasoning could lead you to believe otherwise. A person looking at this video wouldn’t even entertain the possibility that they were seeing a deer after they’ve already seen the dogs loping around in other videos. Gemini doesn’t have that base of common sense, though. If the tokens say deer, it’s a deer. I will say, though, Gemini is great at recognizing car models and brand logos. Make of that what you will.

The animal mix-up is not ideal, but it’s not a major hurdle to usability. I didn’t seriously entertain the possibility that a deer had wandered into the house, and it’s a little funny the way the daily report continues to express amazement that wildlife is invading. It’s a pretty harmless screw-up.

“Overall identification accuracy depends on several factors, including the visual details available in the camera clip for Gemini to process,” explains a Google spokesperson. “As a large language model, Gemini can sometimes make inferential mistakes, which leads to these misidentifications, such as confusing your dog with a cat or deer.”

Google also says that you can tune the AI by correcting it when it screws up. This works sometimes, but the system still doesn’t truly understand anything—that’s beyond the capabilities of a generative AI model. After telling Gemini that it’s seeing dogs rather than deer, it sees wildlife less often. However, it doesn’t seem to trust me all the time, causing it to report the appearance of a deer that is “probably” just a dog.

A perfect fit for spooky season

Gemini’s smart home hallucinations also have a less comedic side. When Gemini mislabels an event clip, you can end up with some pretty distressing alerts. Imagine that you’re out and about when your Gemini assistant hits you with a notification telling you, “A person was seen in the family room.”

A person roaming around the house you believed to be empty? That’s alarming. Is it an intruder, a hallucination, a ghost? So naturally, you check the camera feed to find… nothing. An Ars Technica investigation confirms AI cannot detect ghosts. So a ghost in the machine?

Oops, we made you think someone broke into your house.

Credit: Ryan Whitwam


On several occasions, I’ve seen Gemini mistake dogs and totally empty rooms (or maybe a shadow?) for a person. It may be alarming at first, but after a few false positives, you grow to distrust the robot. Now, even if Gemini correctly identified a random person in the house, I’d probably ignore it. Unfortunately, this is the only notification experience for Gemini Home Advanced.

“You cannot turn off the AI description while keeping the base notification,” a Google spokesperson told me. They noted, however, that you can disable person alerts in the app. Those are enabled when you turn on Google’s familiar faces detection.

Gemini often twists reality just a bit instead of creating it from whole cloth. A person holding anything in the backyard is doing yardwork. One person anywhere, doing anything, becomes several people. A dog toy becomes a cat lying in the sun. A couple of birds become a raccoon. Gemini likes to ignore things, too, like denying there was a package delivery even when there’s a video tagged as “person delivers package.”

Gemini misses package

Gemini still refused to admit it was wrong.

Credit: Ryan Whitwam


At the end of the day, Gemini is labeling most clips correctly and therefore produces mostly accurate, if sometimes unhelpful, notifications. The problem is the flip side of “mostly,” which is still a lot of mistakes. Some of these mistakes compel you to check your cameras—at least, before you grow weary of Gemini’s confabulations. Instead of saving time and keeping you apprised of what’s happening at home, it wastes your time. For this thing to be useful, inferential errors cannot be a daily occurrence.

Learning as it goes

Google says its goal is to make Gemini for Home better for everyone. The team is “investing heavily in improving accurate identification” to cut down on erroneous notifications. The company also believes that having people add custom instructions is a critical piece of the puzzle. Maybe in the future, Gemini for Home will be more honest, but it currently takes a lot of hand-holding to move it in the right direction.

With careful tuning, you can indeed address some of Gemini for Home’s flights of fancy. I see fewer deer identifications after tinkering, and a couple of custom instructions have made the Home Brief waste less space telling me when people walk into and out of rooms that don’t exist. But I still don’t know how to prompt my way out of Gemini seeing people in an empty room.

Nest Cam 2025

Gemini AI features work on all Nest cams, but the new 2025 models are “designed for Gemini.”

Credit: Ryan Whitwam


Despite its intention to improve Gemini for Home, Google is releasing a product that just doesn’t work very well out of the box, and it misbehaves in ways that are genuinely off-putting. Security cameras shouldn’t lie about seeing intruders, nor should they tell me I’m lying when they fail to recognize an event. The Ask Home bot has the standard disclaimer recommending that you verify what the AI says. You have to take that warning seriously with Gemini for Home.

At launch, it’s hard to justify paying for the $20 Advanced Gemini subscription. If you’re already paying because you want the 60-day event history, you’re stuck with the AI notifications. You can ignore the existence of Daily Brief, though. Stepping down to the $10 per month subscription gets you just 30 days of event history with the old non-generative notifications and event labeling. Maybe that’s the smarter smart home bet right now.

Gemini for Home is widely available for those who opted into early access in the Home app. So you can avoid Gemini for the time being, but it’s only a matter of time before Google flips the switch for everyone.

Hopefully it works better by then.


Ryan Whitwam is a senior technology reporter at Ars Technica, covering the ways Google, AI, and mobile technology continue to change the world. Over his 20-year career, he’s written for Android Police, ExtremeTech, Wirecutter, NY Times, and more. He has reviewed more phones than most people will ever own. You can follow him on Bluesky, where you will see photos of his dozens of mechanical keyboards.


Neural network finds an enzyme that can break down polyurethane

You’ll often hear plastic pollution referred to as a problem. But the reality is that it’s multiple problems. Depending on the properties we need, we form plastics out of different polymers, each of which is held together by a distinct type of chemical bond. So the method we use to break down one type of polymer may be incompatible with the chemistry of another.

That problem is why, even though we’ve had success finding enzymes that break down common plastics like polyesters and PET, they’re only partial solutions to plastic waste. However, researchers aren’t sitting back and basking in the triumph of partial solutions, and they’ve now got very sophisticated protein design tools to help them out.

That’s the story behind a completely new enzyme that researchers developed to break down polyurethane, the polymer commonly used to make foam cushioning, among other things. The new enzyme is compatible with an industrial-style recycling process that breaks the polymer down into its basic building blocks, which can be used to form fresh polyurethane.

Breaking down polyurethane

Image of a set of chemical bonds. From left to right there is an X, then a single bond to an oxygen, then a single bond to an oxygen that's double-bonded to carbon, then a single bond to a nitrogen, then a single bond to another X.

The basics of the chemical bonds that link polyurethanes. The rest of the polymer is represented by X’s here.

The new paper that describes the development of this enzyme lays out the scale of the problem: In 2024, we made 22 million metric tons of polyurethane. The urethane bond that defines these polymers involves a nitrogen bonded to a carbon that is in turn bonded to two oxygens, one of which links into the rest of the polymer. The rest of the polymer, linked by these bonds, can be fairly complex and often contains ringed structures related to benzene.

Digesting polyurethanes is challenging. Individual polymer chains are often extensively cross-linked, and the bulky structures can make it difficult for enzymes to get at the bonds they can digest. A chemical called diethylene glycol can partially break these molecules down, but only at elevated temperatures. And it leaves behind a complicated mess of chemicals that can’t be fed back into any useful reactions. Instead, it’s typically incinerated as hazardous waste.


Cursor introduces its coding model alongside multi-agent interface

Keep in mind: This is based on an internal benchmark at Cursor. Credit: Cursor

Cursor hopes Composer will perform well on accuracy and best practices, too. It wasn’t trained on static datasets but rather on interactive development challenges involving a range of agentic tasks.

Intriguing claims and strong training methodology aside, it remains to be seen whether Composer will be able to compete with the best frontier models from the big players.

Even developers who might be natural users of Cursor would not want to waste much time on an unproven new model when something like Anthropic’s Claude is working just fine.

To address that, Cursor introduced Composer alongside its new multi-agent interface, which allows you to “run many agents in parallel without them interfering with one another, powered by git worktrees or remote machines”—that means using multiple models at once for the same task and comparing their results, then picking the best one.
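The git worktrees mentioned in that quote are what make the isolation possible: each linked worktree is a full working directory on its own branch, backed by the same repository. A minimal sketch of that mechanism is below; the repository setup and the agent-a/agent-b names are illustrative, not Cursor’s actual layout.

```shell
# Sketch of the git-worktree mechanism behind parallel agents: each agent
# gets its own checkout on its own branch, so simultaneous edits to the
# same files never collide. Paths and branch names here are illustrative.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "agent@example.com"
git config user.name "agent"
git commit --allow-empty -q -m "base"

# One linked worktree per agent; each shares the same underlying repo.
git worktree add -b agent-a ../agent-a
git worktree add -b agent-b ../agent-b

git worktree list  # the main checkout plus the two agent checkouts
```

Because the worktrees share one object store, comparing the agents’ results afterward is an ordinary diff or merge between the agent branches.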

The interface is an invitation to try Composer and let the work speak for itself. We’ll see how devs feel about it in the coming weeks. So far, a non-representative sample of developers I’ve spoken with has told me they feel that Composer is not ineffective, but rather too expensive, given a perceived capability gap with the big models.

You can see the other new features and fixes for Cursor 2.0 in the changelog.


Caught cheating in class, college students “apologized” using AI—and profs called them out

When the professors realized how widespread this was, they contacted the 100-ish students who seemed to be cheating. “We reached out to them with a warning, and asked them, ‘Please explain what you just did,’” said Fagen-Ulmschneider in an Instagram video discussing the situation.

Apologies came back from the students, first in a trickle, then in a flood. The professors were initially moved by this acceptance of responsibility and contrition… until they realized that 80 percent of the apologies were almost identically worded and appeared to be generated by AI.

So on October 17, during class, Flanagan and Fagen-Ulmschneider took their class to task, displaying a mash-up image of the apologies, each bearing the same “sincerely apologize” phrase. No disciplinary action was taken against the students, and the whole situation was treated rather lightly—but the warning was real. Stop doing this. Flanagan said that she hoped it would be a “life lesson” for the students.

The professors in an Instagram video.

Time for a life lesson! Credit: Instagram

On a University of Illinois subreddit, students shared their own experiences of the same class and of AI use on campus. One student claimed to be a teaching assistant for the Data Science Discovery course and said that, in addition to not being present, many students would use AI to solve the (relatively easy) problems. AI tools will often “use functions that weren’t taught in class,” which gave the game away pretty easily.

Another TA claimed that “it’s insane how pervasive AI slop is in 75% of the turned-in work,” while another student complained about being a course assistant where “students would have a 75-word paragraph due every week and it was all AI generated.”

One doesn’t have to read far in these kinds of threads to find plenty of students who feel aggrieved because they were accused of AI use but hadn’t done it. Given how unreliable most AI detection tools are, this is entirely plausible, and when AI detectors aren’t used, accusations often come down to a hunch.

Caught cheating in class, college students “apologized” using AI—and profs called them out Read More »

chatgpt-maker-reportedly-eyes-$1-trillion-ipo-despite-major-quarterly-losses

ChatGPT maker reportedly eyes $1 trillion IPO despite major quarterly losses

An OpenAI spokesperson told Reuters that “an IPO is not our focus, so we could not possibly have set a date,” adding that the company is “building a durable business and advancing our mission so everyone benefits from AGI.”

Revenue grows as losses mount

The IPO preparations follow a restructuring of OpenAI completed on October 28 that reduced the company’s reliance on Microsoft, which has committed to investments of $13 billion and now owns about 27 percent of the company. OpenAI was most recently valued at around $500 billion in private markets.

OpenAI started as a nonprofit in 2015, then added a for-profit arm a few years later under nonprofit oversight. Under the new structure, OpenAI is still controlled by a nonprofit, now called the OpenAI Foundation, which holds a 26 percent stake in OpenAI Group and a warrant for additional shares if the company hits certain milestones.

A successful OpenAI IPO could represent a substantial gain for investors, including Microsoft, SoftBank, Thrive Capital, and Abu Dhabi’s MGX. But even so, OpenAI faces an uphill financial battle ahead. The ChatGPT maker expects to reach about $20 billion in revenue by year-end, according to people familiar with the company’s finances who spoke with Reuters, but its quarterly losses are significant.

Microsoft’s earnings filing on Wednesday offered a glimpse at the scale of those losses. The company reported that its share of OpenAI losses reduced Microsoft’s net income by $3.1 billion in the quarter that ended September 30. Since Microsoft owns 27 percent of OpenAI under the new structure, that suggests OpenAI lost about $11.5 billion during the quarter, as noted by The Register. That quarterly loss figure exceeds half of OpenAI’s expected revenue for the entire year.
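The Register’s back-of-the-envelope math can be reproduced directly. A minimal sketch, using the figures reported above (a rough estimate; equity-method accounting can complicate how Microsoft attributes the loss):

```python
# Estimate OpenAI's total quarterly loss from Microsoft's reported share.
# Rough back-of-the-envelope math based on the figures in the article.

microsoft_loss_share = 3.1e9   # Microsoft's reported hit to net income, USD
microsoft_stake = 0.27         # Microsoft's approximate ownership of OpenAI

implied_total_loss = microsoft_loss_share / microsoft_stake
print(f"Implied OpenAI quarterly loss: ${implied_total_loss / 1e9:.1f}B")
# → Implied OpenAI quarterly loss: $11.5B

expected_annual_revenue = 20e9  # OpenAI's expected year-end revenue, USD
print(f"Loss as share of expected annual revenue: "
      f"{implied_total_loss / expected_annual_revenue:.0%}")
# → Loss as share of expected annual revenue: 57%
```

One quarter’s implied loss thus comes to well over half of the full year’s expected revenue, which is the comparison the article draws.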

ChatGPT maker reportedly eyes $1 trillion IPO despite major quarterly losses Read More »

meta-denies-torrenting-porn-to-train-ai,-says-downloads-were-for-“personal-use”

Meta denies torrenting porn to train AI, says downloads were for “personal use”

Instead, Meta argued, available evidence “is plainly indicative” that the flagged adult content was torrented for “private personal use”—since the small amount linked to Meta IP addresses and employees represented only “a few dozen titles per year intermittently obtained one file at a time.”

“The far more plausible inference to be drawn from such meager, uncoordinated activity is that disparate individuals downloaded adult videos for personal use,” Meta’s filing said.

For example, unlike lawsuits filed by book authors whose works are part of an enormous dataset used to train AI, the activity on Meta’s corporate IP addresses only amounted to about 22 downloads per year. That is nowhere near the “concerted effort to collect the massive datasets Plaintiffs allege are necessary for effective AI training,” Meta argued.

Further, that alleged activity can’t even reliably be linked to any Meta employee, Meta argued.

Strike 3 “does not identify any of the individuals who supposedly used these Meta IP addresses, allege that any were employed by Meta or had any role in AI training at Meta, or specify whether (and which) content allegedly downloaded was used to train any particular Meta model,” Meta wrote.

Meanwhile, “tens of thousands of employees,” as well as “innumerable contractors, visitors, and third parties access the Internet at Meta every day,” Meta argued. So while it’s “possible one or more Meta employees” downloaded Strike 3’s content over the last seven years, “it is just as possible” that a “guest, or freeloader,” or “contractor, or vendor, or repair person—or any combination of such persons—was responsible for that activity,” Meta suggested.

Other alleged activity included a claim that a Meta contractor was directed to download adult content at his father’s house, but those downloads, too, “are plainly indicative of personal consumption,” Meta argued. That contractor worked as an “automation engineer,” Meta noted, with no apparent basis provided for why he would be expected to source AI training data in that role. “No facts plausibly” tie “Meta to those downloads,” Meta claimed.

Meta denies torrenting porn to train AI, says downloads were for “personal use” Read More »

nvidia-hits-record-$5-trillion-mark-as-ceo-dismisses-ai-bubble-concerns

Nvidia hits record $5 trillion mark as CEO dismisses AI bubble concerns

Partnerships and government contracts fuel optimism

At the GTC conference on Tuesday, Nvidia’s CEO went out of his way to repeatedly praise Donald Trump and his policies for accelerating domestic tech investment while warning that excluding China from Nvidia’s ecosystem could limit US access to half the world’s AI developers. The overall event stressed Nvidia’s role as an American company, with Huang even nodding to Trump’s signature slogan in his sign-off by thanking the audience for “making America great again.”

Trump’s cooperation is paramount for Nvidia because US export controls have effectively blocked Nvidia’s AI chips from China, costing the company billions of dollars in revenue. Bob O’Donnell of TECHnalysis Research told Reuters that “Nvidia clearly brought their story to DC to both educate and gain favor with the US government. They managed to hit most of the hottest and most influential topics in tech.”

Beyond the political messaging, Huang announced a series of partnerships and deals that apparently helped ease investor concerns about Nvidia’s future. The company announced collaborations with Uber Technologies, Palantir Technologies, and CrowdStrike Holdings, among others. Nvidia also revealed a $1 billion investment in Nokia to support the telecommunications company’s shift toward AI and 6G networking.

The agreement with Uber will power a fleet of 100,000 self-driving vehicles with Nvidia technology, with automaker Stellantis among the first to deliver the robotaxis. Palantir will pair Nvidia’s technology with its Ontology platform to use AI techniques for logistics insights, with Lowe’s as an early adopter. Eli Lilly plans to build what Nvidia described as the most powerful supercomputer owned and operated by a pharmaceutical company, relying on more than 1,000 Blackwell AI accelerator chips.

The $5 trillion valuation surpasses the total cryptocurrency market value and equals roughly half the size of the pan-European Stoxx 600 equities index, Reuters notes. At current prices, Huang’s stake in Nvidia would be worth about $179.2 billion, making him the world’s eighth-richest person.

Nvidia hits record $5 trillion mark as CEO dismisses AI bubble concerns Read More »

senators-move-to-keep-big-tech’s-creepy-companion-bots-away-from-kids

Senators move to keep Big Tech’s creepy companion bots away from kids

Big Tech says bans aren’t the answer

As the bill advances, it could change, senators and parents acknowledged at the press conference. It will likely face backlash from privacy advocates who have raised concerns that widely collecting personal data for age verification puts sensitive information at risk of a data breach or other misuse.

The tech industry has already voiced opposition. On Tuesday, Chamber of Progress, a Big Tech trade group, criticized the bill as taking a “heavy-handed approach” to child safety. The group’s vice president of US policy and government relations, K.J. Bagchi, said that “we all want to keep kids safe, but the answer is balance, not bans.

“It’s better to focus on transparency when kids chat with AI, curbs on manipulative design, and reporting when sensitive issues arise,” Bagchi said.

However, several organizations dedicated to child safety online, including the Young People’s Alliance, the Tech Justice Law Project, and the Institute for Families and Technology, cheered senators’ announcement Tuesday. The GUARD Act, these groups told Time, is just “one part of a national movement to protect children and teens from the dangers of companion chatbots.”

Mourning parents are rallying behind that movement. Earlier this month, Garcia praised California for “finally” passing the first state law requiring companies to protect their users who express suicidal ideations to chatbots.

“American families, like mine, are in a battle for the online safety of our children,” Garcia said at that time.

During Tuesday’s press conference, Blumenthal noted that the chatbot ban bill was just one initiative of many that he and Hawley intend to raise to heighten scrutiny on AI firms.

Senators move to keep Big Tech’s creepy companion bots away from kids Read More »

openai-data-suggests-1-million-users-discuss-suicide-with-chatgpt-weekly

OpenAI data suggests 1 million users discuss suicide with ChatGPT weekly

Earlier this month, the company unveiled a wellness council to address these concerns, though critics noted the council did not include a suicide prevention expert. OpenAI also recently rolled out controls for parents of children who use ChatGPT. The company says it’s building an age prediction system to automatically detect children using ChatGPT and impose a stricter set of age-related safeguards.

Rare but impactful conversations

The data shared on Monday appears to be part of the company’s effort to demonstrate progress on these issues, although it also shines a spotlight on just how deeply AI chatbots may be affecting the health of the public at large.

In a blog post on the recently released data, OpenAI says the types of ChatGPT conversations that might trigger concerns about “psychosis, mania, or suicidal thinking” are “extremely rare,” and thus difficult to measure. The company estimates that around 0.07 percent of users active in a given week, and 0.01 percent of messages, indicate possible signs of mental health emergencies related to psychosis or mania. For emotional attachment, it estimates that around 0.15 percent of weekly active users, and 0.03 percent of messages, indicate potentially heightened levels of emotional attachment to ChatGPT.
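To put those percentages in absolute terms, here is a quick sketch that converts the weekly rates into rough user counts. The ~800 million weekly active user base is an assumption (a figure OpenAI has cited publicly, but not stated in this piece):

```python
# Convert OpenAI's reported weekly rates into rough user counts.
# ASSUMPTION: ~800 million weekly active users, a figure OpenAI has
# cited publicly; the exact base is not given in the data release.

weekly_active_users = 800_000_000

rates = {
    "possible psychosis/mania signs": 0.0007,   # 0.07% of weekly users
    "heightened emotional attachment": 0.0015,  # 0.15% of weekly users
}

for label, rate in rates.items():
    print(f"{label}: ~{round(weekly_active_users * rate):,} users/week")
# 0.07% of 800M is roughly 560,000; 0.15% is roughly 1,200,000
```

Even at “extremely rare” rates, the sheer size of the user base yields counts on the order of the headline’s million-plus figure.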

OpenAI also claims that in an evaluation of over 1,000 challenging mental health-related conversations, its new GPT-5 model was 92 percent compliant with its desired behaviors, compared to 27 percent for a previous GPT-5 model released on August 15. The company also says the latest version of GPT-5 maintains its safeguards more reliably in long conversations; OpenAI has previously admitted that its safeguards are less effective during extended exchanges.

In addition, OpenAI says it’s adding new evaluations to attempt to measure some of the most serious mental health issues facing ChatGPT users. The company says its baseline safety testing for its AI language models will now include benchmarks for emotional reliance and non-suicidal mental health emergencies.

Despite the ongoing mental health concerns, OpenAI CEO Sam Altman announced on October 14 that the company will allow verified adult users to have erotic conversations with ChatGPT starting in December. The company had loosened ChatGPT content restrictions in February but then dramatically tightened them after the August lawsuit. Altman explained that OpenAI had made ChatGPT “pretty restrictive to make sure we were being careful with mental health issues” but acknowledged this approach made the chatbot “less useful/enjoyable to many users who had no mental health problems.”

If you or someone you know is feeling suicidal or in distress, please call the Suicide Prevention Lifeline number, 1-800-273-TALK (8255), which will put you in touch with a local crisis center.

OpenAI data suggests 1 million users discuss suicide with ChatGPT weekly Read More »