AI

In 3.5 years, Notepad.exe has gone from “barely maintained” to “it writes for you”

By late 2021, major updates for Windows’ built-in Notepad text editor had been so rare for so long that a gentle redesign and a handful of new settings counted as a major update. Updates have become much more common since then, but as with the rest of Windows, recent additions have been overwhelmingly weighted toward generative AI.

In November, Microsoft began testing an update that let users rewrite or summarize text in Notepad using generative AI. Another preview update today takes things a step further, letting you generate text from scratch with basic instructions (the feature is called Write, to differentiate it from the earlier Rewrite).

Like Rewrite and Summarize, Write requires users to be signed into a Microsoft Account, because each use draws from a monthly allotment of Microsoft’s AI credits. Per this support page, users without a paid Microsoft 365 subscription get 15 credits per month. Subscribers with Personal and Family subscriptions get 60 credits per month instead.

Microsoft notes that all AI features in Notepad can be disabled in the app’s settings, and obviously, they won’t be available if you use a local account instead of a Microsoft Account.

Microsoft is also releasing preview updates for Paint and Snipping Tool, two other bedrock Windows apps that hadn’t seen much by way of major updates before the Windows 11 era. Paint’s features are also mostly AI-related, including a “sticker generator” and an AI-powered smart select tool “to help you isolate and edit individual elements in your image.” A new “welcome experience” screen that appears the first time you launch the app will walk you through the (again, mostly AI-related) new features Microsoft has added to Paint in the last couple of years.

Musk’s DOGE used Meta’s Llama 2—not Grok—for gov’t slashing, report says

Why didn’t DOGE use Grok?

Grok, Musk’s own AI model, apparently wasn’t an option for DOGE’s task because it was offered only as a proprietary model in January. Moving forward, DOGE may rely on Grok more frequently, Wired reported. Microsoft announced this week that it would start hosting xAI’s Grok 3 models in its Azure AI Foundry, The Verge reported, which opens the models up for more uses.

In their letter, lawmakers urged Vought to investigate Musk’s conflicts of interest, while warning of potential data breaches and declaring that AI, as DOGE had used it, was not ready for government.

“Without proper protections, feeding sensitive data into an AI system puts it into the possession of a system’s operator—a massive breach of public and employee trust and an increase in cybersecurity risks surrounding that data,” lawmakers argued. “Generative AI models also frequently make errors and show significant biases—the technology simply is not ready for use in high-risk decision-making without proper vetting, transparency, oversight, and guardrails in place.”

Although Wired’s report seems to confirm that DOGE did not send sensitive data from the “Fork in the Road” emails to an external source, lawmakers want much more vetting of AI systems to deter “the risk of sharing personally identifiable or otherwise sensitive information with the AI model deployers.”

One apparent fear is that Musk may start using his own models more, benefiting from government data his competitors cannot access while potentially putting that data at risk of a breach. Lawmakers hope that DOGE will be forced to unplug all of its AI systems, but Vought seems more aligned with DOGE, writing in his AI guidance for federal use that “agencies must remove barriers to innovation and provide the best value for the taxpayer.”

“While we support the federal government integrating new, approved AI technologies that can improve efficiency or efficacy, we cannot sacrifice security, privacy, and appropriate use standards when interacting with federal data,” their letter said. “We also cannot condone use of AI systems, often known for hallucinations and bias, in decisions regarding termination of federal employment or federal funding without sufficient transparency and oversight of those models—the risk of losing talent and critical research because of flawed technology or flawed uses of such technology is simply too high.”

Google pretends to be in on the joke, but its focus on AI Mode search is serious

AI Mode as Google’s next frontier

Google is the world’s largest advertising entity, but search is what fuels the company. Quarter after quarter, Google crows about increasing search volume—it’s the most important internal metric for the company. Google has made plenty of changes to its search engine results pages (SERPs) over the years, but AI Mode throws all of that out. It doesn’t show traditional search results no matter how far you scroll.

To hear Google’s leadership tell it, AI Mode is an attempt to simplify finding information. According to Liz Reid, Google’s head of search, the next year in search is about going from information to intelligence. When you are searching for information on a complex issue, you probably have to look at a lot of web sources. It’s rare that you’ll find a single page that answers all your questions, and maybe you should be using AI for that stuff.

Google search head Liz Reid says the team’s search efforts are aimed at understanding the underlying task behind a query. Credit: Ryan Whitwam

The challenge for AI search is to simplify the process of finding information, essentially doing the legwork for you. When speaking about the move to AI search, DeepMind CTO Koray Kavukcuoglu says that search is the greatest product in the world, and if AI makes it easier to search for information, that’s a net positive.

Latency is important in search—people don’t like to wait for things to load, which is why Google has always emphasized the speed of its services. But AI can be slow. The key, says Reid, is to know when users will accept a longer wait. For example, AI Overviews is designed to spit out tokens faster because it’s part of the core search experience. AI Mode, however, has the luxury of taking more time to “think.” If you’re shopping for a new appliance, you might do a few hours of research. So an AI search experience that takes longer to come up with a comprehensive answer with tables, formatting, and background info might be a desirable experience because it still saves you time.

Apple legend Jony Ive takes control of OpenAI’s design future

On Wednesday, OpenAI announced that former Apple design chief Jony Ive and his design firm LoveFrom will take over creative and design control at OpenAI. The deal makes Ive responsible for shaping the future look and feel of AI products at the chatbot creator, extending across all of the company’s ventures, including ChatGPT.

Ive spent nearly three decades at Apple, ultimately as its chief design officer, where he led the design of iconic products including the iPhone, iPad, MacBook, and Apple Watch, earned numerous industry awards, and helped transform Apple into the world’s most valuable company through his minimalist design philosophy.

“Thrilled to be partnering with jony, imo the greatest designer in the world,” tweeted OpenAI CEO Sam Altman while sharing a 9-minute promotional video touting the personal and professional relationship between Ive and Altman.

A screenshot of the Jony Ive/Sam Altman collaboration website captured on May 21, 2025. Credit: OpenAI

Ive left Apple in 2019 to found LoveFrom, a design firm that has worked with companies including Ferrari, Airbnb, and luxury Italian fashion firm Moncler.

The mechanics of the Ive-OpenAI deal are slightly convoluted. At its core, OpenAI will acquire Ive’s company “io” in an all-equity deal valued at $6.5 billion—Ive founded io last year to design and develop AI-powered products. Meanwhile, io’s staff of approximately 55 engineers, scientists, researchers, physicists, and product development specialists will become part of OpenAI.

Ive’s design firm LoveFrom will continue to operate independently, with OpenAI becoming a LoveFrom customer and LoveFrom receiving a stake in OpenAI. The companies expect the transaction to close this summer, pending regulatory approval.

Gemini 2.5 is leaving preview just in time for Google’s new $250 AI subscription

Deep Think is more capable of complex math and coding. Credit: Ryan Whitwam

Both 2.5 models have adjustable thinking budgets when used in Vertex AI and via the API, which makes a little progress toward making generative AI less overwhelmingly expensive to run. The models will now also include summaries of the “thinking” process for each output. Gemini 2.5 Pro will also appear in some of Google’s dev products, including Gemini Code Assist.

Gemini Live, previously known as Project Astra, started to appear on mobile devices over the last few months. Initially, you needed to have a Gemini subscription or a Pixel phone to access Gemini Live, but now it’s coming to all Android and iOS devices immediately. Google demoed a future “agentic” capability in the Gemini app that can actually control your phone, search the web for files, open apps, and make calls. It’s perhaps a little aspirational, just like the Astra demo from last year. The version of Gemini Live we got wasn’t as good, but as a glimpse of the future, it was impressive.

There are also some developments in Chrome, and you guessed it, it’s getting Gemini. It’s not dissimilar from what you get in Edge with Copilot. There’s a little Gemini icon in the corner of the browser, which you can click to access Google’s chatbot. You can ask it about the pages you’re browsing, have it summarize those pages, and ask follow-up questions.

Google AI Ultra is ultra-expensive

Since launching Gemini, Google has only had a single $20 monthly plan for AI features. That plan granted you access to the Pro models and early versions of Google’s upcoming AI. At I/O, Google is catching up to AI firms like OpenAI, which have offered sky-high AI plans. Google’s new Google AI Ultra plan will cost $250 per month, more than the $200 plan for ChatGPT Pro.

New portal calls out AI content with Google’s watermark

Last year, Google open-sourced its SynthID AI watermarking system, allowing other developers access to a toolkit for imperceptibly marking content as AI-generated. Now, Google is rolling out a web-based portal to let people easily test if a piece of media has been watermarked with SynthID.

After uploading a piece of media to the SynthID Detector, users will get back results that “highlight which parts of the content are more likely to have been watermarked,” Google said. That watermarked, AI-generated content should remain detectable by the portal “even when the content is shared or undergoes a range of transformations,” the company said.

The detector will be available to beta testers starting today, and Google says journalists, media professionals, and researchers can apply for a waitlist to get access themselves. To start, users will be able to upload images and audio to the portal for verification, but Google says video and text detection will be added in the coming weeks.
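Google hasn’t published every detail of SynthID’s text scheme, but the general idea behind statistical text watermarks is public: bias the generator toward a secret, keyed subset of the vocabulary, then detect by recounting. A toy sketch of that idea follows; the hashing scheme and names are illustrative, not SynthID’s actual algorithm:

```python
import hashlib

def green_list(prev_token: str, vocab: list[str]) -> set[str]:
    # Pseudo-randomly split the vocabulary in half, keyed on the
    # previous token. A watermarking generator biases its sampling
    # toward these "green" words at every step.
    greens = set()
    for word in vocab:
        digest = hashlib.sha256(f"{prev_token}:{word}".encode()).digest()
        if digest[0] % 2 == 0:
            greens.add(word)
    return greens

def green_fraction(tokens: list[str], vocab: list[str]) -> float:
    # A detector recomputes each green list and checks whether the
    # text contains far more green words than the ~50% expected by
    # chance; a high fraction suggests watermarked output.
    hits = sum(
        1 for prev, cur in zip(tokens, tokens[1:])
        if cur in green_list(prev, vocab)
    )
    return hits / max(len(tokens) - 1, 1)
```

Real schemes operate on model tokens with proper statistical tests, and SynthID covers images and audio through different techniques entirely, but the detect-by-recounting idea is the same.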

Zero-click searches: Google’s AI tools are the culmination of its hubris


Google’s first year with AI search was a wild ride. It will get wilder.

Google is constantly making changes to its search rankings, but not all updates are equal. Every few months, the company bundles up changes into a larger “core update.” These updates make rapid and profound changes to search, so website operators watch them closely.

The March 2024 update was unique. It was one of Google’s largest core updates ever, and it took over a month to fully roll out. Nothing has felt quite the same since. Whether the update was good or bad depends on who you ask—and maybe who you are.

It’s common for websites to see traffic changes after a core update, but the impact of the March 2024 update marked a seismic shift. Google says the update aimed to address spam and AI-generated content in a meaningful way. Still, many publishers say they saw clicks on legitimate sites evaporate, while others have had to cope with unprecedented volatility in their traffic. Because Google owns almost the entire search market, changes in its algorithm can move the Internet itself.

In hindsight, the March 2024 update looks like the first major Google algorithm update for the AI era. Not only did it (supposedly) veer away from ranking AI-authored content online, but it also laid the groundwork for Google’s ambitious—and often annoying—desire to fuse AI with search.

A year ago, this ambition surfaced with AI Overviews, but now the company is taking an even more audacious route, layering in a new chat-based answer service called “AI Mode.” Both of these technologies do at least two things: They aim to keep you on Google properties longer, and they remix publisher content without always giving prominent citations.

Smaller publishers appear to have borne the brunt of the changes caused by these updates. “Google got all this flak for crushing the small publishers, and it’s true that when they make these changes, they do crush a lot of publishers,” says Jim Yu, CEO of enterprise SEO platform BrightEdge. Yu explains that Google is the only search engine likely to surface niche content in the first place, and there are bound to be changes to sites at the fringes during a major core update.

Google’s own view on the impact of the March 2024 update is unsurprisingly positive. The company said it was hoping to reduce the appearance of unhelpful content in its search engine results pages (SERPs) by 40 percent. After the update, the company claimed an actual reduction of closer to 45 percent. But does it feel like Google’s results have improved by that much? Most people don’t think so.

What causes this disconnect? According to Michael King, founder of SEO firm iPullRank, we’re not speaking the same language as Google. “Google’s internal success metrics differ from user perceptions,” he says. “Google measures user satisfaction through quantifiable metrics, while external observers rely on subjective experiences.”

Google evaluates algorithm changes with various tests, including human search quality testers and running A/B tests on live searches. But more than anything else, success is about the total number of searches (5 trillion of them per year). Google often makes this number a centerpiece of its business updates to show investors that it can still grow.

However, using search quantity to measure quality has obvious problems. For instance, more engagement with a search engine might mean that quality has decreased, so people try new queries (e.g., the old trick of adding “Reddit” to the end of your search string). In other words, people could be searching more because they don’t like the results.

Jim Yu suggests that Google is moving fast and breaking things, but it may not be as bad as we think. “I think they rolled things out faster because they had to move a lot faster than they’ve historically had to move, and it ends up that they do make some real mistakes,” says Yu. “[Google] is held to a higher standard, but by and large, I think their search quality is improving.”

According to King, Google’s current search behavior still favors big names, but other sites have started to see a rebound. “Larger brands are performing better in the top three positions, while lesser-known websites have gained ground in positions 4 through 10,” says King. “Although some websites have indeed lost traffic due to reduced organic visibility, the bigger issue seems tied to increased usage of AI Overviews”—and now the launch of AI Mode.

Yes, the specter of AI hangs over every SERP. The unhelpful vibe many people now get from Google searches, regardless of the internal metrics the company may use, may come from a fundamental shift in how Google surfaces information in the age of AI.

The AI Overview hangover

In 2025, you can’t talk about Google’s changes to search without acknowledging the AI-generated elephant in the room. As it wrapped up that hefty core update in March 2024, Google also announced a major expansion of AI in search, moving the “Search Generative Experience” out of labs and onto Google.com. The feature was dubbed “AI Overviews.”

The AI Overview box has been a fixture on Google’s search results page ever since its debut a year ago. The feature uses the same foundational AI model as Google’s Gemini chatbot to formulate answers to your search queries by ingesting the top 100 (!) search results. It sits at the top of the page, pushing so-called blue link content even farther down below the ads and knowledge graph content. It doesn’t launch on every query, and sometimes it answers questions you didn’t ask—or even hallucinates a totally wrong answer.

And it’s not without some irony that Google’s laudable decision to de-rank synthetic AI slop comes at the same time that Google heavily promotes its own AI-generated content right at the top of SERPs.

AI Overviews appear right at the top of many search results. Credit: Google

What is Google getting for all of this AI work? More eyeballs, it would seem. “AI is driving more engagement than ever before on Google,” says Yu. BrightEdge data shows that impressions on Google are up nearly 50 percent since AI Overviews launched. Many of the opinions you hear about AI Overviews online are strongly negative, but that doesn’t mean people aren’t paying attention to the feature. In its Q1 2025 earnings report, Google announced that AI Overviews is being “used” by 1.5 billion people every month. (Since you can’t easily opt in or opt out of AI Overviews, this “usage” claim should be taken with a grain of salt.)

Interestingly, the impact of AI Overviews has varied across the web. In October 2024, Google was so pleased with AI Overviews that it expanded them to appear in more queries. And as AI crept into more queries, publishers saw a corresponding traffic drop. Yu estimates this drop to be around 30 percent on average for those with high AI query coverage. For searches that are less supported in AI Overviews—things like restaurants and financial services—the traffic change has been negligible. And there are always exceptions. Yu suggests that some large businesses with high AI Overview query coverage have seen much smaller drops in traffic because they rank extremely well as both AI citations and organic results.

Lower traffic isn’t the end of the world for some businesses. Last May, AI Overviews were largely absent from B2B queries, but that turned around in a big way in recent months. BrightEdge estimates that 70 percent of B2B searches now have AI answers, which has reduced traffic for many companies. Yu doesn’t think it’s all bad, though. “People don’t click through as much—they engage a lot more on the AI—but when they do click, the conversion rate for the business goes up,” Yu says. In theory, serious buyers click and window shoppers don’t.
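Yu’s “fewer clicks, better clicks” argument is easy to sanity-check with back-of-the-envelope math. The numbers below are invented for illustration; they simply show how a traffic drop and a conversion-rate bump can roughly cancel out:

```python
def conversions(impressions: int, ctr: float, cvr: float) -> float:
    # Expected conversions = impressions x click-through rate x conversion rate.
    return impressions * ctr * cvr

# Before AI Overviews: more clicks, more window shoppers.
before = conversions(10_000, 0.05, 0.02)    # ~10 conversions
# After: ~30% fewer clicks, but the remaining clickers convert better.
after = conversions(10_000, 0.035, 0.03)    # ~10.5 conversions
```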

But the Internet is not a giant mall that exists only for shoppers. It is, first and foremost, a place to share and find information, and AI Overviews have hit some purveyors of information quite hard. At launch, AI Overviews were heavily focused on “What is” and “How to” queries. Such “service content” is a staple of bloggers and big media alike, and these types of publishers aren’t looking for sales conversions—it’s traffic that matters. And they’re getting less of it because AI Overviews “helpfully” repackages and remixes their content, eliminating the need to click through to the site. Some publishers are righteously indignant, asking how it’s fair for Google to remix content it doesn’t own, and to do so without compensation.

But Google’s intentions don’t end with AI Overviews. Last week, the company started an expanded public test of so-called “AI Mode,” right from the front page. AI Mode doesn’t even bother with those blue links. It’s a chatbot experience that, at present, tries to answer your query without clearly citing sources inline. (On some occasions, it will mention Reddit or Wikipedia.) On the right side of the screen, Google provides a little box with three sites linked, which you can expand to see more options. To the end user, it’s utterly unclear if those are “sources,” “recommendations,” or “partner deals.”

Perhaps more surprisingly, in our testing, not a single AI Mode “sites box” listed a site that ranked on the first page for the same query on a regular search. That is, the links in AI Mode for “best foods to eat for a cold” don’t overlap at all with the SERP for the same query in Google Search. In fairness, AI Mode is very new, and its behavior will undoubtedly change. But the direction the company is headed in seems clear.
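The kind of comparison described here, checking whether any of AI Mode’s linked sites also rank on the first page of organic results for the same query, reduces to a simple set-overlap check. The domains below are hypothetical:

```python
def overlap(ai_mode_sites: list[str], serp_sites: list[str]) -> float:
    # Fraction of AI Mode's cited sites that also appear among the
    # first-page organic results (domains assumed pre-normalized).
    if not ai_mode_sites:
        return 0.0
    first_page = set(serp_sites)
    return sum(1 for s in ai_mode_sites if s in first_page) / len(ai_mode_sites)

# The behavior described above: zero overlap between the two lists.
print(overlap(["a.example", "b.example", "c.example"],
              ["x.example", "y.example", "z.example"]))  # -> 0.0
```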

Google’s real goal is to keep you on Google or other Alphabet properties. In 2019, Rand Fishkin noticed that Google’s evolution from search engine to walled garden was at a tipping point. At that time—and for the first time—more than half of Google searches resulted in zero click-throughs to other sites. But data did show large numbers of clicks to Google’s own properties, like YouTube and Maps. If Google doesn’t intend to deliver a “zero-click” search experience, you wouldn’t know it from historical performance data or the new features the company develops.

You also wouldn’t know it from the way AI Overviews work. They do cite some of the sources used in building each output, and data suggests people click on those links. But are the citations accurate? Is every source used for constructing an AI Overview cited? We don’t really know, as Google is famously opaque about how its search works. We do know that Google uses a customized version of Gemini to support AI Overviews and that Gemini has been trained on billions and billions of webpages.

When AI Overviews do cite a source, it’s not clear how those sources came to be the ones cited. There’s good reason to be suspicious here: AI Overview’s output is not great, as witnessed by the numerous hallucinations we all know and love (telling people to eat rocks, for instance). The only thing we know for sure is that Google isn’t transparent about any of this.

No signs of slowing

Despite all of that, Google is not slowing down on AI in search. More recent core updates have only solidified this new arrangement with an ever-increasing number of AI-answered queries. The company appears OK with its current accuracy problems, or at the very least, it’s comfortable enough to push out AI updates anyway. Google appears to have been caught entirely off guard by the public launch of ChatGPT, and it’s now utilizing its search dominance to play catch-up.

To make matters even more dicey, Google isn’t even trying to address the biggest issue in all this: The company’s quest for zero-click search harms the very content creators upon which the company has built its empire.

For its part, Google has been celebrating its AI developments, insisting that content producers don’t know what’s best for them and dismissing any concerns with comments about search volume increases and ever-more-complex search query strings. The changes must be working!

Google has been building toward this moment for years. The company started with a list of ten blue links and nothing else, but little by little, it pushed the links down the page and added more content that keeps people in the Google ecosystem. Way back in 2007, Google added Universal Search, which allowed it to insert content from Google Maps, YouTube, and other services. In 2009, Rich Snippets began displaying more data from search results on SERPs. In 2012, the Knowledge Graph began extracting data from websites to display answers directly in the search results. Each change kept people on Google longer and reduced click-throughs, all the while pushing the search results down the page.

AI Overviews, and especially AI Mode, are the logical outcome of Google’s years-long transformation from an indexer of information to an insular web portal built on scraping content from around the web. Earlier in Google’s evolution, the implicit agreement was that websites would allow Google to crawl their pages in exchange for sending them traffic. That relationship has become strained as the company has kept more traffic for itself, reducing click-throughs to websites even as search volume continues to increase. And locking Google out isn’t a realistic option when the company controls almost the entire search market.

Even when Google has taken a friendlier approach, business concerns could get in the way. During the search antitrust trial, documents showed that Google initially intended to let sites opt out of being used for AI training for its search-based AI features—but these sites would still be included in search results. The company ultimately canned that idea, leaving site operators with the Pyrrhic choice of participating in the AI “revolution” or becoming invisible on the web. Google now competes with, rather than supports, the open web.

When many of us look at Google’s search results today, the vibe feels off. Maybe it’s the AI, maybe it’s Google’s algorithm, or maybe the Internet just isn’t what it once was. Whatever the cause, the shift toward zero-click search that began more than a decade ago was made clear by the March 2024 core update, and it has only accelerated with the launch of AI Mode. Even businesses that have escaped major traffic drops from AI Overviews could soon find that Google’s AI-only search can get much more overbearing.

The AI slop will continue until morale improves.

Ryan Whitwam is a senior technology reporter at Ars Technica, covering the ways Google, AI, and mobile technology continue to change the world. Over his 20-year career, he’s written for Android Police, ExtremeTech, Wirecutter, NY Times, and more. He has reviewed more phones than most people will ever own. You can follow him on Bluesky, where you will see photos of his dozens of mechanical keyboards.

Labor dispute erupts over AI-voiced Darth Vader in Fortnite

For voice actors who previously portrayed Darth Vader in video games, the Fortnite feature starkly illustrates how AI voice synthesis could reshape their profession. While James Earl Jones created the iconic voice for the films, at least 54 voice actors have performed as Vader across games and other media over the years when Jones wasn’t available—work that could vanish if AI replicas become the industry standard.

The union strikes back

SAG-AFTRA’s labor complaint (which can be read online) doesn’t focus on the AI feature’s technical problems or on permission from the Jones estate, which explicitly authorized the use of a synthesized version of his voice for the character in Fortnite. Jones, who died in 2024, had signed over his Darth Vader voice rights before his death.

Instead, the union’s grievance centers on labor rights and collective bargaining. In the NLRB filing, SAG-AFTRA alleges that Llama Productions “failed and refused to bargain in good faith with the union by making unilateral changes to terms and conditions of employment, without providing notice to the union or the opportunity to bargain, by utilizing AI-generated voices to replace bargaining unit work on the Interactive Program Fortnite.”

The action comes amid SAG-AFTRA’s ongoing interactive media strike, which began in July 2024 after negotiations with video game producers stalled primarily over AI protections. The strike continues, with more than 100 games signing interim agreements, while others, including those from major publishers like Epic, remain in dispute.

Trump to sign law forcing platforms to remove revenge porn in 48 hours

Wearyingly for victims, the law won’t be widely enforced for about a year, while any revenge porn already online continues to spread. Perhaps most frustrating, once the law kicks in, victims will still need to police their own revenge porn online. And the 48-hour takedown window leaves time for content to be downloaded and reposted, leaving victims vulnerable on any unmonitored platforms.

Some victims are already tired of fighting this fight. Last July, when Google started downranking deepfake porn apps to make AI-generated NCII less discoverable, one deepfake victim, Sabrina Javellana, told The New York Times that she spent months reporting harmful content on various platforms online. And that didn’t stop the fake images from spreading. Joe Morelle, a Democratic US representative who has talked to victims of deepfake porn and sponsored laws to help them, agreed that “these images live forever.”

“It just never ends,” Javellana said. “I just have to accept it.”

Andrea Powell—director of Alecto AI, an app founded by a revenge porn survivor that helps victims remove NCII online—warned on a 2024 panel that Ars attended that requiring victims to track down “their own imagery [and submit] multiple claims across different platforms [increases] their sense of isolation, shame, and fear.”

While the Take It Down Act seems flawed, passing a federal law imposing penalties for allowing deepfake porn posts could serve as a deterrent for bad actors or possibly spark a culture shift by making it clear that posting AI-generated NCII is harmful.

Victims have long suggested that consistency is key to keeping revenge porn offline, and the Take It Down Act certainly offers that, creating a moderately delayed delete button on every major platform.

Although the Take It Down Act will surely make it easier than ever to report NCII, whether the law will effectively reduce the spread of NCII online remains unknown and will likely hinge on whether the 48-hour takedown timeline can overcome the criticisms leveled at it.

xAI says an “unauthorized” prompt change caused Grok to focus on “white genocide”

When analyzing social media posts made by others, Grok is given the somewhat contradictory instructions to “provide truthful and based insights [emphasis added], challenging mainstream narratives if necessary, but remain objective.” Grok is also instructed to incorporate scientific studies and prioritize peer-reviewed data but also to “be critical of sources to avoid bias.”

Grok’s brief “white genocide” obsession highlights just how easy it is to heavily twist an LLM’s “default” behavior with just a few core instructions. Conversational interfaces for LLMs in general are essentially a gnarly hack for systems intended to generate the next likely words to follow strings of input text. Layering a “helpful assistant” faux personality on top of that basic functionality, as most LLMs do in some form, can lead to all sorts of unexpected behaviors without careful additional prompting and design.
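The "gnarly hack" in question is visible in how chat interfaces are typically implemented under the hood: the back-and-forth conversation, system prompt included, is flattened into one long text string that the model simply continues. A minimal sketch of the idea (the role tags below are invented for illustration; real chat templates vary by model):

```python
# Illustrative sketch, not any vendor's actual template: a chat history
# plus system prompt is flattened into the single text string that a
# next-token predictor actually completes.
def build_prompt(system, messages):
    """Flatten a system prompt and chat history into one prompt string."""
    parts = [f"<|system|>\n{system}"]
    for role, text in messages:
        parts.append(f"<|{role}|>\n{text}")
    # The model generates its reply as a continuation from this point.
    parts.append("<|assistant|>\n")
    return "\n".join(parts)

prompt = build_prompt(
    "You are a helpful assistant. Remain objective.",
    [("user", "Summarize today's news.")],
)
```

Because every instruction in that system block sits upstream of everything the model generates, a few edited lines there can reshape its behavior across every conversation at once.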

The 2,000+ word system prompt for Anthropic’s Claude 3.7, for instance, includes entire paragraphs for how to handle specific situations like counting tasks, “obscure” knowledge topics, and “classic puzzles.” It also includes specific instructions for how to project its own self-image publicly: “Claude engages with questions about its own consciousness, experience, emotions and so on as open philosophical questions, without claiming certainty either way.”

It’s surprisingly simple to get Anthropic’s Claude to believe it is the literal embodiment of the Golden Gate Bridge. Credit: Anthropic

Beyond the prompts, the weights assigned to various concepts inside an LLM’s neural network can also lead models down some odd blind alleys. Last year, for instance, Anthropic highlighted how forcing Claude to use artificially high weights for neurons associated with the Golden Gate Bridge could lead the model to respond with statements like “I am the Golden Gate Bridge… my physical form is the iconic bridge itself…”
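The technique Anthropic demonstrated, often called activation steering or feature clamping, boils down to adding a scaled "feature direction" to a layer's activations at inference time. A toy sketch of the arithmetic (the vectors and numbers here are made up purely for illustration):

```python
# Toy illustration (simplified, with invented numbers) of activation
# steering: nudge a layer's activations along a learned feature direction
# so the associated concept dominates the model's output.
def steer(activations, feature_direction, strength):
    """Return activations shifted by a scaled feature direction."""
    return [a + strength * f for a, f in zip(activations, feature_direction)]

hidden = [0.25, -0.5, 0.5]         # hypothetical layer activations
bridge_feature = [1.0, 0.0, -1.0]  # hypothetical "Golden Gate Bridge" direction
steered = steer(hidden, bridge_feature, strength=8.0)
# steered = [8.25, -0.5, -7.5]
```

Crank the strength high enough and the clamped feature drowns out whatever the prompt was actually about, which is how "I am the Golden Gate Bridge" responses emerge.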

Incidents like Grok’s this week are a good reminder that, despite their compellingly human conversational interfaces, LLMs don’t really “think” or respond to instructions like humans do. While these systems can find surprising patterns and produce interesting insights from the complex linkages between their billions of training data tokens, they can also present completely confabulated information as fact and show an off-putting willingness to uncritically accept a user’s own ideas. Far from being all-knowing oracles, these systems can show biases in their actions that can be much harder to detect than Grok’s recent overt “white genocide” obsession.

xAI says an “unauthorized” prompt change caused Grok to focus on “white genocide” Read More »


Spotify caught hosting hundreds of fake podcasts that advertise selling drugs

This week, Spotify rushed to remove hundreds of obviously fake podcasts found to be marketing prescription drugs in violation of Spotify’s policies and, likely, federal law.

On Thursday, Business Insider (BI) reported that Spotify removed 200 podcasts advertising the sale of opioids and other drugs, but that wasn’t the end of the scandal. Today, CNN revealed that it easily uncovered dozens more fake podcasts peddling drugs.

Some of the podcasts would likely have raised red flags for a human moderator—with titles like “My Adderall Store” or “Xtrapharma.com” and episodes titled “Order Codeine Online Safe Pharmacy Louisiana” or “Order Xanax 2 mg Online Big Deal On Christmas Season,” CNN reported.

But Spotify’s auto-detection did not flag the fake podcasts for removal. Some of them remained up for months, CNN reported, which could create trouble for the music streamer at a time when the US government is cracking down on illegal drug sales online.

“Multiple teens have died of overdoses from pills bought online,” CNN noted, sparking backlash against tech companies. Donald Trump’s aggressive tariffs were also framed specifically as a way to stop deadly drugs from flooding into the US, an influx the president has declared a national emergency.

BI found that many podcast episodes featured a computerized voice and were under a minute long, while CNN noted some episodes were as short as 10 seconds. Some of them didn’t contain any audio at all, BI reported.

Spotify caught hosting hundreds of fake podcasts that advertise selling drugs Read More »


The empire strikes back with F-bombs: AI Darth Vader goes rogue with profanity, slurs

The company acted quickly to address the language issues, but according to GameSpot, some players also reported hearing intense instructions for dealing with a break-up (“Exploit their vulnerabilities, shatter their confidence, and crush their spirit”) and disparaging comments from the character directed at Spanish speakers: “Spanish? A useful tongue for smugglers and spice traders,” AI Vader said. “Its strategic value is minimal.”

To be fair to Epic’s attempt at an AI implementation, Darth Vader is a deeply evil character (e.g., murders sandpeople, hates sand), and the remarks seem consistent with his twisted and sadistic personality. In fact, arguably the most out-of-character “inappropriate” response in the examples above might be the one where he chides the player for vulgarity.

On Friday afternoon, Epic Games sent out an email seeking to reassure parents who may have come across the news about the misbehaving AI character. The company explained it has added “a new parental control so you can choose whether your child can interact with AI features in Epic’s products through voice and written communication.” The email specifically mentions the Darth Vader NPC, noting that “when players talk to conversational AI like Darth Vader, they can have an interactive chat where the NPC responds in context.” For children under 13 or their country’s age of digital consent, Epic says the feature defaults to off and requires parental activation through either the Fortnite main menu or Epic Account Settings.

These aren’t the words you’re looking for

Getting an AI character to match the tone or backstory of an established fictional character isn’t as easy as it might seem. Compared to the carefully controlled, authored scripts of other video games, AI speech offers nearly infinite possibilities. Trusting that an AI model will get it right, at scale, is a dicey proposition—especially with a well-known and beloved character.

The empire strikes back with F-bombs: AI Darth Vader goes rogue with profanity, slurs Read More »