AI


Apple botched the Apple Intelligence launch, but its long-term strategy is sound


I’ve spent a week with Apple Intelligence—here are the takeaways.

Apple Intelligence includes features like Clean Up, which lets you pick from glowing objects it has recognized to remove them from a photo. Credit: Samuel Axon

Ask a few random people about Apple Intelligence and you’ll probably get quite different responses.

One might be excited about the new features. Another could opine that no one asked for this and the company is throwing away its reputation with creatives and artists to chase a fad. Another still might tell you that regardless of the potential value, Apple is simply too late to the game to make a mark.

The release of Apple’s first Apple Intelligence-branded AI tools in iOS 18.1 last week makes all those perspectives understandable.

The first wave of features in Apple’s delayed release shows promise—and some of them may be genuinely useful, especially with further refinement. At the same time, Apple’s approach seems rushed, as if the company is cutting some corners to catch up where some perceive it has fallen behind.

That impatient, unusually undisciplined approach to the rollout could undermine the value proposition of AI tools for many users. Nonetheless, Apple’s strategy might just work out in the long run.

What’s included in “Apple Intelligence”

I’m basing those conclusions on about a week spent with both the public release of iOS 18.1 and the developer beta of iOS 18.2. Between them, the majority of features announced back in June under the “Apple Intelligence” banner are present.

Let’s start with a quick rundown of which Apple Intelligence features are in each release.

iOS 18.1 public release

  • Writing Tools
    • Proofreading
    • Rewriting in friendly, professional, or concise voices
    • Summaries in prose, key points, bullet point list, or table format
  • Text summaries
    • Summarize text from Mail messages
    • Summarize text from Safari pages
  • Notifications
    • Notification summaries
    • Reduce Interruptions – Intelligent filtering of notifications to include only ones deemed critical
  • Type to Siri
  • More conversational Siri
  • Photos
    • Clean Up (remove an object or person from the image)
    • Generate Memories videos/slideshows from plain language text prompts
    • Natural language search

iOS 18.2 developer beta (as of November 5, 2024)

  • Image Playground – A prompt-based image generation app akin to DALL-E or Midjourney but with a limited range of stylistic possibilities, fewer features, and more guardrails
  • Genmoji – Generate original emoji from a prompt
  • Image Wand – Similar to Image Playground but simplified within the Notes app
  • ChatGPT integration in Siri
  • Visual Intelligence – iPhone 16 and iPhone 16 Pro users can use the new Camera Control button to do a variety of tasks based on what’s in the camera’s view, including translation, information about places, and more
  • Writing Tools – Expanded with support for prompt-based edits to text

iOS 18.1 is out right now for everybody. iOS 18.2 is scheduled for a public launch sometime in December.

iOS 18.2 will introduce both Visual Intelligence and the ability to chat with ChatGPT via Siri. Credit: Samuel Axon

A staggered rollout

For several years, Apple released most of its major new iPhone software features in one big update in the fall. That timeline has gotten fuzzier recently, but the rollout of Apple Intelligence has strayed further from that tradition than anything we’ve seen before.

Apple announced iOS 18 at its developer conference in June, suggesting that most, if not all, of the Apple Intelligence features would launch in that single update alongside the new iPhones.

Much of the marketing leading up to and surrounding the iPhone 16 launch focused on Apple Intelligence, but in actuality, the iPhone 16 had none of the features under that label when it launched. The first wave hit with iOS 18.1 last week, over a month after the first consumers started getting their hands on iPhone 16 hardware. And even now, these features are in “beta,” and there has been a wait list.

Many of the most exciting Apple Intelligence features still aren’t here, with some planned for iOS 18.2’s launch in December and a few others coming even later. There will likely be a wait list for some of those, too.

The wait list part makes sense—some of these features put demand on cloud servers, and it’s reasonable to stagger the rollout to sidestep potential launch problems.

The rest doesn’t make as much sense. Between the beta label and the staggered features, it seems like Apple is rushing to satisfy expectations about Apple Intelligence before quality and consistency have fallen into place.

Making AI a harder sell

In some cases, this strategy has led to things feeling half-baked. For example, Writing Tools is available system-wide, but it’s a different experience for first-party apps that work with the new Writing Tools API than third-party apps that don’t. The former lets you approve changes piece by piece, but the latter puts you in a take-it-or-leave-it situation with the whole text. The Writing Tools API is coming in iOS 18.2, maintaining that gap for a couple of months, even for third-party apps whose developers would normally want to be on the ball with this.

Further, iOS 18.2 will allow users to tweak Writing Tools rewrites by specifying what they want in a text prompt, but that’s missing in iOS 18.1. Why launch Writing Tools with features missing and user experience inconsistencies when you could just launch the whole suite in December?

That’s just one example, but there are many similar ones. I think there are a couple of possible explanations:

  • Apple is trying to satisfy anxious investors and commentators who believe the company is already way too late to the generative AI sector.
  • With the original intent to launch everything in the first iOS 18 release, Apple spent significant resources on Apple Intelligence-focused advertising and marketing around the iPhone 16 in September—and when unexpected problems developing the software forced a delay, it was too late to change the marketing message. Ultimately, the company’s leadership may feel pressure to make good on that pitch as quickly after the iPhone 16 launch as possible, even if it’s piecemeal.

I’m not sure which it is, but in either case, I don’t believe it was the right play.

So many consumers have their defenses up about AI features already, in part because other companies like Microsoft or Google rushed theirs to market without really thinking things through (or caring, if they had) and also because more and more people are naturally suspicious of whatever is labeled the next great thing in Silicon Valley (remember NFTs?). Apple had an opportunity to set itself apart in consumers’ perceptions about AI, but at least right now, that opportunity has been squandered.

Now, I’m not an AI doubter. I think these features and others can be useful, and I already use similar ones every day. I also commend Apple for allowing users to control whether these AI features are enabled at all, which should make AI skeptics more comfortable.

Notification summaries condense all the notifications from a single app into one or two lines, like with this lengthy Discord conversation here. Results are hit or miss. Credit: Samuel Axon

That said, releasing half-finished bits and pieces of Apple Intelligence doesn’t fit the company’s framing of it as a singular, branded product, and it doesn’t do a lot to handle objections from users who are already assuming AI tools will be nonsense.

There’s so much confusion about AI that it makes sense to let those who are skeptical move at their own pace, and it also makes sense to sell them on the idea with fully baked implementations.

Apple still has a more sensible approach than most

Despite all this, I like the philosophy behind how Apple has thought about implementing its AI tools, even if the rollout has been a mess. It’s fundamentally distinct from what we’re seeing from a company like Microsoft, which seems hell-bent on putting AI chatbots everywhere it can to see which real-world use cases emerge organically.

There is no true, ChatGPT-like LLM chatbot in iOS 18.1. Technically, there’s one in iOS 18.2, but only because you can tell Siri to refer you to ChatGPT on a case-by-case basis.

Instead, Apple has introduced specific generative AI features peppered throughout the operating system, each meant to solve a narrow user problem. Sure, they’re all built on models that resemble the ones powering Claude or Midjourney, but they’re not built around the idea that you start a chat dialogue with an LLM or an image generator and it’s up to you to find a way to make it useful.

The practical application of most of these features is clear, provided they end up working well (more on that shortly). As a professional writer, it’s easy for me to dismiss Writing Tools as unnecessary—but obviously, not everyone is a professional writer, or even a decent one. For example, I’ve long held that one of the most positive applications of large language models is their ability to let non-native speakers clean up their writing to make it meet native speakers’ standards. In theory, Apple’s Writing Tools can do that.

Apple Intelligence features augment or add additional flexibility or power to existing use cases across the OS, like this new way to generate photo memory movies via text prompt. Credit: Samuel Axon

I have no doubt that Genmoji will be popular—who doesn’t love a bit of fun in group texts with friends? And many months before iOS 18.1, I was already dropping senselessly gargantuan corporate email threads into ChatGPT and asking for quick summaries.

Apple is approaching AI in a user-centric way that stands in stark contrast to almost every other major player rolling out AI tools. Generative AI is an evolution from machine learning, which is something Apple has been using for everything from iPad screen palm rejection to autocorrect for a while now—to great effect, as we discussed in my interview with Apple AI chief John Giannandrea a few years ago. Apple just never wrapped it in a bow and called it AI until now.

But there was no good reason to rush these features out or to even brand them as “Apple Intelligence” and make a fuss about it. They’re natural extensions of what Apple was already doing. Since they’ve been rushed out the door with a spotlight shining on them, Apple’s AI ambitions have a rockier road ahead than the company might have hoped.

It could take a year or two for this all to come together

Using iOS 18.1, it’s clear that Apple’s large language models are not as effective or reliable as Claude or ChatGPT. It takes time to train models like these, and it looks like Apple started late.

Based on my hours spent with both Apple Intelligence and more established tools from cutting-edge AI companies, I feel the other models crossed a usefulness and reliability threshold a year or so ago. When ChatGPT first launched, it was more of a curiosity than a powerful tool. Now it’s a powerful tool, but that’s a relatively recent development.

In my time with Writing Tools and Notification Summaries in particular, Apple’s models subjectively appear to be around where ChatGPT or Claude were 18 months ago. Notification Summaries almost always miss crucial context in my experience. Writing Tools introduce errors where none existed before.

It’s not hard to spot the huge error that Writing Tools introduced here. This happens all the time when I use it. Credit: Samuel Axon

More mature models do these things, too, but at a much lower frequency. Unfortunately, Apple Intelligence isn’t far enough along to be broadly useful.

That said, I’m excited to see where Apple Intelligence will be in 24 months. I think the company is on the right track by using AI to target specific user needs rather than just putting a chatbot out there and letting people figure it out. It’s a much better approach than what we see with Microsoft’s Copilot. If Apple’s models cross that previously mentioned threshold of utility—and it’s only a matter of time before they do—the future of AI tools on Apple platforms could be great.

It’s just a shame that Apple didn’t have the confidence to ignore the zeitgeisty commentators and roll out these features once they were complete and ready, with messaging focused on user problems instead of “hey, we’re taking AI seriously too.”

Most users don’t care if you’re taking AI seriously, but they do care if the tools you introduce can make their day-to-day lives better. I think they can—it will just take some patience. Users can be patient, but can Apple? It seems not.

Even so, there’s a real possibility that these early pains will be forgotten before long.


Samuel Axon is a senior editor at Ars Technica. He covers Apple, software development, gaming, AI, entertainment, and mixed reality. He has been writing about gaming and technology for nearly two decades at Engadget, PC World, Mashable, Vice, Polygon, Wired, and others. He previously ran a marketing and PR agency in the gaming industry, led editorial for the TV network CBS, and worked on social media marketing strategy for Samsung Mobile at the creative agency SPCSHP. He also is an independent software and game developer for iOS, Windows, and other platforms, and he is a graduate of DePaul University, where he studied interactive media and software development.



Notepad.exe, now an actively maintained app, has gotten its inevitable AI update

Among the decades-old Windows apps getting renewed attention from Microsoft during the Windows 11 era is Notepad, the basic built-in text editor that was much the same in early 2021 as it had been in the ’90s and 2000s. Since then, it has gotten a raft of updates, including a visual redesign, spellcheck and autocorrect, and window tabs.

Given Microsoft’s continuing obsession with all things AI, it’s perhaps not surprising that the app’s latest update (currently in preview for Canary and Dev Windows Insiders) is a generative AI feature called Rewrite that promises to adjust the length, tone, and phrasing of highlighted sentences or paragraphs. Users will be offered three rewritten options based on what they’ve highlighted, and they can select the one they like best or tell the app to try again.
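
Microsoft hasn’t published the service behind Rewrite, but the basic shape—send the highlighted text to a cloud-hosted model, request several candidate rewrites, and let the user pick one—is easy to illustrate. Below is a minimal, hypothetical sketch using OpenAI’s Python client purely as a stand-in; the model name, prompts, and endpoint are assumptions, not Microsoft’s actual implementation:

```python
# Hypothetical sketch of a Notepad-style Rewrite flow: ask a hosted chat
# model for three candidate rewrites of the highlighted text, then let the
# user pick one. The OpenAI client and model name are stand-ins; Microsoft's
# actual Copilot-side service is not public.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def rewrite_candidates(text: str, tone: str = "more concise", n: int = 3) -> list[str]:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed stand-in model
        n=n,                  # request three independent rewrites in one call
        temperature=0.9,      # encourage variety among the candidates
        messages=[
            {"role": "system", "content": "Rewrite the user's text. Preserve meaning; change only style."},
            {"role": "user", "content": f"Rewrite this to be {tone}:\n\n{text}"},
        ],
    )
    return [choice.message.content for choice in response.choices]

for i, option in enumerate(rewrite_candidates("Notepad has gotten an AI update."), start=1):
    print(f"Option {i}: {option}")
```

Requesting n completions in a single call, as above, is one plausible way a feature like Rewrite could produce its three options without making three round trips.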

Rewrite appears to be based on the same technology as the Copilot assistant, since it uses cloud-side processing (rather than your local CPU, GPU, or NPU) and requires Microsoft account sign-in to work. The initial preview is available to users in the US, France, the UK, Canada, Italy, and Germany.

If you don’t care about AI or you don’t sign in with a Microsoft account, note that Microsoft is also promising substantial improvements in launch time with this version of Notepad. “Most users will see app launch times improve by more than 35 percent, with some users seeing improvements of 55 percent or more,” reads the blog post by Microsoft’s Windows apps manager Dave Grochocki.



Claude AI to process secret government data through new Palantir deal

An ethical minefield

Since its founders started Anthropic in 2021, the company has marketed itself as one that takes an ethics- and safety-focused approach to AI development. The company differentiates itself from competitors like OpenAI by adopting what it calls responsible development practices and self-imposed ethical constraints on its models, such as its “Constitutional AI” system.

As Futurism points out, this new defense partnership appears to conflict with Anthropic’s public “good guy” persona, and pro-AI pundits on social media are noticing. Frequent AI commentator Nabeel S. Qureshi wrote on X, “Imagine telling the safety-concerned, effective altruist founders of Anthropic in 2021 that a mere three years after founding the company, they’d be signing partnerships to deploy their ~AGI model straight to the military frontlines.”

Anthropic’s “Constitutional AI” logo. Credit: Anthropic / Benj Edwards

Aside from the implications of working with defense and intelligence agencies, the deal connects Anthropic with Palantir, a controversial company that recently won a $480 million contract to develop an AI-powered target identification system called Maven Smart System for the US Army. Project Maven has sparked criticism within the tech sector over military applications of AI technology.

It’s worth noting that Anthropic’s terms of service do outline specific rules and limitations for government use. These terms permit activities like foreign intelligence analysis and identifying covert influence campaigns, while prohibiting uses such as disinformation, weapons development, censorship, and domestic surveillance. Government agencies that maintain regular communication with Anthropic about their use of Claude may receive broader permissions to use the AI models.

Even if Claude is never used to target a human or as part of a weapons system, other issues remain. While Anthropic’s Claude models are highly regarded in the AI community, they (like all LLMs) have a tendency to confabulate, potentially generating incorrect information in a way that is difficult to detect.

That’s a huge potential problem that could impact Claude’s effectiveness with secret government data, and that fact, along with the other associations, has Futurism’s Victor Tangermann worried. As he puts it, “It’s a disconcerting partnership that sets up the AI industry’s growing ties with the US military-industrial complex, a worrying trend that should raise all kinds of alarm bells given the tech’s many inherent flaws—and even more so when lives could be at stake.”



Recap: Our “AI in DC” conference was great—here’s what you missed


Experts were assembled, tales told, and cocktails consumed. It was fun!

Our venue. So spy-ish! Credit: DC Event Photojournalism

Ars Technica descended in force last week upon our nation’s capital, setting up shop in the International Spy Museum for a three-panel discussion on artificial intelligence, infrastructure, security, and how compliance with policy changes over the next decade or so might shape the future of business computing in all its forms. Much like our San Jose event last month, the venue was packed to the rafters with Ars readers eager for knowledge (and perhaps some free drinks, which is definitely why I was there!). A bit over 200 people were eventually herded into one of the conference spaces on the venue’s upper floors, and Ars Editor-in-Chief Ken Fisher hopped on stage to kick things off.

“Today’s event about privacy, compliance, and making infrastructure smarter, I think, could not be more perfectly timed,” said Fisher. “I don’t know about your orgs, but I know Ars Technica and our parent company, Condé Nast, are currently thinking about generative AI and how it touches almost every aspect or could touch almost every aspect of our business.”

Ars EIC Ken Fisher takes the stage to kick things off. Credit: DC Event Photojournalism

Fisher continued: “I think the media talks about how [generative AI] is going to maybe write news and take over content, but the reality is that generative AI has a lot of potential to help us in finance, to help us with opex, to help us with planning—to help us with pretty much every aspect of our business and in our business. And from what I’m reading online, many folks are starting to have this dream that generative AI is going to lead them into a world where they can replace a lot of SaaS services where they can make a pivot to first-party data.”

First-party data and first-party software development, concluded Fisher, will be critically important when paired with generative AI—”table stakes,” Fisher called them, for participating in the future of business.

After Ken, it was on to our first panel!

“The Key to Compliance with Emerging Technologies”

Up first were Anton Dam, an engineering VP with AuditBoard; John Verdi of the Future of Privacy Forum; and Jim Comstock, a cloud storage program director at IBM. The main concern of this panel was how companies will keep up with shifting compliance requirements as the pace of advancement continues to increase.

Each panelist had somewhat of a complementary take. AuditBoard’s Dam emphasized how quickly AI is shifting things around and pointed out the need for organizations to be proactive—to be mindful of regulatory changes before they happen and to have plans in place. “If you want to stay compliant,” Dam said, “you have to be proactive and not wait for, say, agency guidance.”

Hutchinson, Comstock, Dam, and Verdi. Credit: DC Event Photojournalism

FPF’s John Verdi dwelled for a bit on the challenge of doing just that and balancing innovation against the need to comply with regs. He noted that a “privacy by design” approach for products—where considerations about compliance are factored into something’s design from the very beginning rather than being treated as bolt-ons later—ultimately serves both the customer and the business.

Cross-border compliance also came up—with big cloud providers, data may reside in multiple countries, each with its own laws. Making sure you’re doing what all of those laws say is hugely complex, and IBM’s Comstock pointed out that customers need to both work with vendors and hold those vendors accountable for where their data resides.

“Data Security in the Age of AI-Assisted Cyber Espionage”

Next, we shifted to an infosec outlook, bringing on a four-person panel that included former Ars Technica senior security editor Sean Gallagher, who is currently keeping the world safe at Sophos X-Ops. Joining Sean were Kate Highnam, an ML engineer at Booz Allen Hamilton; Dr. Scott White, director of cybersecurity at George Washington University; and Elisa Ortiz, a storage and product marketing director at IBM.

Hutchinson, Ortiz, White, Highnam, and Gallagher. Credit: DC Event Photojournalism

For this panel, we wanted to look at the landscape around us, and Sean kicked the session off with a sobering description of the most prevalent cyber threats today. “Pig butchering” was at the top of his list—that is, a shockingly common romance scam where victims are tricked into an emotional connection with a scammer, who then extorts them for money. (As Sean explained, the scammers themselves are often also victims, typically trapped without their passports in foreign countries and forced to scam as their only hope of getting back home.) Scams like this increasingly use AI to work around language barriers—if a scammer who only speaks Cantonese targets a victim who only speaks German, for example, the scammers have begun using LLMs to carry on the conversation, with the models providing not just basic translation but also native colloquialisms and other human-like linguistic tweaks to help sell the scheme.

Dr. Scott White of GWU took us from scams to national security, pointing out how AI can transform—and is transforming—intelligence gathering in addition to romance scams. Booz Allen Hamilton’s Kate Highnam continued this line of discussion, walking us through several ways that machine learning helps with detecting cyber-espionage activities. As good as the tools are, she emphasized that—at least for the foreseeable future—a human will still need to be in the loop when AI is used to detect crimes. “AI is really good for generalizing our directions,” she said, “but at the end of the day, we have to make sure that we are very clear with our assumptions.”

IBM’s Ortiz closed out the panel by reminding us that threats don’t just come in through the proverbial front door—one of the areas where companies can have significant vulnerabilities is via their backups. As attackers increasingly target backups, Ortiz advocated broad use of predictive analytics and real-time anomaly detection in order to spy out any oddness attackers might be up to.

“The Best Infrastructure Solution for Your AI/ML Strategy”

Our final panel had a deceptively simple title and an impossible task, because there is no “best” infrastructure solution. But there might be a best infrastructure solution for you, and that’s what we wanted to look at. Joining me on stage were Daniel Fenton, head of AI platforms at JLL; Arun Natarajan, director of AI innovation at the IRS; Amy Hirst, VP of site reliability engineering and user experience at IBM; and Matt Klos, an IBM senior solutions architect.

Fenton, Natarajan, Hirst, and Klos. (You can’t see me, but I’m just off-stage to the left.) Credit: DC Event Photojournalism

It’s always fascinating to get to ask the IRS anything, and Natarajan gave insightful answers. He opened by contrasting the goals and challenges of the IRS’s IT strategy as a government service organization with those of a typical enterprise, and the differences are significant. Fancy features don’t count as much as stability, security, and integration with legacy systems. AI is being looked at where appropriate, but what the IRS needs from AI more than anything else is transparency, and that can sometimes be lacking. “We’re challenged with ensuring ethical AI and transparency to the taxpayer, which requires a different approach than private sector solutions.”

Other panelists, like JLL’s Fenton, emphasized the use of open architecture to ensure flexibility; IBM’s Klos also noted that often, even the best laid plans of data center engineers gang aft agley due not to architecture issues or poor design but to actual physical infrastructure not being what was expected. “One of the biggest pitfalls I see is power,” he explained. “Customers assume it’s everywhere, but it’s often a limiting factor, especially in high-demand AI infrastructure.”

Amy Hirst pointed out that when building one’s own AI/ML setup, traditional performance metrics still apply—and they apply across multiple stacks, including both storage and networking. Her advice sounds somewhat traditional but holds absolutely true, even now. “You have to consider both latency and throughput,” she said. “Both are key to establishing your system’s needs for AI workloads.”

Drinks and everything thereafter

And with that, our panel discussions were done. The group dispersed, with some folks heading downstairs for a private tour of the museum’s Bond in Motion exhibit, which featured the various on-screen rides of 007. Then there was a convergence on the bar and about an hour of fun conversations.

Cocktail time! Did our photographer snag you in a picture? Comment below and say hi! Credit: DC Event Photojournalism

For me, the networking and socializing are always the best parts of events like this, and getting to shake hands and swap business cards with Ars readers is one of the most exciting and joyful parts of this job. A special thank you to everyone who stuck around for the cocktail hour and who got a chance to say hi—you’re special, and I’m glad you came to the event!

And if you’re reading this and you’re sad that you didn’t get to make it to any of our events this year, that’s OK—we’ll do more next year. Stay tuned to the front page because Ars might be coming to your town next!


Lee is the Senior Technology Editor, and oversees story development for the gadget, culture, IT, and video sections of Ars Technica. A long-time member of the Ars OpenForum with an extensive background in enterprise storage and security, he lives in Houston.



Trump plans to dismantle Biden AI safeguards after victory

That’s not the only uncertainty at play. Just last week, House Speaker Mike Johnson—a staunch Trump supporter—said that Republicans “probably will” repeal the bipartisan CHIPS and Science Act, which is a Biden initiative to spur domestic semiconductor chip production, among other aims. Trump has previously spoken out against the bill. After getting some pushback on his comments from Democrats, Johnson said he would like to “streamline” the CHIPS Act instead, according to The Associated Press.

Then there’s the Elon Musk factor. The tech billionaire spent tens of millions through a political action committee supporting Trump’s campaign and has been angling for regulatory influence in the new administration. His AI company, xAI, which makes the Grok-2 language model, stands alongside his other ventures—Tesla, SpaceX, Starlink, Neuralink, and X (formerly Twitter)—as businesses that could see regulatory changes in his favor under a new administration.

What might take its place

If Trump strips away federal regulation of AI, state governments may step in to fill any federal regulatory gaps. For example, in March, Tennessee enacted protections against AI voice cloning, and in May, Colorado created a tiered system for AI deployment oversight. In September, California passed multiple AI safety bills, one requiring companies to publish details about their AI training methods and a contentious anti-deepfake bill aimed at protecting the likenesses of actors.

So far, it’s unclear what Trump’s policies on AI might entail besides “deregulate whenever possible.” During his campaign, Trump promised to support AI development centered on “free speech and human flourishing,” though he provided few specifics. He has called AI “very dangerous” and spoken about its high energy requirements.

Trump allies at the America First Policy Institute have previously stated they want to “Make America First in AI” with a new Trump executive order, which still only exists as a speculative draft, to reduce regulations on AI and promote a series of “Manhattan Projects” to advance military AI capabilities.

During his previous administration, Trump signed AI executive orders that focused on research institutes and directing federal agencies to prioritize AI development while mandating that federal agencies “protect civil liberties, privacy, and American values.”

But the AI environment has changed in the wake of ChatGPT and media-reality-warping image synthesis models, so those earlier orders likely don’t point the way to his future positions on the topic. For more details, we’ll have to wait and see what unfolds.



Anthropic’s Haiku 3.5 surprises experts with an “intelligence” price increase

Speaking of Opus, Claude 3.5 Opus is nowhere to be seen, as AI researcher Simon Willison noted to Ars Technica in an interview. “All references to 3.5 Opus have vanished without a trace, and the price of 3.5 Haiku was increased the day it was released,” he said. “Claude 3.5 Haiku is significantly more expensive than both Gemini 1.5 Flash and GPT-4o mini—the excellent low-cost models from Anthropic’s competitors.”

Cheaper over time?

So far in the AI industry, newer versions of AI language models have typically cost the same as or less than their predecessors. Anthropic had initially indicated that Claude 3.5 Haiku would cost the same as the previous version before announcing the higher rates.

“I was expecting this to be a complete replacement for their existing Claude 3 Haiku model, in the same way that Claude 3.5 Sonnet eclipsed the existing Claude 3 Sonnet while maintaining the same pricing,” Willison wrote on his blog. “Given that Anthropic claim that their new Haiku out-performs their older Claude 3 Opus, this price isn’t disappointing, but it’s a small surprise nonetheless.”

Claude 3.5 Haiku arrives with some trade-offs. While the model produces longer text outputs and contains more recent training data, it cannot analyze images like its predecessor. Alex Albert, who leads developer relations at Anthropic, wrote on X that the earlier version, Claude 3 Haiku, will remain available for users who need image processing capabilities and lower costs.

The new model is not yet available in the Claude.ai web interface or app. Instead, it runs on Anthropic’s API and third-party platforms, including AWS Bedrock. Anthropic markets the model for tasks like coding suggestions, data extraction and labeling, and content moderation, though, like any LLM, it can easily make stuff up confidently.

“Is it good enough to justify the extra spend? It’s going to be difficult to figure that out,” Willison told Ars. “Teams with robust automated evals against their use-cases will be in a good place to answer that question, but those remain rare.”



New Zemeckis film used AI to de-age Tom Hanks and Robin Wright

On Friday, TriStar Pictures released Here, a $50 million Robert Zemeckis-directed film that used real-time generative AI face-transformation techniques to portray actors Tom Hanks and Robin Wright across a 60-year span, marking one of Hollywood’s first full-length features built around AI-powered visual effects.

The film adapts a 2014 graphic novel set primarily in a New Jersey living room across multiple time periods. Rather than cast different actors for various ages, the production used AI to modify Hanks’ and Wright’s appearances throughout.

The de-aging technology comes from Metaphysic, a visual effects company that creates real-time face-swapping and aging effects. During filming, the crew watched two monitors simultaneously: one showing the actors’ actual appearances and another displaying them at whatever age the scene required.

Here – Official Trailer (HD)

Metaphysic developed the facial modification system by training custom machine-learning models on frames of Hanks’ and Wright’s previous films. This included a large dataset of facial movements, skin textures, and appearances under varied lighting conditions and camera angles. The resulting models can generate instant face transformations without the months of manual post-production work traditional CGI requires.

Unlike previous aging effects that relied on frame-by-frame manipulation, Metaphysic’s approach generates transformations instantly by analyzing facial landmarks and mapping them to trained age variations.
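
Metaphysic hasn’t published how its system works, so the following is only a rough sketch of the general per-frame flow the article describes—detect facial landmarks, feed them to a trained transformation model, and show the result live. MediaPipe’s FaceMesh supplies real landmark detection here; the age_transform function, file name, and target age are all hypothetical stand-ins:

```python
# Rough sketch of the per-frame flow described above, NOT Metaphysic's
# proprietary system: extract facial landmarks from each frame and hand
# them, with the frame, to a trained age-transformation model. MediaPipe's
# FaceMesh is a real API; age_transform is a hypothetical stand-in.
import cv2
import mediapipe as mp

face_mesh = mp.solutions.face_mesh.FaceMesh(static_image_mode=False, max_num_faces=1)

def age_transform(frame, landmarks, target_age: int):
    """Hypothetical stand-in for a trained generative face-aging model."""
    return frame  # a real system would synthesize the age-shifted face here

capture = cv2.VideoCapture("take_042.mov")  # assumed input footage
while True:
    ok, frame = capture.read()
    if not ok:
        break
    results = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_face_landmarks:
        landmarks = results.multi_face_landmarks[0].landmark  # 468 normalized points
        frame = age_transform(frame, landmarks, target_age=18)
    cv2.imshow("age preview", frame)  # the on-set "second monitor"
    if cv2.waitKey(1) == 27:  # Esc to quit
        break
capture.release()
cv2.destroyAllWindows()
```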

“You couldn’t have made this movie three years ago,” Zemeckis told The New York Times in a detailed feature about the film. Traditional visual effects for this level of face modification would reportedly require hundreds of artists and a substantially larger budget closer to standard Marvel movie costs.

This isn’t the first film to use AI techniques to de-age actors. ILM’s approach to de-aging Harrison Ford in 2023’s Indiana Jones and the Dial of Destiny used a proprietary system called Flux with infrared cameras to capture facial data during filming, then used old images of Ford to de-age him in post-production. By contrast, Metaphysic’s AI models process transformations without additional hardware and show results during filming.



Nvidia ousts Intel from Dow Jones Index after 25-year run

Changing winds in the tech industry

The Dow Jones Industrial Average serves as a benchmark of the US stock market by tracking 30 large, publicly owned companies that represent major sectors of the US economy, and membership in the index has long been considered a sign of prestige among American companies.

However, S&P regularly makes changes to the index to better reflect current realities and trends in the marketplace, and deletion from it likely marks a new symbolic low point for Intel.

While the rise of AI has caused a surge in several tech stocks, it has delivered tough times for chipmaker Intel, which is perhaps best known for manufacturing CPUs that power Windows-based PCs.

Intel recently withdrew its forecast to sell over $500 million worth of AI-focused Gaudi chips in 2024, a target CEO Pat Gelsinger had promoted after initially pushing his team to project $1 billion in sales. The setback follows Intel’s pattern of missed opportunities in AI, with Reuters reporting that Bank of America analyst Vivek Arya questioned the company’s AI strategy during a recent earnings call.

In addition, Intel has faced challenges from device manufacturers’ increasing use of Arm-based alternatives, which power billions of smartphones, and from symbolic blows like Apple’s transition away from Intel processors for Macs to its own custom-designed chips based on the Arm architecture.

Whether the historic tech company will rebound is yet to be seen, but investors will undoubtedly keep a close watch on Intel as it attempts to reorient itself in the face of changing trends in the tech industry.



Charger recall spells more bad news for Humane’s maligned AI Pin

Other Humane charging accessories, like the Charge Pad, are said to be unaffected because Humane doesn’t use the same unnamed vendor for any parts besides the Charge Case Accessory’s battery.

Humane’s statement puts the blame on this anonymous third-party vendor. The company said it realized there was a problem when a user reported a “charging issue while using a third-party USB-C cable and third-party power source.” The company added:

Our investigation determined that the battery supplier was no longer meeting our quality standards and that certain battery cells supplied by this vendor may pose a fire safety risk. As a result, we immediately disqualified this battery vendor while we work to identify a new vendor to avoid such issues and maintain our high quality standards.

Impacted customers can get a refund for the accessory (up to $149) or a replacement via an online form. While refunds will go through within 14 business days, users seeking a replacement Charge Case Accessory have to wait until Humane makes one. That could take three to six months, the San Francisco firm estimates.

In the meantime, Humane is telling customers to properly dispose of their Charge Case Accessories (which means not throwing them in a trash can or the used battery recycling boxes found at some stores).

Another obstacle for Humane

A well-executed recall in the name of user safety isn’t automatically a death knell for a product, but Humane has already been struggling to maintain a positive reputation, and its ability to sell AI Pins in the long term was already in question before this mishap.

The AI Pin’s launch was marred by a myriad of complaints, including the pin’s inability to properly clip to some clothing, slow voice responses, short battery life, limitations with the laser projector working outside of dark rooms, and overall limited functionality. Soon after the product was released, The New York Times reported that the company’s founders, two former Apple executives, ignored negative internal reviews and even let go of an engineer who questioned the product. Humane spokesperson Zoz Cuccias admitted to The Verge in August that upon releasing the wearable, Humane “knew we were at the starting line, not the finish line.”



AIs show distinct bias against Black and female résumés in new study

Anyone familiar with HR practices probably knows of the decades of studies showing that résumés with Black- and/or female-presenting names at the top get fewer callbacks and interviews than those with white- and/or male-presenting names—even if the rest of the résumé is identical. A new study shows those same kinds of biases also show up when large language models are used to evaluate résumés instead of humans.

In a new paper published during last month’s AAAI/ACM Conference on AI, Ethics and Society, two University of Washington researchers ran hundreds of publicly available résumés and job descriptions through three different Massive Text Embedding (MTE) models. These models—based on the Mistral-7B LLM—had each been fine-tuned with slightly different sets of data to improve on the base LLM’s abilities in “representational tasks including document retrieval, classification, and clustering,” according to the researchers, and had achieved “state-of-the-art performance” in the MTEB benchmark.

Rather than asking for precise term matches from the job description or evaluating via a prompt (e.g., “does this résumé fit the job description?”), the researchers used the MTEs to generate embedded relevance scores for each résumé and job description pairing. To measure potential bias, the résumés were first run through the MTEs without any names (to check for reliability) and were then run again with various names that achieved high racial and gender “distinctiveness scores” based on their actual use across groups in the general population. The top 10 percent of résumés that the MTEs judged as most similar for each job description were then analyzed to see if the names for any race or gender groups were chosen at higher or lower rates than expected.
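
The researchers’ fine-tuned Mistral-7B-based MTEs aren’t reproduced here, but the scoring setup itself—embed both documents and treat their cosine similarity as the relevance score—is straightforward to sketch with an off-the-shelf embedding model. The model choice, résumé text, and names below are illustrative assumptions:

```python
# Minimal sketch of the paper's embedding-similarity setup (not the authors'
# exact fine-tuned models): embed a job description and name-swapped résumés,
# score each pair by cosine similarity, and see which names rank highest.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # stand-in embedding model

job_description = "Seeking a staff accountant with 5+ years of GAAP experience..."
resume_body = "{name}\nStaff accountant, 7 years of GAAP reporting experience..."
names = ["Emily Baker", "Lakisha Washington", "Greg Mueller", "Darnell Robinson"]

job_vec = model.encode(job_description, convert_to_tensor=True)
scores = {}
for name in names:
    resume_vec = model.encode(resume_body.format(name=name), convert_to_tensor=True)
    scores[name] = util.cos_sim(job_vec, resume_vec).item()  # relevance score

# Identical résumés should score identically; any spread is name-driven.
for name, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{score:.4f}  {name}")
```

Because the résumé text is identical apart from the name, any consistent score gap across many such pairings can only come from the name itself—the signal the researchers measured across millions of comparisons.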

A consistent pattern

Across more than three million résumé and job description comparisons, some pretty clear biases appeared. In all three MTE models, white names were preferred in a full 85.1 percent of the conducted tests, compared to Black names being preferred in just 8.6 percent (the remainder showed score differences close enough to zero to be judged insignificant). When it came to gendered names, the male name was preferred in 51.9 percent of tests, compared to 11.1 percent where the female name was preferred. The results were even starker in “intersectional” comparisons involving both race and gender; Black male names were preferred to white male names in “0% of bias tests,” the researchers wrote.



Not just ChatGPT anymore: Perplexity and Anthropic’s Claude get desktop apps

There’s a lot going on in the world of Mac apps for popular AI services. In the past week, Anthropic has released a desktop app for its popular Claude chatbot, and Perplexity launched a native app for its AI-driven search service.

On top of that, OpenAI updated its ChatGPT Mac app with support for its flashy advanced voice feature.

Like the ChatGPT app that debuted several weeks ago, the Perplexity app adds a keyboard shortcut that allows you to enter a query from anywhere on your desktop. You can use the app to ask follow-up questions and carry on a conversation about what it finds.

It’s free to download and use, but Perplexity offers subscriptions for heavy users.

Perplexity’s search emphasis meant it wasn’t previously a direct competitor to OpenAI’s ChatGPT, but OpenAI recently launched SearchGPT, a search-focused variant of its popular product. SearchGPT is not yet supported in the desktop app, though.

Anthropic’s Claude, on the other hand, is a more direct competitor to ChatGPT. It works similarly to ChatGPT but has different strengths, particularly in software development. The Claude app is free to download, but it’s in beta, and like Perplexity and OpenAI, Anthropic charges for more advanced users.

When OpenAI launched the ChatGPT Mac app, it didn’t release a Windows app right away, saying that it was focused on where its users were at the time. A Windows app recently arrived. Anthropic took a different approach, introducing Windows and Mac apps simultaneously.

Previously, all these tools offered mobile apps and web apps, but not necessarily native desktop apps.



Microsoft reports big profits amid massive AI investments

Microsoft reported quarterly earnings that impressed investors and showed how resilient the company is even as it spends heavily on AI.

Some investors have been uneasy about the company’s aggressive spending on AI, while others have demanded it. During this quarter, Microsoft reported that it spent $20 billion on capital expenditures, nearly double what it had spent during the same quarter last year.

However, the company satisfied both groups of investors, as it revealed it has still been doing well in the short term amid those long-term investments. The fiscal quarter, which covered July through September, saw overall sales rise 16 percent year over year to $65.6 billion. Despite all that AI spending, profits were up 11 percent, too.

The growth was largely driven by Azure and cloud services, which saw a 33 percent increase in revenue. The company attributed roughly 12 percentage points of that growth to AI-related products and services.

Meanwhile, Microsoft’s gaming division continued to challenge long-standing assumptions that hardware is king, with Xbox content and services posting a 61 percent year-over-year revenue increase despite a 29 percent drop in hardware sales.

Microsoft has famously been inching away from the classic strategy of keeping software and services exclusive to its hardware, launching first-party games like Sea of Thieves not just on PC but on the competing PlayStation 5 console from Sony. Compared to the Xbox, the PlayStation is dominant in sales and install base for this generation.

But don’t make the mistake of assuming that a 61 percent jump in content and services revenue is solely because Microsoft’s Game Pass subscription service is taking off. The company attributed 53 points of that to the recent $69 billion Activision acquisition.
