
M5 iPad Pro tested: Stop me if you’ve heard this one before


It’s a gorgeous tablet, but what does an iPad need with more processing power?

Apple’s 13-inch M5 iPad Pro. Credit: Andrew Cunningham

This year’s iPad Pro is what you might call a “chip refresh” or an “internal refresh.” These refreshes are what Apple generally does for its products for one or two or more years after making a larger external design change. Leaving the physical design alone preserves compatibility with the accessory ecosystem.

For the Mac, chip refreshes are still pretty exciting to me, because many people who use a Mac will, very occasionally, assign it some kind of task where they need it to work as hard and fast as it can, for an extended period of time. You could be a developer compiling a large and complex app, or you could be a podcaster or streamer editing or exporting an audio or video file, or maybe you’re just playing a game. The power and flexibility of the operating system, and first- and third-party apps made to take advantage of that power and flexibility, mean that “more speed” is still exciting, even if it takes a few years for that speed to add up to something users will consistently notice and appreciate.

And then there’s the iPad Pro. Especially since Apple shifted to using the same M-series chips that it uses in Macs, most iPad Pro reviews contain some version of “this is great hardware that is much faster than it needs to be for anything the iPad does.” To wit, our review of the M4 iPad Pro from May 2024:

Still, it remains unclear why most people would spend one, two, or even three thousand dollars on a tablet that, despite its amazing hardware, does less than a comparably priced laptop—or at least does it a little more awkwardly, even if it’s impressively quick and has a gorgeous screen.

Since then, Apple has announced and released iPadOS 26, an update that makes important and mostly welcome changes to how the tablet handles windowed multitasking, file transfers, and some other kinds of background tasks. But this is the kind of thing that isn’t even going to stress out an Apple M1, let alone a chip that’s twice as powerful.

All of this is to say: A chip refresh for an iPad is nice to have. This year’s model (still starting at $999 for the 11-inch tablet and $1,299 for the 13-inch) will also come with a handy RAM increase for many buyers, the first RAM boost that the base model iPad Pro has gotten in more than four years.

But without any other design changes or improvements to hang its hat on, the fact is that chip refresh years for the iPad Pro only improve the part of the tablet that needs improvement the least. That doesn’t make them bad; who knows what the hardware requirements will be when iPadOS 30 adds some other batch of multitasking features. But it does mean these refreshes don’t feel particularly exciting or necessary; the most exciting thing about the M5 iPad Pro is that its arrival might let you get a good deal on an M4 model as retailers clear out their stock. You aren’t going to notice the difference.

Design: M4 iPad Pro redux

The 13-inch M5 iPad Pro in its Magic Keyboard accessory with the Apple Pencil Pro attached. Credit: Andrew Cunningham

Lest we downplay this tablet’s design, the M4 version of the iPad Pro was the biggest change to the tablet since Apple introduced the modern all-screen design for the iPad Pro back in 2018. It wasn’t a huge departure, but it did introduce the iPad’s first OLED display, a thinner and lighter design, and a slightly improved Apple Pencil and updated range of accessories.

As with the 14-inch M5 MacBook Pro that Apple just launched, how much you’ll like the iPad Pro mostly depends on how you feel about screen technology (the iPad is, after all, mostly screen). If you care about the 120 Hz high-refresh-rate ProMotion screen, the option to add a nano-texture display with a matte finish, and the infinite contrast and boosted brightness of Apple’s OLED displays, those are the best reasons to buy an iPad Pro. The $299/$349 Magic Keyboard accessory for the iPad Pro also comes with backlit keys and a slightly larger trackpad than the equivalent $269/$319 iPad Air accessory.

If none of those things inspire passion in you, or if they’re not worth several hundred extra dollars to you—the nano-texture glass upgrade alone adds $700 to the price of the iPad Pro, because Apple only offers it on the 1TB and 2TB models—then the 11- and 13-inch iPads Air are going to give you a substantively identical experience. That includes compatibility with the same Apple Pencil accessory and support for all the same multitasking and Apple Intelligence features.

The M5 iPad Pro supports the same Apple Pencil Pro as the M4 iPad Pro and the M2 and M3 iPad Air. Credit: Andrew Cunningham

One other internal change to the new iPad Pro, aside from the M5, is mostly invisible: Wi-Fi, Bluetooth, and Thread connectivity provided by the Apple N1 chip, and 5G cellular connectivity provided by the Apple C1X. Ideally, you won’t notice this swap at all, but it’s a quietly momentous change for Apple. Both of these chips cap several years of acquisitions and internal development, and further reduce Apple’s reliance on external chipmakers like Qualcomm and Broadcom, which has been one of the goals of Apple’s A- and M-series processors all along.

There’s one last change we haven’t really been able to adequately test in the handful of days we’ve had the tablet: new fast-charging support, either with Apple’s first-party Dynamic Power Adapter or any USB-C charger capable of providing 60 W or more of power. When using these chargers, Apple says the tablet’s battery can charge from 0 to 50 percent in 35 minutes. (Apple provides the same battery life estimates for the M5 iPads as the M4 models: 10 hours of Wi-Fi web usage, or nine hours of cellular web usage, for both the 13- and 11-inch versions of the tablet.)

Two Apple M5 chips, two RAM options

Apple sent us the 1TB version of the 13-inch iPad Pro to test, which means we got the fully enabled version of the M5: four high-performance CPU cores, six high-efficiency CPU cores, 10 GPU cores, a 16-core Neural Engine, and 16GB of RAM.

Apple’s Macs still offer individually configurable processor, storage, and RAM upgrades to users—generally buying one upgrade doesn’t lock you into buying a bunch of other stuff you don’t want or need (though there are exceptions for RAM configurations in some of the higher-end Macs). But for the iPads, Apple still ties the chip and the RAM you get to storage capacity. The 256GB and 512GB iPads get three high-performance CPU cores instead of four, and 12GB of RAM instead of 16GB.

For people who buy the 256GB and 512GB iPads, this amounts to a 50 percent increase in RAM capacity from the M1, M2, and M4 iPad Pro models, or the M1, M2, and M3 iPad Airs, all of which came with 8GB of RAM. High-end models stick with the same 16GB of RAM as before (no 24GB or 32GB upgrades here, though the M5 supports them in Macs). The ceiling is in the same place, but the floor has come up.

Given that iPadOS is still mostly running on tablets with 8GB or less of RAM, I don’t expect the jump from 8GB to 12GB to make a huge difference in the day-to-day experience of using the tablet, at least for now. If you connect your iPad to an external monitor that you use as an extended display, it might help keep more apps in memory at a time; it could help if you edit complex multi-track audio or video files or images, or if you’re trying to run some kind of machine learning or AI workflows locally. Future iPadOS versions could also require more than 8GB of memory for some features. But for now, the benefit exists mostly on paper.

As for benchmarks, the M5’s gains in the iPad are somewhat more muted than they are for the M5 MacBook Pro we tested. We observed a 10 to 15 percent improvement across single- and multi-core CPU tests and graphics benchmark improvements that mostly hovered in the 15 to 30 percent range. The Geekbench 6 Compute benchmark was one outlier, pointing to a 35 percent increase in GPU performance; it’s possible that GPU- or rendering-heavy workloads benefit a little more from the new neural accelerators in the M5’s GPU cores than games do.

In the MacBook review, we observed that the M5’s CPU generally had higher peak power consumption than the M4. In the fanless iPad Pro, it’s likely that Apple has reined the chip in a little bit to keep it cool, which would explain why the iPad’s M5 doesn’t see quite the same gains.

The M5 and the 12GB RAM minimum help to put a little more distance between the M3 iPad Air and the Pros. Most iPad workloads don’t benefit in an obvious user-noticeable way from the extra performance or memory right now, but it’s something you can point to that makes the Pro more “pro” than the Air.

Changed hardware that doesn’t change much

The M5 iPad Pro is nice in the sense that “getting a little more for your money today than you could get for the same money two weeks ago” is nice. But it changes essentially nothing for potential iPad buyers.

I’m hard-pressed to think of anyone who would be well-served by the M5 iPad Pro who wouldn’t have been equally well-served by the M4 version. And if the M4 iPad Pro was already overkill for you, the M5 is just a little more so; the same goes if you have an M1 or M2 model. People with an A12X or A12Z iPad Pro from 2018 or 2020 will benefit more, particularly if they multitask a lot or are running into limitations or RAM complaints from the apps they use.

But even with the iPadOS 26 update, the capabilities of the iPad’s software still lag behind the capabilities of the hardware by a few years. That’s to be expected, maybe, for an operating system that has to run on both this M5 iPad Pro and a 7-year-old phone processor with 3GB of RAM.

I am starting to feel the age of the M1 MacBook Air I use, especially if I’m pushing multiple monitors with it or trying to exceed its 16GB RAM limit. The M1 iPad Air I have, on the other hand, feels like it just got an operating system that unlocks some of its latent potential. That’s the biggest problem with the iPad Pro, really—not that it’s a bad tablet, but that it’s still so much more tablet than you need to do what iPadOS and its apps can currently do.

The good

  • A fast, beautiful tablet that’s a pleasure to use.
  • The 120Hz ProMotion support and OLED display panel make this one of Apple’s best screens, period.
  • 256GB and 512GB models get a bump from 8GB to 12GB of memory.
  • Maintains compatibility with the same accessories as the M4 iPad Pro.

The bad

  • More iPad than pretty much anyone needs.
  • The fanless, passively cooled Apple M5 can’t stretch its legs quite as much as the actively cooled Mac version.
  • Expensive accessories.

The ugly

  • All other hardware upgrades, including the matte nano-texture display finish, require a $600 upgrade to the 1TB version of the tablet.


Andrew is a Senior Technology Reporter at Ars Technica, with a focus on consumer tech including computer hardware and in-depth reviews of operating systems like Windows and macOS. Andrew lives in Philadelphia and co-hosts a weekly book podcast called Overdue.



Google reportedly searching for 15 Pixel “Superfans” to test unreleased phones

The Pixel Superfans program has offered freebies and special events in the past, but access to unreleased phones would be the biggest benefit yet. The document explains that those chosen for the program will have to sign a strict non-disclosure agreement and agree to keep the phones in a protective case that disguises their physical appearance at all times.

Left to right: Pixel 10, Pixel 10 Pro, Pixel 10 Pro XL. Credit: Ryan Whitwam

Pixels are among the most-leaked phones every year. In the past, devices have been photographed straight off the manufacturing line or “fallen off a truck” and ended up for sale months before release. However, it’s unlikely the 15 Superfan testers will lead to more leaks. Like most brands, Google’s pre-release sample phones have physical identifiers to help trace their origin if they end up in a leak or auction listing. The NDA will probably be sufficiently scary to keep the testers in line, as they won’t want to lose their Superfan privileges. Google has also started sharing details on its phones earlier—its Superfan testers may even be involved in these previews.

You can sign up to be a Superfan at any time, but it can take a few weeks to get access after applying. Google has not confirmed when its selection process will begin, but it would have to be soon if Google intends to gather feedback for the Pixel 11. Google’s next phone has probably been in the works for over a year, and it will have to lock down the features at least several months ahead of the expected August 2026 release window. The Pixel 10a is expected to launch even sooner, in spring 2026.



Yes, everything online sucks now—but it doesn’t have to


from good to bad to nothing

Ars chats with Cory Doctorow about his new book Enshittification.

We all feel it: Our once-happy digital spaces have become increasingly less user-friendly and more toxic, cluttered with extras nobody asked for and hardly anybody wants. There’s even a word for it: “enshittification,” named 2023 Word of the Year by the American Dialect Society. The term was coined by tech journalist/science fiction author Cory Doctorow, a longtime advocate of digital rights. Doctorow has spun his analysis of what’s been ailing the tech industry into an eminently readable new book, Enshittification: Why Everything Suddenly Got Worse and What To Do About It.

As Doctorow tells it, he was on vacation in Puerto Rico, staying in a remote cabin nestled in a cloud forest with microwave Internet service—i.e., very bad Internet service, since microwave signals struggle to penetrate clouds. It was a 90-minute drive to town, but when they tried to consult TripAdvisor for good local places to have dinner one night, they couldn’t get the site to load. “All you would get is the little TripAdvisor logo as an SVG filling your whole tab and nothing else,” Doctorow told Ars. “So I tweeted, ‘Has anyone at TripAdvisor ever been on a trip? This is the most enshittified website I’ve ever used.’”

Initially, he just got a few “haha, that’s a funny word” responses. “It was when I married that to this technical critique, at a moment when things were quite visibly bad to a much larger group of people, that made it take off,” Doctorow said. “I didn’t deliberately set out to do it. I bought a million lottery tickets and one of them won the lottery. It only took two decades.”

Yes, people sometimes express regret to him that the term includes a swear word. To which he responds, “You’re welcome to come up with another word. I’ve tried. ‘Platform decay’ just isn’t as good.” (“Encrapification” and “enpoopification” also lack a certain je ne sais quoi.)

In fact, it’s the sweariness that people love about the word. While that also means his book title inevitably gets bleeped on broadcast radio, “The hosts, in my experience, love getting their engineers to creatively bleep it,” said Doctorow. “They find it funny. It’s good radio, it stands out when every fifth word is ‘enbeepification.’”

People generally use “enshittification” colloquially to mean “the degradation in the quality and experience of online platforms over time.” Doctorow’s definition is more specific, encompassing “why an online service gets worse, how that worsening unfolds,” and how this process spreads to other online services, such that everything is getting worse all at once.

For Doctorow, enshittification is a disease with symptoms, a mechanism, and an epidemiology. It has infected everything from Facebook, Twitter, Amazon, and Google, to Airbnb, dating apps, iPhones, and everything in between. “For me, the fact that there were a lot of platforms that were going through this at the same time is one of the most interesting and important factors in the critique,” he said. “It makes this a structural issue and not a series of individual issues.”

It starts with the creation of a new two-sided online product of high quality, initially offered at a loss to attract users—say, Facebook, to pick an obvious example. Once the users are hooked on the product, the vendor moves to the second stage: degrading the product in some way for the benefit of their business customers. This might include selling advertisements, scraping and/or selling user data, or tweaking algorithms to prioritize content the vendor wishes users to see rather than what those users actually want.

This locks in the business customers, who, in turn, invest heavily in that product, such as media companies that started Facebook pages to promote their published content. Once business customers are locked in, the vendor can degrade those services too—i.e., by de-emphasizing news and links away from Facebook—to maximize profits to shareholders. Voila! The product is now enshittified.

The four horsemen of the shitocalypse

Doctorow identifies four key factors that have played a role in ushering in an era that he has dubbed the “Enshittocene.” The first is competition (markets), in which companies are motivated to make good products at affordable prices, with good working conditions, because otherwise customers and workers will go to their competitors.  The second is government regulation, such as antitrust laws that serve to keep corporate consolidation in check, or levying fines for dishonest practices, which makes it unprofitable to cheat.

The third is interoperability: the inherent flexibility of digital tools, which can play a useful adversarial role. “The fact that enshittification can always be reversed with a dis-enshittifying counter-technology always acted as a brake on the worst impulses of tech companies,” Doctorow writes. Finally, there is labor power; in the case of the tech industry, highly skilled workers were scarce and thus had considerable leverage over employers.

All four factors, when functioning correctly, should serve as constraints to enshittification. However, “One by one each enshittification restraint was eroded until it dissolved, leaving the enshittification impulse unchecked,” Doctorow writes. Any “cure” will require reversing those well-established trends.

But isn’t all this just the nature of capitalism? Doctorow thinks not, arguing that the aforementioned weakening of traditional constraints has resulted in the usual profit-seeking behavior producing very different, enshittified outcomes. “Adam Smith has this famous passage in Wealth of Nations about how it’s not due to the generosity of the baker that we get our bread but to his own self-regard,” said Doctorow. “It’s the fear that you’ll get your bread somewhere else that makes him keep prices low and keep quality high. It’s the fear of his employees leaving that makes him pay them a fair wage. It is the constraints that cause firms to behave better. You don’t have to believe that everything should be a capitalist or a for-profit enterprise to acknowledge that that’s true.”

Our wide-ranging conversation below has been edited for length to highlight the main points of discussion.

Ars Technica: I was intrigued by your choice of framing device, discussing enshittification as a form of contagion. 

Cory Doctorow: I’m on a constant search for different framing devices for these complex arguments. I have talked about enshittification in lots of different ways. That frame was one that resonated with people. I’ve been a blogger for a quarter of a century, and instead of keeping notes to myself, I make notes in public, and I write up what I think is important about something that has entered my mind, for better or for worse. The downside is that you’re constantly getting feedback that can be a little overwhelming. The upside is that you’re constantly getting feedback, and if you pay attention, it tells you where to go next, what to double down on.

Another way of organizing this is the Galaxy Brain meme, where the tiny brain is “Oh, this is because consumers shopped wrong.” The medium brain is “This is because VCs are greedy.” The larger brain is “This is because tech bosses are assholes.” But the biggest brain of all is “This is because policymakers created the policy environment where greed can ruin our lives.” There’s probably never going to be just one way to talk about this stuff that lands with everyone. So I like using a variety of approaches. I suck at being on message. I’m not going to do Enshittification for the Soul and Mornings with Enshittifying Maury. I am restless, and my Myers-Briggs type is ADHD, and I want to have a lot of different ways of talking about this stuff.

Ars Technica: One site that hasn’t (yet) succumbed is Wikipedia. What has protected Wikipedia thus far? 

Cory Doctorow: Wikipedia is an amazing example of what we at the Electronic Frontier Foundation (EFF) call the public interest Internet. Internet Archive is another one. Most of these public interest Internet services start off as one person’s labor of love, and that person ends up being what we affectionately call the benevolent dictator for life. Very few of these projects have seen the benevolent dictator for life say, “Actually, this is too important for one person to run. I cannot be the keeper of the soul of this project. I am prone to self-deception and folly just like every other person. This needs to belong to its community.” Wikipedia is one of them. The founder, my friend Jimmy Wales, woke up one day and said, “No individual should run Wikipedia. It should be a communal effort.”

There’s a much more durable and thick constraint on the decisions of anyone at Wikipedia to do something bad. For example, Jimmy had this idea that you could use AI in Wikipedia to help people make entries and navigate Wikipedia’s policies, which are daunting. The community evaluated his arguments and decided—not in a reactionary way, but in a really thoughtful way—that this was wrong. Jimmy didn’t get his way. It didn’t rule out something in the future, but that’s not happening now. That’s pretty cool.

Wikipedia is not just governed by a board; it’s also structured as a nonprofit. That doesn’t mean that there’s no way it could go bad. But it’s a source of friction against enshittification. Wikipedia has its entire corpus irrevocably licensed as the most open it can be without actually being in the public domain. Even if someone were to capture Wikipedia, there’s limits on what they could do to it.

There’s also a labor constraint in Wikipedia in that there’s very little that the leadership can do without bringing along a critical mass of a large and diffuse body of volunteers. That cuts against the volunteers working in unison—they’re not represented by a union; it’s hard for them to push back with one voice. But because they’re so diffuse and because there’s no paychecks involved, it’s really hard for management to do bad things. So if there are two people vying for the job of running the Wikimedia Foundation and one of them has got nefarious plans and the other doesn’t, the nefarious plan person, if they’re smart, is going to give it up—because if they try to squeeze Wikipedia, the harder they squeeze, the more it will slip through their grasp.

So these are structural defenses against enshittification of Wikipedia. I don’t know that it was in the mechanism design—I think they just got lucky—but it is a template for how to run such a project. It does raise this question: How do you build the community? But if you have a community of volunteers around a project, it’s a model of how to turn that project over to that community.

Ars Technica: Your case studies naturally include the decay of social media, notably Facebook and the social media site formerly known as Twitter. How might newer social media platforms resist the spiral into “platform decay”?

Cory Doctorow: What you want is a foundation in which people on social media face few switching costs. If the social media is interoperable, if it’s federatable, then it’s much harder for management to make decisions that are antithetical to the interests of users. If they do, users can escape. And it sets up an internal dynamic within the firm, where the people who have good ideas don’t get shouted down by the people who have bad but more profitable ideas, because it makes those bad ideas unprofitable. It creates both short and long-term risks to the bottom line.

There has to be a structure that stops their investors from pressurizing them into doing bad things, that stops them from rationalizing their way into complying. I think there’s this pathology where you start a company, you convince 150 of your friends to risk their kids’ college fund and their mortgage working for you. You make millions of users really happy, and your investors come along and say, “You have to destroy the life of 5 percent of your users with some change.” And you’re like, “Well, I guess the right thing to do here is to sacrifice those 5 percent, keep the other 95 percent happy, and live to fight another day, because I’m a good guy. If I quit over this, they’ll just put a bad guy in who’ll wreck things. I keep those 150 people working. Not only that, I’m kind of a martyr because everyone thinks I’m a dick for doing this. No one understands that I have taken the tough decision.”

I think that’s a common pattern among people who, in fact, are quite ethical but are also capable of rationalizing their way into bad things. I am very capable of rationalizing my way into bad things. This is not an indictment of someone’s character. But it’s why, before you go on a diet, you throw away the Oreos. It’s why you bind yourself to what behavioral economists call “Ulysses pacts”: You tie yourself to the mast before you go into the sea of sirens, not because you’re weak but because you’re strong enough now to know that you’ll be weak in the future.

I have what I would call the epistemic humility to say that I don’t know what makes a good social media network, but I do know what makes it so that when they go bad, you’re not stuck there. You and I might want totally different things out of our social media experience, but I think that you should 100 percent have the right to go somewhere else without losing anything. The easier it is for you to go without losing something, the better it is for all of us.

My dream is a social media universe where knowing what network someone is using is just a weird curiosity. It’d be like knowing which cell phone carrier your friend is using when you give them a call. It should just not matter. There might be regional or technical reasons to use one network or another, but it shouldn’t matter to anyone other than the user what network they’re using. A social media platform where it’s always easier for users to leave is much more future-proof and much more effective than trying to design characteristics of good social media.

Ars Technica: How might this work in practice?

Cory Doctorow: I think you just need a protocol. This is [Mike] Masnick’s point: protocols, not platforms. We don’t need a universal app to make email work. We don’t need a universal app to make the web work. I always think about this in the context of administrable regulation. Making a rule that says your social media network must be good for people to use and must not harm their mental health is impossible. The fact intensivity of determining whether a platform satisfies that rule makes it a non-starter.

Whereas if you were to say, “OK, you have to support an existing federation protocol, like AT Protocol and Mastodon ActivityPub,” both have ways to port identity from one place to another and have messages auto-forward. This is also in RSS. There’s a permanent redirect directive. You do that, you’re in compliance with the regulation.

Or you have to do something that satisfies the functional requirements of the spec. So it’s not “did you make someone sad in a way that was reckless?” That is a very hard question to adjudicate. Did you satisfy these functional requirements? It’s not easy to answer that, but it’s not impossible. If you want to have our users be able to move to your platform, then you just have to support the spec that we’ve come up with, which satisfies these functional requirements.

We don’t have to have just one protocol. We can have multiple ones. Not everything has to connect to everything else, but everyone who wants to connect should be able to connect to everyone else who wants to connect. That’s end-to-end. End-to-end is not “you are required to listen to everything someone wants to tell you.” It’s that willing parties should be connected when they want to be.
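The “permanent redirect directive” Doctorow mentions is concrete and old: HTTP’s 301 Moved Permanently status. As a minimal sketch (the old path and new URL are hypothetical), here is a stdlib server that answers requests for a moved RSS feed with a 301; well-behaved feed readers update their stored URL when they see it, which is the same shape of mechanism that lets a federated account point followers at its new home:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical old and new feed locations.
OLD_PATH = "/feed.xml"
NEW_URL = "https://new.example.com/feed.xml"

class FeedRedirect(BaseHTTPRequestHandler):
    """Answer requests for the old feed with 301 Moved Permanently.

    A 301 tells clients the move is permanent, so subscribers follow
    the feed to its new host without doing anything themselves.
    """

    def do_GET(self):
        if self.path == OLD_PATH:
            self.send_response(301)
            self.send_header("Location", NEW_URL)
            self.end_headers()
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        # Keep request logging quiet.
        pass

def serve(port=0):
    """Bind the redirect server (port 0 picks a free port) and return it."""
    return HTTPServer(("127.0.0.1", port), FeedRedirect)
```

The point of the sketch is how little machinery "you can always leave" requires at the protocol layer; the hard part, as Doctorow argues, is mandating that platforms support an equivalent for identity and messages.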

Ars Technica: What about security and privacy protocols like GPG and PGP?

Cory Doctorow: There’s this argument that the reason GPG is so hard to use is that it’s intrinsic; you need a closed system to make it work. But also, until pretty recently, GPG was supported by one part-time guy in Germany who got 30,000 euros a year in donations to work on it, and he was supporting 20 million users. He was primarily interested in making sure the system was secure rather than making it usable. If you were to put Big Tech quantities of money behind improving ease of use for GPG, maybe you decide it’s a dead end because it is a 30-year-old attempt to stick a security layer on top of SMTP. Maybe there’s better ways of doing it. But I doubt that we have reached the apex of GPG usability with one part-time volunteer.

I just think there’s plenty of room there. If you have a pretty good project that is run by a large firm and has had billions of dollars put into it, the most advanced technologists and UI experts working on it, and you’ve got another project that has never been funded and has only had one volunteer on it—I would assume that dedicating resources to that second one would produce pretty substantial dividends, whereas the first one is only going to produce these minor tweaks. How much more usable does iOS get with every iteration?

I don’t know if PGP is the right place to start to make privacy, but I do think that if we can create independence of the security layer from the transport layer, which is what PGP is trying to do, then it wouldn’t matter so much that there is end-to-end encryption in Mastodon DMs or in Bluesky DMs. And again, it doesn’t matter whose SIM is in your phone, so it just shouldn’t matter which platform you’re using so long as it’s secure and reliably delivered end-to-end.
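One way to read “independence of the security layer from the transport layer”: protect the message itself before any transport sees it, so the envelope can travel over SMTP, a Mastodon DM, or anything else without the carrier being trusted. A toy stdlib sketch using an HMAC for message-layer integrity (a real system like PGP uses public-key encryption and signatures; the shared key here is a stand-in assumption):

```python
import hashlib
import hmac
import json

def seal(message: bytes, key: bytes) -> bytes:
    """Wrap a message with a MAC computed over its contents.

    The result is an opaque envelope any transport can carry;
    the transport never needs to be trusted with integrity.
    """
    tag = hmac.new(key, message, hashlib.sha256).hexdigest()
    return json.dumps({"body": message.decode(), "tag": tag}).encode()

def open_sealed(envelope: bytes, key: bytes) -> bytes:
    """Verify the MAC and return the body, or raise if tampered with."""
    data = json.loads(envelope)
    body = data["body"].encode()
    expected = hmac.new(key, body, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, data["tag"]):
        raise ValueError("message was modified in transit")
    return body
```

Because sealing happens before the message touches any network, swapping Mastodon for Bluesky for email changes nothing about the security properties, which is exactly the layering PGP attempts on top of SMTP.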

Ars Technica: These days, I’m almost contractually required to ask about AI. There’s no escaping it. But it’s certainly part of the ongoing enshittification.

Cory Doctorow: I agree. Again, the companies are too big to care. They know you’re locked in, and the things that make enshittification possible—like remote software updating, ongoing analytics of use of devices—they allow for the most annoying AI dysfunction. I call it the fat-finger economy, where you have someone who works in a company on a product team, and their KPI, and therefore their bonus and compensation, is tied to getting you to use AI a certain number of times. So they just look at the analytics for the app and they ask, “What button gets pushed the most often? Let’s move that button somewhere else and make an AI summoning button.”

They’re just gaming a metric. It’s causing significant across-the-board regressions in the quality of the product, and I don’t think it’s justified by people who then discover a new use for the AI. That’s a paternalistic justification. The user doesn’t know what they want until you show it to them: “Oh, if I trick you into using it and you keep using it, then I have actually done you a favor.” I don’t think that’s happening. I don’t think people are like, “Oh, rather than press reply to a message and then type a message, I can instead have this interaction with an AI about how to send someone a message about takeout for dinner tonight.” I think people are like, “That was terrible. I regret having tapped it.” 

The speech-to-text is unusable now. I flatter myself that my spoken and written communication is not statistically average. The things that make it me and that make it worth having, as opposed to just a series of multiple-choice answers, are all the ways in which it diverges from statistical averages. Back when the model was stupider, when it gave up sooner if it didn’t recognize what word it might be and just transcribed what it thought you’d said rather than trying to substitute a more probable word, it was more accurate. Now, what I’m getting are statistically average words that are meaningless.

That elision of nuance and detail is characteristic of what makes AI products bad. There is a bunch of stuff that AI is good at that I’m excited about, and I think a lot of it is going to survive the bubble popping. But I fear that we’re not planning for that. I fear what we’re doing is taking workers whose jobs are meaningful, replacing them with AIs that can’t do their jobs, and then those AIs are going to go away and we’ll have nothing. That’s my concern.

Ars Technica: You prescribe a “cure” for enshittification, but in such a polarized political environment, do we even have the collective will to implement the necessary policies?

Cory Doctorow: The good news is also the bad news, which is that this doesn’t just affect tech. Take labor power. There are a lot of tech workers who are looking at the way their bosses treat the workers they’re not afraid of—Amazon warehouse workers and drivers, Chinese assembly line manufacturers for iPhones—and realizing, “Oh, wait, when my boss stops being afraid of me, this is how he’s going to treat me.” Mark Zuckerberg stopped going to those all-hands town hall meetings with the engineering staff. He’s not pretending that you are his peers anymore. He doesn’t need to; he’s got a critical mass of unemployed workers he can tap into. I think a lot of Googlers figured this out after the 12,000-person layoffs. Tech workers are realizing they missed an opportunity, that they’re going to have to play catch-up, and that the only way to get there is by solidarity with other kinds of workers.

The same goes for competition. There’s a bunch of people who care about media, who are watching Warner about to swallow Paramount and who are saying, “Oh, this is bad. We need antitrust enforcement here.” When we had a functional antitrust system for the last four years, we saw a bunch of telecoms mergers stopped because once you start enforcing antitrust, it’s like eating Pringles. You just can’t stop. You embolden a lot of people to start thinking about market structure as a source of either good or bad policy. The real thing that happened with [former FTC chair] Lina Khan doing all that merger scrutiny was that people just stopped planning mergers.

There are a lot of people who benefit from this. It’s not just tech workers or tech users; it’s not just media users. Hospital consolidation, pharmaceutical consolidation, has a lot of people who are very concerned about it. Mark Cuban is freaking out about pharmacy benefit manager consolidation and vertical integration with HMOs, as he should be. I don’t think that we’re just asking the anti-enshittification world to carry this weight.

Same with the other factors. The best progress we’ve seen on interoperability has been through right-to-repair. It hasn’t been through people who care about social media interoperability. One of the first really good state-level right-to-repair bills was the one that [Governor] Jared Polis signed in Colorado for powered wheelchairs. Those people have a story that is much more salient to normies.

“What do you mean you spent six months in bed because there’s only two powered wheelchair manufacturers and your chair broke and you weren’t allowed to get it fixed by a third party?” And they’ve slashed their repair department, so it takes six months for someone to show up and fix your chair. So you had bed sores and pneumonia because you couldn’t get your chair fixed. This is bullshit.

So the coalitions are quite large. The thing that all of those forces share—interoperability, labor power, regulation, and competition—is that they’re all downstream of corporate consolidation and wealth inequality. Figuring out how to bring all of those different voices together, that’s how we resolve this. In many ways, the enshittification analysis and remedy are a human factors and security approach to designing an enshittification-resistant Internet. It’s about understanding this as a red team, blue team exercise. How do we challenge the status quo that we have now, and how do we defend the status quo that we want?

Anything that can’t go on forever eventually stops. That is the first law of finance, Stein’s law. We are reaching multiple breaking points, and the question is whether we reach things like breaking points for the climate and for our political system before we reach breaking points for the forces that would rescue those from permanent destruction.


Jennifer is a senior writer at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Baltimore with her spouse, physicist Sean M. Carroll, and their two cats, Ariel and Caliban.

Yes, everything online sucks now—but it doesn’t have to


Ring cameras are about to get increasingly chummy with law enforcement


Amazon’s Ring partners with company whose tech has reportedly been used by ICE.

Ring’s Outdoor Cam Pro. Credit: Amazon

Law enforcement agencies will soon have easier access to footage captured by Amazon’s Ring smart cameras. In a partnership announced this week, Amazon will allow approximately 5,000 local law enforcement agencies to request access to Ring camera footage via surveillance platforms from Flock Safety. Ring’s cooperation with law enforcement and the reported use of Flock technologies by federal agencies, including US Immigration and Customs Enforcement (ICE), have resurfaced privacy concerns that have followed the devices for years.

According to Flock’s announcement, its Ring partnership allows local law enforcement members to use Flock software “to send a direct post in the Ring Neighbors app with details about the investigation and request voluntary assistance.” Requests must include “specific location and timeframe of the incident, a unique investigation code, and details about what is being investigated,” and users can look at the requests anonymously, Flock said.

“Any footage a Ring customer chooses to submit will be securely packaged by Flock and shared directly with the requesting local public safety agency through the FlockOS or Flock Nova platform,” the announcement reads.

Flock said its local law enforcement users will gain access to Ring Community Requests in “the coming months.”

A flock of privacy concerns

Outside its software platforms, Flock is known for license plate recognition cameras. Flock customers can also search footage from Flock cameras using descriptors to find people, such as “man in blue shirt and cowboy hat.” Besides law enforcement agencies, Flock says 6,000 communities and 1,000 businesses use its products.

For years, privacy advocates have warned against companies like Flock.

This week, US Sen. Ron Wyden (D-Ore.) sent a letter [PDF] to Flock CEO Garrett Langley saying that ICE’s Homeland Security Investigations (HSI), the Secret Service, and the Naval Criminal Investigative Service have had access to footage from Flock’s license plate cameras.

“I now believe that abuses of your product are not only likely but inevitable and that Flock is unable and uninterested in preventing them,” Wyden wrote.

In August, Jay Stanley, senior policy analyst for the ACLU Speech, Privacy, and Technology Project, wrote that “Flock is building a dangerous, nationwide mass-surveillance infrastructure.” Stanley pointed to ICE using Flock’s network of cameras, as well as Flock’s efforts to build a people lookup tool with data brokers.

Matthew Guariglia, senior policy analyst at the Electronic Frontier Foundation (EFF), told Ars via email that Flock is a “mass surveillance tool” that “has increasingly been used to spy on both immigrants and people exercising their First Amendment-protected rights.”

Flock has earned this reputation among privacy advocates through its own cameras, not Ring’s.

An Amazon spokesperson told Ars Technica that only local public safety agencies will be able to make Community Requests via Flock software, and that requests will also show the name of the agency making the request.

A Flock spokesperson told Ars:

Flock does not currently have any contracts with any division of [the US Department of Homeland Security], including ICE. The Ring Community Requests process through Flock is only available for local public safety agencies for specific, active investigations. All requests are time and geographically-bound. Ring users can choose to share relevant footage or ignore the request.

Flock’s rep added that all activity within FlockOS and Flock Nova is “permanently recorded in a comprehensive CJIS-compliant audit trail for unalterable custody tracking,” referring to a set of standards created by the FBI’s Criminal Justice Information Services division.

But there’s still concern that federal agencies will end up accessing Ring footage through Flock. Guariglia told Ars:

Even without formal partnerships with federal authorities, data from these surveillance companies flow to agencies like ICE through local law enforcement. Local and state police have run more than 4,000 Flock searches on behalf of federal authorities or with a potential immigration focus, reporting has found. Additionally, just this month, it became clear that Texas police searched 83,000 Flock cameras in an attempt to prosecute a woman for her abortion and then tried to cover it up.

Ring cozies up to the law

This week’s announcement shows Amazon, which acquired Ring in 2018, increasingly positioning its consumer cameras as a law enforcement tool. After years of cops using Ring footage, Amazon last year said that it would stop letting police request Ring footage—unless it was an “emergency”—only to reverse course about 18 months later by allowing police to request Ring footage through a Flock rival, Axon.

While announcing Ring’s deals with Flock and Axon, Ring founder and CEO Jamie Siminoff claimed that the partnerships would help Ring cameras keep neighborhoods safe. But there’s doubt as to whether people buy Ring cameras to protect their neighborhood.

“Ring’s new partnership with Flock shows that the company is more interested in contributing to mounting authoritarianism than servicing the specific needs of their customers,” Guariglia told Ars.

Interestingly, Ring initiated conversations about a deal with Flock, Langley told CNBC.

Flock says that its cameras don’t use facial recognition, which has been criticized for racial biases. But local law enforcement agencies using Flock will soon have access to footage from Ring cameras with facial recognition. In a conversation with The Washington Post this month, Calli Schroeder, senior counsel at the consumer advocacy and policy group Electronic Privacy Information Center, described the new feature for Ring cameras as “invasive for anyone who walks within range of” a Ring doorbell, since they likely haven’t consented to facial recognition being used on them.

Amazon, for its part, has mostly pushed the burden of ensuring responsible facial recognition use to its customers. Schroeder shared concern with the Post that Ring’s facial recognition data could end up being shared with law enforcement.

Some people who are perturbed about Ring deepening its ties with law enforcement have complained online.

“Inviting big brother into the system. Screw that,” a user on the Ring subreddit said this week.

Another Reddit user said: “And… I’m gone. Nope, NO WAY IN HELL. Goodbye, Ring. I’ll be switching to a UniFi[-brand] system with 100 percent local storage. You don’t get my money any more. This is some 1984 BS …”

Privacy concerns are also exacerbated by Ring’s past, as the company has previously failed to meet users’ privacy expectations. In 2023, Ring agreed to pay $5.8 million to settle claims that employees illegally spied on Ring customers.

Amazon and Flock say their collaboration will only involve voluntary customers and local law enforcement agencies. But there’s still reason to be concerned about people sending doorbell and personal camera footage to law enforcement via platforms that are reportedly widely used by federal agencies for deportation purposes. Combined with the privacy issues that have followed Ring for years, it’s not hard to see why some feel that Amazon scaling up Ring’s association with any type of law enforcement is unacceptable.

And it appears that Amazon and Flock would both like Ring customers to opt in when possible.

“It will be turned on for free for every customer, and I think all of them will use it,” Langley told CNBC.


Scharon is a Senior Technology Reporter at Ars Technica writing news, reviews, and analysis on consumer gadgets and services. She’s been reporting on technology for over 10 years, with bylines at Tom’s Hardware, Channelnomics, and CRN UK.



12 years of HDD analysis brings insight to the bathtub curve’s reliability

But as seen in Backblaze’s graph above, the company’s HDDs aren’t adhering to that principle. The blog’s authors noted that in 2021 and 2025, Backblaze’s drives had a “pretty even failure rate through the significant majority of the drives’ lives, then a fairly steep spike once we get into drive failure territory.”

The blog continues:

What does that mean? Well, drives are getting better, and lasting longer. And, given that our trendlines are about the same shape from 2021 to 2025, we should likely check back in when 2029 rolls around to see if our failure peak has pushed out even further.

Speaking with Ars Technica, Doyle said that Backblaze’s analysis is good news for individuals shopping for larger hard drives because the devices are “going to last longer.”

She added:

In many ways, you can think of a datacenter’s use of hard drives as the ultimate test for a hard drive—you’re keeping a hard drive on and spinning for the max amount of hours, and often the amount of times you read/write files is well over what you’d ever see as a consumer. Industry trend-wise, drives are getting bigger, which means that oftentimes, folks are buying fewer of them. Reporting on how these drives perform in a data center environment, then, can give you more confidence that whatever drive you’re buying is a good investment.

The longevity of HDDs is another reason for shoppers to consider them over faster, more expensive SSDs.

“It’s a good idea to decide how justified the improvement in latency is,” Doyle said.

Questioning the bathtub curve

Doyle and Patterson aren’t looking to toss the bathtub curve out with the bathwater. They’re not suggesting that the bathtub curve doesn’t apply to HDDs, but rather that it overlooks additional factors affecting HDD failure rates, including “workload, manufacturing variation, firmware updates, and operational churn.” The model also makes the following assumptions, per the authors:

  • Devices are identical and operate under the same conditions
  • Failures happen independently, driven mostly by time
  • The environment stays constant across a product’s life

While these conditions can largely be met in datacenter environments, “conditions can’t ever be perfect,” Doyle and Patterson noted. When considering an HDD’s failure rates over time, it’s wise to consider both the bathtub curve and how you use the component.
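The classic bathtub shape the authors are questioning is often modeled as the sum of three hazard components: a decreasing infant-mortality term, a constant random-failure floor, and an increasing wear-out term. As a rough illustration only (this is not Backblaze’s methodology, and the Weibull shape and scale parameters here are arbitrary, chosen just to reproduce the textbook shape):

```python
def weibull_hazard(t, shape, scale):
    """Weibull hazard rate: shape < 1 decreases over time, shape > 1 increases."""
    return (shape / scale) * (t / scale) ** (shape - 1)

def bathtub_hazard(t):
    """Toy bathtub curve: infant mortality + constant random failures + wear-out.

    All parameters are illustrative, not fitted to any real drive data.
    """
    infant = weibull_hazard(t, shape=0.5, scale=2.0)    # early failures, falling
    random_failures = 0.02                              # flat "useful life" floor
    wearout = weibull_hazard(t, shape=5.0, scale=10.0)  # late failures, rising
    return infant + random_failures + wearout

# High hazard early, low in mid-life, high again late (units are arbitrary "years")
early, mid, late = bathtub_hazard(0.1), bathtub_hazard(5.0), bathtub_hazard(12.0)
```

Backblaze’s observation, in these terms, is that real fleets look flatter than this toy curve through mid-life, with the wear-out spike arriving later than the classic model predicts.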



AI-powered features begin creeping deeper into the bedrock of Windows 11


everything old is new again

Copilot expands with an emphasis on creating and editing files, voice input.

Microsoft is hoping that Copilot will succeed as a voice-driven assistant where Cortana failed. Credit: Microsoft


Like virtually every major Windows announcement in the last three years, the spate of features that Microsoft announced for the operating system today all revolve around generative AI. In particular, they’re concerned with the company’s more recent preoccupation with “agentic” AI, an industry buzzword for “telling AI-powered software to perform a task, which it then does in the background while you move on to other things.”

But the overarching impression I got, both from reading the announcement and sitting through a press briefing earlier this month, is that Microsoft is using language models and other generative AI technologies to try again with Cortana, Microsoft’s failed and discontinued entry in the voice assistant wars of the 2010s.

According to Microsoft’s Consumer Chief Marketing Officer Yusuf Mehdi, “AI PCs” should be able to recognize input “naturally, in text or voice,” guide users based on what’s on their screens at any given moment, and “take action on your behalf.”

The biggest of today’s announcements is the introduction of a new “Hey, Copilot” activation phrase for Windows 11 PCs, which, once enabled, allows users to summon the chatbot using only their voice rather than a mouse or keyboard (if you do want to use the keyboard, either the Copilot key or the same Windows + C keyboard shortcut that used to bring up Cortana will also summon Copilot). Saying “goodbye” will dismiss Copilot when you’re done working with it.

Macs and most smartphones have sported similar functionality for a while now, but Microsoft is obviously hoping that having Copilot answer those questions instead of Cortana will lead to success rather than another failure.

The key limitation of the original Cortana—plus Siri, Alexa, and the rest of their ilk—was that it could only perform a relatively limited, predetermined list of actions. Complex queries, or anything the assistant didn’t understand, often got bounced to a general web search. The results of that search might or might not have accomplished what you wanted, but it ultimately shifted the onus back onto the user to find and follow those directions.

To make Copilot more useful, Microsoft has also announced that Copilot Vision is being rolled out worldwide “in all markets where Copilot is offered” (it’s been available in the US since mid-June). Copilot Vision will read the contents of a screen or an app window and can attempt to offer useful guidance or feedback, like walking you through an obscure task in Excel or making suggestions based on a group of photos or a list of items. (Microsoft additionally announced a beta for Gaming Copilot, a sort of offshoot of Copilot Vision intended specifically for walkthroughs and advice for whatever game you happen to be playing.)

Beyond these tweaks or wider rollouts for existing features, Microsoft is also testing a few new AI and Copilot-related additions that aim to fundamentally change how users interact with their Windows PCs by reading and editing files.

All of the features Microsoft is announcing today are intended for all Windows 11 PCs, not just those that meet the stricter hardware requirements of the Copilot+ PC label. That gives them a much wider potential reach than things like Recall or Click to Do, and it makes knowing what these features do and how they safeguard security and privacy that much more important.

AI features work their way into the heart of Windows

Microsoft wants general-purpose AI agents to be able to create and modify files for you, among other things, working in the background while you move on to other tasks. Credit: Microsoft

Whether you’re talking about the Copilot app, the generative AI features added to apps like Notepad and Paint, or the data-scraping Windows Recall feature, most of the AI additions to Windows in the last few years have been app-specific, or cordoned off in some way from core Windows features like the taskbar and File Explorer.

But AI features are increasingly working their way into bedrock Windows features like the taskbar and Start menu, and being given capabilities that allow them to analyze or edit files or even perform file management tasks.

The standard Search field that has been part of Windows 10 and Windows 11 for the last decade, for example, is being transformed into an “Ask Copilot” field; this feature will still be able to look through local files just like the current version of the Search box, but Microsoft also envisions it as a keyboard-driven interface for Copilot for the times when you can’t or don’t want to use your voice. (We don’t know whether the “old” search functionality lives on in the Start menu or as an optional fallback for people who disable Copilot, at least not yet.)

A feature called Copilot Actions will also expand the number of ways that Copilot can interact with local files on your PC. Microsoft cites “sorting through recent vacation photos” and extracting information from PDFs and other documents as two possible use cases, and says this early preview version will focus on “a narrow set of use cases.” But it’s meant to be “a general-purpose agent” capable of “interacting with desktop and web applications.” This gives it a lot of latitude to augment or replace basic keyboard-and-mouse input for some interactions.

Screenshots of a Windows 11 testing build showed Copilot taking over the area of the taskbar that is currently reserved for the Search field. Credit: Microsoft

Finally, Microsoft is taking another stab at allowing Copilot to change the settings on your PC, something that earlier versions were able to do but were removed in a subsequent iteration. Copilot will attempt to respond to plain-language questions about your PC settings with a link to the appropriate part of Windows’ large, labyrinthine Settings app.

These new features dovetail with others Microsoft has been testing for a few weeks or months now. Copilot Connectors, rolled out to Windows Insiders earlier this month, can give Copilot access to email and file-sharing services like Gmail and Dropbox. New document creation features allow Copilot to export the contents of a Copilot chat into a Word or PDF document, Excel spreadsheet, or PowerPoint deck for more refinement and editing. And AI actions in the File Explorer appear in Windows’ right-click menu and allow for the direct manipulation of files, including batch-editing images and summarizing documents. Together with the Copilot Vision features that enable Copilot to see the full contents of Office documents rather than just the on-screen portions, all of these features inject AI into more basic everyday tasks, rather than cordoning them off in individual apps.

Per usual, we don’t know exactly when any of these new features will roll out to the general public, and some may never be available outside of the Windows Insider program. None of them are currently baked into the Windows 11 25H2 update, at least not the version that the company is currently distributing through its Release Preview channel.

Learning the lessons of Recall

Microsoft at least seems to have learned lessons from the botched rollout of Windows Recall last year.

If you didn’t follow along: Microsoft’s initial plan had been to roll out Recall with the first wave of Copilot+ PCs, but without sending it through the Windows Insider Preview program first. This program normally gives power users, developers, security researchers, and others the opportunity to kick the tires on upcoming Windows features before they’re launched, giving Microsoft feedback on bugs, security holes, or other flaws before rolling them out to all Windows PCs.

But security researchers who did manage to get their hands on the early, nearly launched version of Recall discovered a deeply flawed feature that preserved too much personal information and was trivially easy to exploit—a plain-text file with OCR text from all of a user’s PC usage could be grabbed by pretty much anybody with access to the PC, either in-person or remote. It was also enabled by default on PCs that supported it, forcing users to manually opt out if they didn’t want to use it.

In the end, Microsoft pulled that version of Recall, took nearly a year to overhaul its security architecture, and spent months letting the feature make its way through the Windows Insider Preview channels before finally rolling it out to Copilot+ PCs. The resulting product still presents some risks to user privacy, as does any feature that promises to screenshot and store months of history about how you use your PC, but it’s substantially more refined, the most egregious security holes have been closed, and it’s off by default.

Copilot Actions are, at least for now, also disabled by default. And Microsoft Corporate Vice President of Windows Security Dana Huang put up a lengthy accompanying post explaining several of the steps Microsoft has taken to protect user privacy and security when using Copilot Actions. These include running AI agents with their own dedicated user accounts to reduce their access to data in your user folder; mandatory code-signing; and giving agents the fewest privileges they need to do their jobs. All of the agents’ activities will also be documented, so users can verify what actions have been taken and correct any errors.

Whether these security and privacy promises are good enough is an open question, but unlike the initial version of Recall, all of these new features will be sent out through the Windows Insider channels for testing first. If there are serious flaws, they’ll be out in public early on, rather than dropped on users unawares.


Andrew is a Senior Technology Reporter at Ars Technica, with a focus on consumer tech including computer hardware and in-depth reviews of operating systems like Windows and macOS. Andrew lives in Philadelphia and co-hosts a weekly book podcast called Overdue.



OnePlus unveils OxygenOS 16 update with deep Gemini integration

The updated Android software expands what you can add to Mind Space and uses Gemini to act on it. For starters, you can add scrolling screenshots and voice memos up to 60 seconds in length, giving the AI more data from which to generate content. For example, if you take screenshots of hotel listings and airline flights, you can tell Gemini to use your Mind Space content to create a trip itinerary. This will be fully integrated with the phone and won’t require a separate subscription to Google’s AI tools.


Credit: OnePlus

Mind Space isn’t a totally new idea—it’s quite similar to AI features like Nothing’s Essential Space and Google’s Pixel Screenshots and Journal. The idea is that if you give an AI model enough data on your thoughts and plans, it can provide useful insights. That’s still hypothetical based on what we’ve seen from other smartphone OEMs, but that’s not stopping OnePlus from fully embracing AI in Android 16.

In addition to beefing up Mind Space, OxygenOS 16 will also add system-wide AI writing tools, which is another common AI add-on. Like the systems from Apple, Google, and Samsung, you will be able to use the OnePlus writing tools to adjust text, proofread, and generate summaries.

OnePlus will make OxygenOS 16 available starting October 17 as an open beta. You’ll need a OnePlus device from the past three years to run the software, both in the beta phase and when it’s finally released. OnePlus hasn’t offered a specific date for the stable release, which will debut on the OnePlus 15 devices, with releases for other supported phones and tablets coming later.



Apple TV and Peacock bundle starts at $15/month, available on Oct. 20

In a rarity for Apple’s streaming service, users will be able to buy bundled subscriptions to Apple TV and Peacock for a discount, starting on October 20.

On its own, the Apple TV streaming service (which was called Apple TV+ until Monday) is $13 per month. NBCUniversal’s Peacock starts at $8/month with ads and $11/month without ads. With the upcoming bundle, people can subscribe to both for a total of $15/month or $20/month, depending on whether Peacock has ads or not (Apple TV never has ads).

People can buy the bundles through either Apple’s or Peacock’s websites and apps.

Apple and NBCUniversal are hoping to drive subscriptions with the bundle. In a statement, Oliver Schusser, Apple’s VP of Apple TV, Apple Music, Sports, and Beats, said that he thinks the bundle will help bring Apple TV content “to more viewers in more places.”

Bundles that combine more than one streaming service for an overall discount have become a popular tool for streaming providers trying to curb cancellations. The idea is that people are less likely to cancel a streaming subscription if it’s tied to another streaming service or product, like cellular service.

Apple, however, has largely been a holdout. It used to offer a bundle with Apple TV for full price, plus Showtime and Paramount+ (then called CBS All Access) for no extra cost. But those add-ons, especially at the time, could be considered more cable-centric compared to the streaming bundle announced today.

Apple will also make select Apple Originals content available to watch with a Peacock subscription. The announcement says:

At launch, Peacock subscribers can enjoy up to three episodes of Stick, Slow Horses, Silo, The Buccaneers, Foundation, Palm Royale, and Prehistoric Planet from Apple TV for free, while Apple TV app users will be able to watch up to three episodes of Law & Order, Bel-Air, Twisted Metal, Love Island Games, Happy’s Place, The Hunting Party, and Real Housewives of Miami from Peacock.

Additionally, people who are subscribed to Apple One’s Family or Premier Plans can get Peacock’s most expensive subscription—which adds offline downloads to the ad-free tier and is typically $17/month—for about $11/month (“a 35 percent discount,” per the announcement). The discount marks the first time that Apple has offered Apple One subscribers a deal for a non-Apple product, suggesting a new willingness from Apple to partner with rivals to help its services business.
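A quick back-of-the-envelope check of the prices above (using the list prices as stated in the article, not any published Apple math) shows what the bundle actually saves:

```python
# List prices, $/month, as stated in the announcement coverage
apple_tv = 13
peacock_ads = 8
peacock_no_ads = 11

# Bundle prices, $/month
bundle_ads = 15
bundle_no_ads = 20

# Monthly savings versus subscribing separately
savings_ads = apple_tv + peacock_ads - bundle_ads           # $6/month
savings_no_ads = apple_tv + peacock_no_ads - bundle_no_ads  # $4/month

# Apple One Family/Premier perk: 35 percent off Peacock's $17 top tier
peacock_premium_plus = 17
discounted = round(peacock_premium_plus * (1 - 0.35), 2)    # $11.05, "about $11"
```

So the with-ads bundle is the better relative deal ($6/month off versus $4/month off), and the Apple One discount lands at the “about $11/month” figure Apple quotes.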


Google’s AI videos get a big upgrade with Veo 3.1

It’s getting harder to know what’s real on the Internet, and Google is not helping one bit with the announcement of Veo 3.1. The company’s new video model supposedly offers better audio and realism, along with greater prompt accuracy. The updated video AI will be available throughout the Google ecosystem, including the Flow filmmaking tool, where the new model will unlock additional features. And if you’re worried about the cost of conjuring all these AI videos, Google is also adding a “Fast” variant of Veo.

Veo made waves when it debuted earlier this year, demonstrating a staggering improvement in AI video quality just a few months after Veo 2’s release. It turns out that having all that video on YouTube is very useful for training AI models, so Google is already moving on to Veo 3.1 with a raft of new features.

Google says Veo 3.1 offers stronger prompt adherence, which results in better video outputs and fewer wasted compute cycles. Audio, which was a hallmark feature of the Veo 3 release, has reportedly improved, too. Veo 3’s text-to-video was limited to 720p landscape output, but there’s an ever-increasing volume of vertical video on the Internet. So Veo 3.1 can produce both landscape (16:9) and portrait (9:16) video.

Google previously said it would bring Veo video tools to YouTube Shorts, which use a vertical video format like TikTok. The release of Veo 3.1 probably opens the door to fulfilling that promise. You can bet Veo videos will show up more frequently on TikTok as well now that it fits the format. This release also keeps Google in its race with OpenAI, which recently released a Sora iPhone app with an impressive new version of its video-generating AI.

Google’s AI videos get a big upgrade with Veo 3.1 Read More »

DirecTV screensavers will show AI-generated ads with your face in 2026

According to a March blog post from Glance’s VP of AI, Ian Anderson, Glance’s avatars “analyze customer behavior, preferences, and browsing history to provide tailor-made product recommendations, enhancing engagement and conversion rates.”

In a statement today, Naveen Tewari, Glance’s CEO and founder, said the screensavers will allow people to “instantly select a brand and reimagine themselves in the brand catalog right from their living-room TV itself.”

The DirecTV screensavers will also allow people to make 30-second-long AI-generated videos featuring their avatar, The Verge reported.

In addition to providing an “AI-commerce experience,” DirecTV expects the screensavers to help with “content discovery” and “personalization,” Vikash Sharma, SVP of product marketing at DirecTV, said in a statement.

The screensavers will also be able to show real-time weather and sports scores, Glance said.

A natural progression

Turning to ad-centric screensavers may frustrate customers who didn’t expect ads when they bought Gemini devices for their streaming capabilities.

However, DirecTV has an expanding advertising business that has included experimenting with ad types, such as ads that show when people hit pause. As far as offensive ads go, screensaver ads can be considered less intrusive, since they typically show only when someone isn’t actively viewing their TV. Gemini screensavers can also be disabled.

It has become increasingly important for DirecTV to diversify revenue beyond satellite and Internet subscriptions. DirecTV had over 20 million subscribers in 2015; in 2024, streaming business publication Next TV, citing an anonymous source “close to the company,” reported that the AT&T-owned firm was down to about 11 million subscribers.

Simultaneously, the streaming industry—including streaming services and streaming software—has been increasingly relying on advertising to boost revenue. For some streaming service providers, increasing revenue through ads is starting to eclipse the pressure to do so through subscriber counts. Considering DirecTV’s declining viewership and growing interest in streaming, finding more ways to sell ads seems like a natural progression.

However, with legacy pay TV providers already dealing with dwindling subscriptions, introducing new types of ads risks making DirecTV less appealing.

And it’s likely that things won’t end there.

“This, we can integrate across different places within the television,” Glance COO Mansi Jain told The Verge. “We are starting with the screensaver, but tomorrow… we can integrate it in the launcher of the TV.”

DirecTV screensavers will show AI-generated ads with your face in 2026 Read More »

Google will let Gemini schedule meetings for you in Gmail

Meetings can be a real drain on productivity, but a new Gmail feature might at least cut down on the time you spend scheduling them. Google has announced that “Help Me Schedule” is coming to Gmail, a feature that leverages Gemini to recognize when you want to schedule a meeting and offer possible meeting times for the email recipient to choose from.

The new meeting feature is reminiscent of Magic Cue on Google’s latest Pixel phones. As you type emails, Gmail will be able to recognize when you are planning a meeting. A Help Me Schedule button will appear in the toolbar. Upon clicking, Google’s AI will swing into action and find possible meeting times that match the context of your message and are available in your calendar.

When you engage with Help Me Schedule, the AI generates an in-line meeting widget for your message. The recipient can select the time that works for them, and that’s it—the meeting is scheduled for both parties. What about meetings with more than one invitee? Google says the feature won’t support groups at launch.

Google has been on a Gemini-fueled tear lately, expanding access to AI features across a range of products. The company’s nano banana image model is coming to multiple products, and the Veo video model is popping up in Photos and YouTube. Gemini has also rolled out to Google Home to offer AI-assisted notifications and activity summaries.

Google will let Gemini schedule meetings for you in Gmail Read More »

Nvidia sells tiny new computer that puts big AI on your desktop

On Tuesday, Nvidia announced it will begin taking orders for the DGX Spark, a $4,000 desktop AI computer that wraps one petaflop of computing performance and 128GB of unified memory into a form factor small enough to sit on a desk. Its biggest selling point is likely its large integrated memory that can run larger AI models than consumer GPUs.

Orders open on Wednesday, October 15, through Nvidia’s website, with systems also available from manufacturing partners and select US retail stores.

The DGX Spark, which Nvidia previewed as “Project DIGITS” in January and formally named in May, represents Nvidia’s attempt to create a new category of desktop computer workstation specifically for AI development.

With the Spark, Nvidia seeks to address a problem facing some AI developers: Many AI tasks exceed the memory and software capabilities of standard PCs and workstations (more on that below), forcing them to shift their work to cloud services or data centers. However, the actual market for a desktop AI workstation remains uncertain, particularly given the upfront cost versus cloud alternatives, which allow developers to pay as they go.

Nvidia’s Spark reportedly includes enough memory to locally run larger-than-typical AI models with up to 200 billion parameters, and to fine-tune models containing up to 70 billion parameters, without requiring remote infrastructure. Potential uses include running larger open-weights language models and media synthesis models such as AI image generators.
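Some rough arithmetic shows why 128GB can hold a 200-billion-parameter model only at aggressive quantization. This is our own illustrative sketch, not a figure from Nvidia, and it counts weight storage alone; activations, KV cache, and (for fine-tuning) optimizer state all add overhead on top:

```python
# Back-of-envelope estimate of the memory needed just to store a model's
# weights. Activation and KV-cache overhead are deliberately ignored.

def weight_memory_gb(params_billions: float, bytes_per_param: float) -> float:
    """Approximate weight footprint in GB (1 GB = 1e9 bytes)."""
    return params_billions * 1e9 * bytes_per_param / 1e9

# A 200B-parameter model fits in 128GB of unified memory only when the
# weights are quantized to roughly 4 bits (0.5 bytes) per parameter:
print(weight_memory_gb(200, 0.5))  # 100.0 GB -- fits in 128GB
print(weight_memory_gb(200, 2.0))  # 400.0 GB -- FP16 weights would not fit
```

The same arithmetic explains the lower 70-billion-parameter ceiling for fine-tuning, which needs headroom beyond the weights themselves.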

According to Nvidia, users can customize Black Forest Labs’ Flux.1 models for image generation, build vision search and summarization agents using Nvidia’s Cosmos Reason vision language model, or create chatbots using the Qwen3 model optimized for the DGX Spark platform.

Big memory in a tiny box

Nvidia has squeezed a lot into a 2.65-pound box that measures 5.91 x 5.91 x 1.99 inches and uses 240 watts of power. The system runs on Nvidia’s GB10 Grace Blackwell Superchip, includes ConnectX-7 200Gb/s networking, and uses NVLink-C2C technology that provides five times the bandwidth of PCIe Gen 5. It also includes the aforementioned 128GB of unified memory that is shared between system and GPU tasks.

Nvidia sells tiny new computer that puts big AI on your desktop Read More »