iFixit has published teardown views for the iPhone 16 and iPhone 16 Pro, along with their larger cousins, the Plus and Pro Max.
The videos are really marketing for iFixit’s various repair kits, tools, and other products, and they sometimes include lengthy plugs for one new offering or another. Nonetheless, they almost always contain interesting insights about a device’s components.
Tearing down the iPhone 16, iFixit confirmed one thing we already suspected: One of the mmWave antennas was removed, with the Camera Control button taking its place. iFixit also found that the camera systems in the 16 Pro and 16 Pro Max are almost interchangeable but sadly aren’t, because of the placement of a single screw and the length of a single cable.
The disassembly process for the Pro phones is mostly the same as before, but thankfully, there’s been a redesign that reduces the risk of damaging the OLED panel when tearing the phone down.
The biggest discovery was that the iPhone 16 and iPhone 16 Plus have a superior battery replacement process compared to earlier phones. Instead of pull tabs, they use an adhesive that releases its grip when an electric current is applied.
iFixit says this is one of the easiest battery removal processes in the industry, which is high praise, especially directed at Apple, a company with a difficult record on that front.
iFixit’s iPhone 16 Pro and 16 Pro Max teardown.
Unfortunately, the 16 Pro and 16 Pro Max haven’t moved to the new battery replacement process found in the 16 and 16 Plus. On the bright side, it’s much easier to service the USB-C port than before, though Apple doesn’t sell that part separately.
iFixit gave all the new iPhones a 7 out of 10 repairability score, which is historically high for an iPhone.
The videos go into much more detail, so check them out.
Google wound down its defense in the US Department of Justice’s ad tech monopoly trial this week, following a week of testimony from witnesses that experts said seemed to lack credibility.
The tech giant started its defense by showing a widely mocked chart that Google executive Scott Sheffer called a “spaghetti football.” The chart was supposed to depict a fluid industry thriving thanks to Google’s ad tech platform, but it mostly just confused everyone and may even have helped debunk Google’s case, Open Markets Institute policy analyst Karina Montoya reported.
“The effect of this image might have backfired as it also made it evident that Google is ubiquitous in digital advertising,” Montoya reported. “During DOJ’s cross-examination, the spaghetti football was untangled to show only the ad tech products used specifically by publishers and advertisers on the open web.”
One witness, Marco Hardie, Google’s current head of industry, was even removed from the stand, his testimony deemed irrelevant by US District Judge Leonie Brinkema, Big Tech On Trial reported. Another, Google executive Scott Sheffer, gave testimony Brinkema considered “tainted,” Montoya reported. But perhaps the most heated exchange about a witness’ credibility came during the DOJ’s cross-examination of Mark Israel, the key expert that Google is relying on to challenge the DOJ’s market definition.
Google’s case depends largely on Brinkema agreeing that the DOJ’s market definition is too narrow, with an allegedly outdated focus on display ads on the open web, as opposed to a broader market including display ads appearing in apps or on social media. But experts monitoring the trial suggested that Brinkema may end up questioning Israel’s credibility after DOJ lawyer Aaron Teitelbaum’s aggressive cross-examination.
According to Big Tech on Trial, which posted the exchange on X (formerly Twitter), Teitelbaum’s line of questioning came across as a “striking and effective impeachment of Mark Israel’s credibility as a witness.”
During his testimony, Israel told Brinkema that Google’s share of the US display ads market is only 25 percent, minimizing Google’s alleged dominance while emphasizing that Google faced “intense competition” from other Big Tech companies like Amazon, Meta, and TikTok in this broader market, Montoya reported.
On cross-examination, Teitelbaum called Israel out as a “serial ‘expert’ for companies facing antitrust challenges” who “always finds that the companies ‘explained away’ market definition,” Big Tech on Trial posted on X. Teitelbaum even read out quotes from past cases “in which judges described” Israel’s “expert testimony as ‘not credible’ and having ‘misunderstood antitrust law.'”
Israel was also accused by past judges of rendering his opinions “based on false assumptions,” according to USvGoogleAds, a site run by the digital advertising watchdog Check My Ads with ad industry partners. And specifically for the Google ad tech case, Teitelbaum noted that Israel omitted ad spend data to seemingly manipulate one of his charts.
“Not a good look,” the watchdog’s site opined.
Perhaps most damaging, Teitelbaum asked Israel to confirm that “80 percent of his income comes from doing this sort of expert testimony,” suggesting that Israel seemingly depended on being paid by companies like JetBlue and Kroger-Albertsons—and even previously by Google during the search monopoly trial—to muddy the waters on market definition. Lee Hepner, an antitrust lawyer with the American Economic Liberties Project, posted on X that the DOJ’s antitrust chief, Jonathan Kanter, has grown wary of serial experts supposedly sowing distrust in the court system.
“Let me say this clearly—this will not end well,” Kanter said during a speech at a competition law conference this month. “Already we see a seeping distrust of expertise by the courts and by law enforcers.”
“Best witnesses money can buy”
In addition to experts and Google staffers backing up Google’s proposed findings of fact and conclusions of law, Google brought in Courtney Caldwell—the CEO of a small business that once received a grant from Google and appears in Google’s marketing materials—to back up claims that a DOJ win could harm small businesses, Big Tech on Trial reported.
Google’s direct examination of Caldwell was “basically just a Google ad,” Big Tech on Trial said, while Check My Ads’ site suggested that Google mostly just called upon “the best witnesses their money can buy, and it still did not get them very far.”
According to Big Tech on Trial, Google is using a “light touch” in its defense, declining to go “pound for pound” to refute the DOJ’s case. Using this approach, Google can seemingly ignore any argument the DOJ raises that doesn’t fit the picture Google wants Brinkema to accept: an ad empire that grew organically rather than one constructed anticompetitively, through mergers and acquisitions intended to shut out rivals.
Where the DOJ wants the judge to see “a Google-only pipeline through the heart of the ad tech stack, denying non-Google rivals the same access,” Google argues that it has only “designed a set of products that work efficiently with each other and attract a valuable customer base.”
Evidence that Brinkema might find hard to ignore includes a 2008 statement from Google’s former president of display advertising, David Rosenblatt, confirming that it would “take an act of god” to get people to switch ad platforms because of extremely high switching costs. Rosenblatt also suggested in a 2009 presentation that Google acquiring DoubleClick for Publishers would make Google’s ad tech like the New York Stock Exchange, putting Google in a position to monitor every ad sale and doing for display ads “what Google did to search.” There’s also a 2010 email where now-YouTube CEO Neal Mohan recommended getting Google ahead in the display ad market by “parking” a rival with “the most traction.”
On Friday, testimony concluded abruptly after the DOJ only called one rebuttal witness, Big Tech on Trial posted on X. Brinkema is expected to hear closing arguments on November 25, Big Tech on Trial reported, and rule in December, Montoya reported.
Most members of Dell’s sales team will no longer have the option to work remotely, starting on Monday, Reuters reported this week, citing an internal memo. The policy applies to salespeople worldwide and is aimed at helping “grow skills,” per the note.
Like the rest of Dell’s workforce, Dell’s salespeople have previously been allowed to work remotely two days per week. A memo, which a Reddit user claims to have posted online (The Register reported that the post “mirrors” one that it viewed separately), says that field sellers aren’t required to go into an office but “should prioritize time spent in person with customers and partners.” The policy doesn’t apply to “remote sales team members,” but Dell said to expect additional unspecified communications regarding remote workers “in the coming weeks.” Bloomberg reported that top sales executives Bill Scannell, Dell’s president of global sales and customer operations, and John Byrne, president of sales and global regions at Dell Tech Select, signed the memo, saying:
… our data showed that sales teams are more productive when onsite.
Dell is viewing mandatory on-site work as a way to maintain its sales team’s culture and drive growth, according to the memo, which mentions things like “real-time feedback” and “dynamic” office energy. Moving forward, remote work will be permitted as an exception, Dell said.
Notably, the letter, which was reportedly sent to workers on Thursday, doesn’t give employees much time for adjustments. The memo acknowledges that workers have built schedules around working from home regularly but doesn’t offer immediate solutions.
In a statement to The Register, a Dell spokesperson confirmed the policy change.
“We continually evolve our business so we’re set up to deliver the best innovation, value and service to our customers and partners,” they said. “That includes more in-person connection to drive market leadership.”
Dell’s RTO push
After permitting full-time remote work in response to the COVID-19 pandemic, in February, Dell started requiring workers to go into the office 39 days per quarter (or about three days per week) or be totally remote. The latter option, however, seemed discouraged, as Dell reportedly told remote workers in March that they were ineligible for promotions. Still, nearly 50 percent of Dell workers chose to stay remote, Business Insider reported in June, citing internal Dell data.
Dell’s return-to-office (RTO) mandates have reportedly been enforced with VPN and badge tracking. Some employees have accused Dell of trying to reduce headcount with RTO policies. Other companies pushing workers back into offices have faced similar accusations, and there’s research showing that at least some companies have used RTO policies to lower headcount while avoiding layoffs. Dell laid off 13,000 people in 2023, and in August it announced plans to lay off an undisclosed additional number of people. The company reportedly has about 120,000 employees.
Dell’s RTO change follows an announcement this week requiring Amazon employees to work on-site five days a week starting next year. Following the announcement, a survey of 2,585 US Amazon employees found that 73 percent of respondents are “considering looking for another job” in response.
“Yes, this is a shift…”
The memo, according to Reddit, acknowledges to workers, “Yes, this is a shift from current expectations.” Dell’s RTO push represents an about-face from the company’s previously stated positions on remote work. In 2022, for example, CEO and founder Michael Dell wrote a blog post saying Dell “found no meaningful differences” between remote and on-site workers, including before the pandemic. Dell COO Jeff Clarke made similar arguments in 2020.
The idea that remote work hinders productivity has been a hot topic of debate, especially as companies grapple with their remote work policies following pandemic restrictions. Dell says that its decision to force sales workers back into offices is backed by data, and its claims of boosted productivity could potentially be true when it comes to this specific Dell division. However, there have also been studies suggesting that return-to-office mandates hurt productivity. For example, a Great Place to Work survey conducted in July 2023 of 4,400 employees concluded that “productivity was lower for both on-site and remote employees when their employer mandated where they work.” Workers with companies allowing employees to choose between remote and on-site work were more likely to give “extra effort,” the survey found.
On Thursday, AI hosting platform Hugging Face surpassed 1 million AI model listings for the first time, marking a milestone in the rapidly expanding field of machine learning. An AI model is a computer program (often using a neural network) trained on data to perform specific tasks or make predictions. The platform, which started as a chatbot app in 2016 before pivoting to become an open source hub for AI models in 2020, now hosts a wide array of tools for developers and researchers.
The machine-learning field represents a far bigger world than just large language models (LLMs) like the kind that power ChatGPT. In a post on X, Hugging Face CEO Clément Delangue wrote about how his company hosts many high-profile AI models, like “Llama, Gemma, Phi, Flux, Mistral, Starcoder, Qwen, Stable diffusion, Grok, Whisper, Olmo, Command, Zephyr, OpenELM, Jamba, Yi,” but also “999,984 others.”
The reason why, Delangue says, stems from customization. “Contrary to the ‘1 model to rule them all’ fallacy,” he wrote, “smaller specialized customized optimized models for your use-case, your domain, your language, your hardware and generally your constraints are better. As a matter of fact, something that few people realize is that there are almost as many models on Hugging Face that are private only to one organization—for companies to build AI privately, specifically for their use-cases.”
A Hugging Face-supplied chart showing the number of AI models added to Hugging Face over time, month to month.
Hugging Face’s transformation into a major AI platform follows the accelerating pace of AI research and development across the tech industry. In just a few years, the number of models hosted on the site has grown dramatically along with interest in the field. On X, Hugging Face product engineer Caleb Fahlgren posted a chart of models created each month on the platform (and a link to other charts), saying, “Models are going exponential month over month and September isn’t even over yet.”
The power of fine-tuning
As hinted by Delangue above, the sheer number of models stems from the platform’s collaborative nature and the practice of fine-tuning existing models for specific tasks. Fine-tuning means taking an existing model and giving it additional training to add new concepts to its neural network and alter how it produces outputs. Developers and researchers from around the world contribute their results, leading to a large ecosystem.
For example, the platform hosts many variations of Meta’s open-weights Llama models that represent different fine-tuned versions of the original base models, each optimized for specific applications.
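The fine-tuning idea described above can be sketched in miniature: keep a pretrained "base" model's weights frozen and train only a small task-specific layer on new data. The toy example below is plain Python for illustration, not Hugging Face's actual API; the "base model," the dataset, and all numbers are invented, and real fine-tuning uses frameworks like PyTorch on far larger networks.

```python
# Toy illustration of fine-tuning: a frozen "base model" produces features,
# and only a small task-specific "head" is trained on new data.

def frozen_features(x):
    """Stand-in for a pretrained base model: these weights never change."""
    return [x, x * x]  # two fixed features derived from the input

def train_head(data, lr=0.01, epochs=2000):
    """Fit a linear head y = w1*f1 + w2*f2 + b by gradient descent on squared error."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, y in data:
            f = frozen_features(x)
            pred = w[0] * f[0] + w[1] * f[1] + b
            err = pred - y
            # Gradient updates touch only the head; the base stays frozen.
            w[0] -= lr * err * f[0]
            w[1] -= lr * err * f[1]
            b -= lr * err
    return w, b

# The "new concept" to learn: y = 2x + 1, shown via a handful of examples.
data = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0), (-1.0, -1.0)]
w, b = train_head(data)
```

Because the base is frozen, only a few parameters are updated, which is why fine-tuned variants are cheap to produce and why a single base model can spawn thousands of specialized descendants.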
Hugging Face’s repository includes models for a wide range of tasks. Browsing its models page shows categories such as image-to-text, visual question answering, and document question answering under the “Multimodal” section. In the “Computer Vision” category, there are sub-categories for depth estimation, object detection, and image generation, among others. Natural language processing tasks like text classification and question answering are also represented, along with audio, tabular, and reinforcement learning (RL) models.
A screenshot of the Hugging Face models page captured on September 26, 2024.
Hugging Face
When sorted for “most downloads,” the Hugging Face models list reveals trends about which AI models people find most useful. At the top, with a massive lead at 163 million downloads, is Audio Spectrogram Transformer from MIT, which classifies audio content like speech, music, and environmental sounds. Following that, with 54.2 million downloads, is BERT from Google, an AI language model that learns to understand English by predicting masked words and sentence relationships, enabling it to assist with various language tasks.
Rounding out the top five AI models are all-MiniLM-L6-v2 (which maps sentences and paragraphs to 384-dimensional dense vector representations, useful for semantic search), Vision Transformer (which processes images as sequences of patches to perform image classification), and OpenAI’s CLIP (which connects images and text, allowing it to classify or describe visual content using natural language).
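The "dense vector representations" produced by models like all-MiniLM-L6-v2 are useful because semantically similar texts land close together in vector space, so search reduces to comparing vectors. Here is a minimal sketch of that comparison; the sentences and their tiny 4-dimensional embeddings are invented for illustration (a real model would output 384 floats per sentence).

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Invented toy embeddings; a real model maps each sentence to 384 floats.
embeddings = {
    "How do I reset my password?": [0.9, 0.1, 0.0, 0.1],
    "Steps to recover a lost account": [0.6, 0.4, 0.2, 0.1],
    "Best pizza toppings": [0.0, 0.1, 0.9, 0.3],
}
query = [0.85, 0.15, 0.05, 0.1]  # pretend embedding of "forgot my login"

# Rank documents by similarity to the query vector: semantic search in one line.
ranked = sorted(embeddings,
                key=lambda s: cosine_similarity(query, embeddings[s]),
                reverse=True)
```

With real embeddings, the password-related sentences would score high against the login query while the pizza sentence would not, even though none of them share exact keywords; that is the practical payoff of dense-vector search.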
No matter what the model or the task, the platform just keeps growing. “Today a new repository (model, dataset or space) is created every 10 seconds on HF,” wrote Delangue. “Ultimately, there’s going to be as many models as code repositories and we’ll be here for it!”
OpenAI hopes to convince the White House to approve a sprawling plan that would place 5-gigawatt AI data centers in different US cities, Bloomberg reports.
The AI company’s CEO, Sam Altman, supposedly pitched the plan after a recent meeting with the Biden administration where stakeholders discussed AI infrastructure needs. Bloomberg reviewed an OpenAI document outlining the plan, reporting that 5 gigawatts “is roughly the equivalent of five nuclear reactors” and warning that each data center will likely require “more energy than is used to power an entire city or about 3 million homes.”
According to OpenAI, the US needs these massive data centers to expand AI capabilities domestically, protect national security, and effectively compete with China. If approved, the data centers would generate “thousands of new jobs,” OpenAI’s document promised, and help cement the US as an AI leader globally.
But the energy demand is so enormous that OpenAI told officials that the “US needs policies that support greater data center capacity,” or else the US could fall behind other countries in AI development, the document said.
Energy executives told Bloomberg that “powering even a single 5-gigawatt data center would be a challenge,” as power projects nationwide are already “facing delays due to long wait times to connect to grids, permitting delays, supply chain issues, and labor shortages.” Most likely, OpenAI’s data centers wouldn’t rely entirely on the grid, though, instead requiring a “mix of new wind and solar farms, battery storage and a connection to the grid,” John Ketchum, CEO of NextEra Energy Inc, told Bloomberg.
That’s a big problem for OpenAI, since one energy executive, Constellation Energy Corp. CEO Joe Dominguez, told Bloomberg that he’s heard OpenAI wants to build five to seven data centers. “As an engineer,” Dominguez said, he doesn’t think OpenAI’s plan is “feasible,” and building at that scale would take too long to address current national security risks as US-China tensions worsen.
OpenAI may be hoping to avoid such delays and jump the line if the White House approves the company’s ambitious data center plan. For now, a person familiar with OpenAI’s plan told Bloomberg that OpenAI is focused on launching a single data center before expanding the project to “various US cities.”
Bloomberg’s report comes after OpenAI’s chief investor, Microsoft, announced a 20-year deal with Constellation to reopen Pennsylvania’s shuttered Three Mile Island nuclear plant to provide a new energy source for data centers powering AI development and other technologies. But even if regulators approve that deal, the resulting energy supply Microsoft could access (roughly 835 megawatts, or 0.835 gigawatts, of generation, enough to power approximately 800,000 homes) would still cover less than a fifth of the 5 gigawatts OpenAI wants for a single data center.
Ketchum told Bloomberg that it’s easier to find a US site for a 1-gigawatt data center, but locating a site for a 5-gigawatt facility would likely be a bigger challenge. Notably, Amazon recently bought a $650 million nuclear-powered data center in Pennsylvania with a 2.5-gigawatt capacity. At the meeting with the Biden administration, OpenAI suggested opening large-scale data centers in Wisconsin, California, Texas, and Pennsylvania, a source familiar with the matter told CNBC.
During that meeting, the Biden administration confirmed that developing large-scale AI data centers is a priority, announcing “a new Task Force on AI Datacenter Infrastructure to coordinate policy across government.” OpenAI seems to be trying to get the task force’s attention early on, outlining in the document that Bloomberg reviewed the national security and economic benefits its data centers could provide for the US.
In a statement to Bloomberg, OpenAI’s spokesperson said that “OpenAI is actively working to strengthen AI infrastructure in the US, which we believe is critical to keeping America at the forefront of global innovation, boosting reindustrialization across the country, and making AI’s benefits accessible to everyone.”
Big Tech companies and AI startups will likely continue pressuring officials to approve data center expansions, as well as new kinds of nuclear reactors, as the global AI boom continues. Goldman Sachs estimated that “data center power demand will grow 160 percent by 2030.” To secure power for its AI, Microsoft has even been training AI to draft the documents needed to win government approvals for nuclear plants that would power AI data centers, according to the tech news site Freethink.
On Tuesday, Stability AI announced that renowned filmmaker James Cameron—of Terminator and Skynet fame—has joined its board of directors. Stability is best known for its pioneering but highly controversial Stable Diffusion series of AI image-synthesis models, first launched in 2022, which can generate images based on text descriptions.
“I’ve spent my career seeking out emerging technologies that push the very boundaries of what’s possible, all in the service of telling incredible stories,” said Cameron in a statement. “I was at the forefront of CGI over three decades ago, and I’ve stayed on the cutting edge since. Now, the intersection of generative AI and CGI image creation is the next wave.”
Cameron is perhaps best known as the director behind blockbusters like Avatar, Titanic, and Aliens, but in AI circles, he may be most relevant for the co-creation of the character Skynet, a fictional AI system that triggers nuclear Armageddon and dominates humanity in the Terminator media franchise. Similar fears of AI taking over the world have since jumped into reality and recently sparked attempts to regulate existential risk from AI systems through measures like SB-1047 in California.
In a 2023 interview with CTV News, Cameron referenced The Terminator‘s release year when asked about AI’s dangers: “I warned you guys in 1984, and you didn’t listen,” he said. “I think the weaponization of AI is the biggest danger. I think that we will get into the equivalent of a nuclear arms race with AI, and if we don’t build it, the other guys are for sure going to build it, and so then it’ll escalate.”
Hollywood goes AI
Of course, Stability AI isn’t building weapons controlled by AI. Instead, Cameron’s interest in cutting-edge filmmaking techniques apparently drew him to the company.
“James Cameron lives in the future and waits for the rest of us to catch up,” said Stability CEO Prem Akkaraju. “Stability AI’s mission is to transform visual media for the next century by giving creators a full stack AI pipeline to bring their ideas to life. We have an unmatched advantage to achieve this goal with a technological and creative visionary like James at the highest levels of our company. This is not only a monumental statement for Stability AI, but the AI industry overall.”
Cameron joins other recent additions to Stability AI’s board, including Sean Parker, former president of Facebook, who serves as executive chairman. Parker called Cameron’s appointment “the start of a new chapter” for the company.
Despite significant protest from actors’ unions last year, elements of Hollywood are seemingly beginning to embrace generative AI over time. Last Wednesday, we covered a deal between Lionsgate and AI video-generation company Runway that will see the creation of a custom AI model for film production use. In March, the Financial Times reported that OpenAI was actively showing off its Sora video synthesis model to studio executives.
Unstable times for Stability AI
Cameron’s appointment to the Stability AI board comes during a tumultuous period for the company. Stability AI has faced a series of challenges this past year, including an ongoing class-action copyright lawsuit, a troubled Stable Diffusion 3 model launch, significant leadership and staff changes, and ongoing financial concerns.
In March, founder and CEO Emad Mostaque resigned, followed by a round of layoffs. This came on the heels of the departure of three key engineers—Robin Rombach, Andreas Blattmann, and Dominik Lorenz, who have since founded Black Forest Labs and released a new open-weights image-synthesis model called Flux, which has begun to take over the r/StableDiffusion community on Reddit.
Despite the issues, Stability AI claims its models are widely used, with Stable Diffusion reportedly surpassing 150 million downloads. The company states that thousands of businesses use its models in their creative workflows.
While Stable Diffusion has indeed spawned a large community of open-weights-AI image enthusiasts online, it has also been a lightning rod for controversy among some artists because Stability originally trained its models on hundreds of millions of images scraped from the Internet without seeking licenses or permission to use them.
Apparently that association is not a concern for Cameron, according to his statement: “The convergence of these two totally different engines of creation [CGI and generative AI] will unlock new ways for artists to tell stories in ways we could have never imagined. Stability AI is poised to lead this transformation.”
California Governor Gavin Newsom at a press conference in San Francisco on September 19, 2024.
Getty Images | Anadolu
California Gov. Gavin Newsom vetoed a bill that would have required makers of web browsers and mobile operating systems to let consumers send opt-out preference signals that could limit businesses’ use of personal information.
The bill approved by the State Legislature last month would have required an opt-out signal “that communicates the consumer’s choice to opt out of the sale and sharing of the consumer’s personal information or to limit the use of the consumer’s sensitive personal information.” It would have made it illegal for a business to offer a web browser or mobile operating system without a setting that lets consumers “send an opt-out preference signal to businesses with which the consumer interacts.”
In a veto message sent to the Legislature Friday, Newsom said he would not sign the bill. Newsom wrote that he shares the “desire to enhance consumer privacy,” noting that he previously signed a bill “requir[ing] the California Privacy Protection Agency to establish an accessible deletion mechanism allowing consumers to request that data brokers delete all of their personal information.”
But Newsom said he is opposed to the new bill’s mandate on operating systems. “I am concerned, however, about placing a mandate on operating system (OS) developers at this time,” the governor wrote. “No major mobile OS incorporates an option for an opt-out signal. By contrast, most Internet browsers either include such an option or, if users choose, they can download a plug-in with the same functionality. To ensure the ongoing usability of mobile devices, it’s best if design questions are first addressed by developers, rather than by regulators. For this reason, I cannot sign this bill.”
Vetoes can be overridden with a two-thirds vote in each chamber. The bill was approved 59–12 in the Assembly and 31–7 in the Senate. But the State Legislature hasn’t overridden a veto in decades.
“Industry worked overtime to squash bill”
The opt-out bill would have built on the California Consumer Privacy Act (CCPA) of 2018 and California Privacy Rights Act of 2020. Google, which recently nixed a plan to turn off tracking cookies by default in Chrome, urged Newsom to veto the bill, reports by Bloomberg and Politico said.
“It’s troubling the power that companies such as Google appear to have over the governor’s office,” said Justin Kloczko, tech and privacy advocate for Consumer Watchdog, a nonprofit group in California. “What the governor didn’t mention is that Google Chrome, Apple Safari and Microsoft Edge don’t offer a global opt-out and they make up nearly 90 percent of the browser market share. That’s what matters. And people don’t want to install plug-ins. Safari, which is the default browser on iPhones, doesn’t even accept a plug-in.”
Consumer Reports Policy Analyst Matt Schwartz said that “industry worked overtime to squash this bill, as it empowered Californians to better protect their privacy, undermining the commercial surveillance business model of these tech companies. We strongly disagree with the idea expressed in the governor’s veto statement that it should be left to operating systems to provide privacy choices for consumers. They’ve shown time and again they won’t meaningfully do so until forced.”
Consumer Reports is one of the groups behind Global Privacy Control (GPC), an opt-out signal that creators hope will become legally binding under the CCPA or other privacy laws. Makers of Global Privacy Control say it is superior to the older Do Not Track (DNT) signal because the California attorney general “determined that the AG could not require businesses to comply with DNT requests because the requests do not clearly convey users’ intent to opt out of the sale of their data.”
“The California AG has determined that businesses must honor two methods of submitting opt-outs. GPC is meant to provide users with an additional option for objecting to the sale of their data, and it functions identically to clicking a ‘Do Not Sell My Personal Information’ link provided by a business,” the GPC website says.
GPC is available on Firefox, Brave, DuckDuckGo, and several other browsers, but not Google’s Chrome, Microsoft’s Edge, and Apple’s Safari. The Do Not Track signal is still an option in Chrome and Edge. Chrome, Edge, and Safari also each have features that limit websites’ ability to track users.
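Under the GPC specification, the signal reaches websites in two forms: an HTTP request header (`Sec-GPC: 1`) and a JavaScript property (`navigator.globalPrivacyControl`). As a rough sketch of what honoring the signal looks like on the server side, a site's backend could check the header like this; the code is framework-agnostic and the plain dict stands in for a real web framework's request object.

```python
def gpc_opt_out_requested(headers):
    """Return True if the request carries a Global Privacy Control signal.

    Per the GPC specification, the request header field is `Sec-GPC`,
    and the only defined opt-out value is the string "1".
    """
    # HTTP header names are case-insensitive, so normalize before checking.
    normalized = {name.lower(): value for name, value in headers.items()}
    return normalized.get("sec-gpc", "").strip() == "1"

# Example: a request from a browser with GPC enabled.
request_headers = {"User-Agent": "ExampleBrowser/1.0", "Sec-GPC": "1"}
```

A site subject to the CCPA would treat a `True` result the same as the user clicking its “Do Not Sell My Personal Information” link, which is exactly the equivalence the GPC site describes above.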
Broadcom is accusing AT&T of trying to “rewind the clock and force” Broadcom “to sell support services for perpetual software licenses… that VMware has discontinued from its product line and to which AT&T has no contractual right to purchase.” The statement comes from legal documents Broadcom filed in response to AT&T’s lawsuit against Broadcom for refusing to renew support for its VMware perpetual licenses [PDF].
On August 29, AT&T filed a lawsuit [PDF] against Broadcom, alleging that Broadcom is breaking a contract by refusing to provide a one-year renewal of support for perpetually licensed VMware software. Broadcom famously ended perpetual VMware license sales shortly after closing its acquisition, in favor of a subscription model built around a couple of product bundles rather than many SKUs.
AT&T claims its VMware contract (forged before Broadcom’s acquisition closed in November) entitles it to three one-year renewals of perpetual license support, and it’s currently trying to exercise the second one. AT&T says it uses VMware products to run 75,000 virtual machines (VMs) across about 8,600 servers, supporting customer service operations and operations management. AT&T is asking the Supreme Court of the State of New York to stop Broadcom from ending VMware support services for AT&T and for “further relief” as deemed necessary.
On September 20, Broadcom filed a response asking the court to deny AT&T’s motion. Its defense includes its previously stated position that VMware was already moving toward a subscription model before Broadcom bought it. The transition from perpetual licenses to subscriptions was years in the making and, thus, something for which AT&T should have prepared, according to Broadcom. Broadcom claims that AT&T has admitted that it intends to migrate away from VMware software and that AT&T could have spent “the last several months or even years” doing so.
The filing argues: “AT&T resorts to sensationalism by accusing Broadcom of using ‘bullying tactics’ and ‘price gouging.’ Such attacks are intended to generate press and distract the Court from a much simpler story.”
Broadcom claims the simple story is that:
… the agreement contains an unambiguous “End of Availability” provision, which gives VMware the right to retire products and services at any time upon notice. What’s more, a year ago, AT&T opted not to purchase the very Support Services it now asks the Court to force VMware to provide. AT&T did so despite knowing Defendants were implementing a long planned and well-known business model transition and would soon no longer be selling the Support Services in question.
Broadcom says it has been negotiating with AT&T “for months” about a new contract, but the plaintiff “rejected every proposal despite favorable pricing.”
Broadcom’s filing also questions AT&T’s request for a mandatory injunction, claiming that New York only grants those in “rare circumstances,” which allegedly don’t apply here.
AT&T has options, Broadcom says
AT&T’s lawsuit claims losing VMware support will cause extreme harm to itself and beyond. The lawsuit says that 22,000 of AT&T’s VMware VMs are used for support “of services to millions of police officers, firefighters, paramedics, emergency workers, and incident response team members nationwide… for use in connection with matters of public safety and/or national security.” It also claimed that communications for the Office of the President are at risk without VMware’s continued support.
However, Broadcom claims that AT&T has other choices, saying:
AT&T does have other options and, therefore, the most it can obtain is monetary damages. The fact that AT&T has been given more than eight-months’ notice and has in the meantime failed to take any measures to prevent its purported harm (e.g., buy a subscription for the new offerings or move to another solution) is telling and precludes any finding of irreparable harm. Even if AT&T thinks it deserves better pricing, it could have avoided its purported irreparable harm by entering in a subscription based deal and suing for monetary damages instead of injunctive relief.
AT&T previously declined to answer Ars Technica’s questions about its backup plans for supporting such important customers should it lose VMware support.
Broadcom has rubbed some customers the wrong way
Broadcom closed its VMware acquisition in November and quickly made dramatic changes. In addition to Broadcom’s reputation for overhauling companies after buying them, moves like ending perpetual licenses, taking VMware’s biggest customers directly instead of using channel partners, and raising costs by bundling products and issuing higher CPU core requirements have led customers and partners to reconsider working with the company. Migrating from VMware can be extremely challenging and expensive due to its deep integration into some IT environments, but many are investigating migration, and some expect Broadcom to face years of backlash.
As NAND Research founder and analyst Steve McDowell told TechTarget about this case:
It’s very unusual for customers to sue their vendors. I think Broadcom grossly underestimated how passionate the customer base is, [but] it’s a captive audience.
As this lawsuit demonstrates, Broadcom’s stewardship of VMware has raised serious customer concerns about ongoing support. Companies like Spinnaker Support are trying to capitalize by offering third-party support services.
Martin Biggs, VP and managing director of EMEA and strategic initiatives at Spinnaker, told Ars Technica that his company provides support so customers can spend time determining their next move, whether that’s buying into a VMware subscription or moving on:
VMware customers are looking for options; the vast majority that we have spoken to don’t have a clear view yet of where they want to go, but in all cases the option of staying with VMware for the significantly increased fees is simply untenable. The challenge many have is that not paying fees means not getting support or security on their existing investment.
VMware’s support for AT&T was supposed to end on September 8, but the two companies entered an agreement to continue support until October 9. A hearing on a preliminary injunction is scheduled for October 15.
The Sarmat missile silo seen before last week’s launch attempt. (Image: Maxar Technologies)
A closer view of the Sarmat missile silo before last week’s launch attempt. (Image: Maxar Technologies)
Fire trucks surround the Sarmat missile silo in this view from space on Saturday, September 21. (Image: Maxar Technologies)
Late last week, Russia’s military planned to launch a Sarmat intercontinental ballistic missile (ICBM) on a test flight from the Plesetsk Cosmodrome. Imagery from commercial satellites captured over the weekend suggests the missile exploded before or during launch.
This is at least the second time an RS-28 Sarmat missile has failed in less than two years, dealing a blow to the country’s nuclear forces days after the head of the Russian legislature issued a veiled threat to use the missile against Europe if Western allies approved Ukraine’s use of long-range weapons against Russia.
Commercial satellite imagery collected by Maxar and Planet shows before-and-after views of the Sarmat missile silo at Plesetsk, a military base about 500 miles (800 kilometers) north of Moscow. The view from one of Maxar’s imaging satellites on Saturday revealed unmistakable damage at the launch site, with a large crater centered on the opening to the underground silo.
The crater is roughly 200 feet (62 meters) wide, according to George Barros, a Russia and geospatial intelligence analyst at the Institute for the Study of War. “Extensive damage in and around the launch pad can be seen which suggests that the missile exploded shortly after ignition or launch,” Barros wrote on X.
“Additionally, small fires continue to burn in the forest to the east of the launch complex and four fire trucks can be seen near the destroyed silo,” Barros added.
An RS-28 Sarmat missile fires out of its underground silo on its first full-scale test flight in April 2022. (Image: Russian Ministry of Defense)
The Sarmat missile is Russia’s largest ICBM, with a height of 115 feet (35 meters). It is capable of delivering nuclear warheads to targets more than 11,000 miles (18,000 kilometers) away, making it the longest-range missile in the world. The three-stage missile burns hypergolic hydrazine and nitrogen tetroxide propellants, and is built by the Makeyev Design Bureau. The Sarmat, sometimes called the Satan II, replaces Russia’s long-range R-36M missile developed during the Cold War.
“According to Russian media, Sarmat can reportedly load up to 10 large warheads, 16 smaller ones, a combination of warheads and countermeasures, or hypersonic boost-glide vehicle,” the Center for Strategic and International Studies writes on its website.
The secret is out
Western analysts still don’t know exactly when the explosion occurred. Russia issued warnings last week for pilots to keep out of airspace along the flight path of a planned missile launch from the Plesetsk Cosmodrome. Russia published similar notices before previous Sarmat missile tests, alerting observers that another Sarmat launch was imminent. The warnings were canceled Thursday, two days before satellite imagery showed the destruction at the launch site.
“It is possible that the launch attempt was undertaken on September 19th, with fires persisting for more than 24 hours,” wrote Pavel Podvig, a senior researcher at the United Nations Institute for Disarmament Research in Geneva, on his Russian Nuclear Forces blog site. “Another possibility is that the test was scrubbed on the 19th and the incident happened during the subsequent defueling of the missile. The character of destruction suggests that the missile exploded in the silo.”
James Acton, a senior fellow at the Carnegie Endowment for International Peace, wrote on X that the before-and-after imagery of the Sarmat missile silo was “very persuasive that there was a big explosion.”
Are you not entertained? We’ve got a shiny new trailer for Gladiator II.
When the first trailer for Gladiator II dropped in early July, it racked up more than 180 million views in its first 48 hours, so clearly there’s an audience for Ridley Scott’s long-awaited sequel to his 2000 blockbuster Gladiator. And no wonder; as I noted at the time, the film “promises to be just as much of a visual feast, as a new crop of power players (plus a couple of familiar faces) clash over the future of Rome.” We’ve now got a shiny new trailer, and I stand by that initial assessment—especially since this trailer confirms what had previously been hinted about the protagonist’s biological father.
(Some spoilers for Gladiator below.)
Gladiator II centers around Lucius Verus (Paul Mescal), son of Lucilla and former heir to the Roman Empire, given that his father (also named Lucius Verus) was once a co-emperor of Rome. Lucius hasn’t been seen in Rome for 15 years. Instead, he has been living in a small coastal town in Numidia with his wife and child. Like Maximus before him, he is captured by the Roman army and forced to become a gladiator after the death of his family. Per the official premise:
Gladiator II continues the epic saga of power, intrigue, and vengeance set in Ancient Rome. Years after witnessing the death of the revered hero Maximus at the hands of his uncle, Lucius is forced to enter the Colosseum after his home is conquered by the tyrannical Emperors who now lead Rome with an iron fist. With rage in his heart and the future of the Empire at stake, Lucius must look to his past to find strength and honor to return the glory of Rome to its people.
Pedro Pascal plays Marcus Acacius, a Roman general who trained under Maximus and is tasked with conquering North Africa. Although the young Lucius once idolized Maximus, Marcus Acacius will apparently symbolize everything Lucius now hates. Connie Nielsen reprises her Gladiator role as Lucilla, who does not recognize her son when she first sees him fighting in the arena as a gladiator. But she figures it out, since we see her urge Lucius to “take your father’s strength. His name was Maximus, and I see him in you.”
Derek Jacobi also returns as Senator Gracchus, who is opposed to growing corruption in the Roman court. Joseph Quinn and Fred Hechinger play young co-emperors Geta and Caracalla. Denzel Washington rounds out the cast as Macrinus, an arms dealer who keeps a stable of gladiators. Tim McInnerny plays Thraex, Alexander Karim plays Ravi, and Rory McCann plays Tegula.
Gladiator II hits theaters on November 22, 2024, in the US. Internationally, it will premiere on November 15, 2024. Scott recently said that he is already developing a third film, Gladiator III, which would also star Mescal as Lucius. So we already know Lucius will survive, which might be why Scott has compared the ending of this film to The Godfather: Part II (1974).
If you were looking for a reason to keep a flamethrower around the house, you may have just found one.
This week, the Los Angeles County health department reported that two people were infected with a raccoon parasite that causes severe, frequently fatal, infections of the eyes, organs, and central nervous system. Those who survive are often left with severe neurological outcomes, including blindness, paralysis, loss of coordination, seizures, cognitive impairments, and brain atrophy.
The parasitic roundworm behind the infection, called Baylisascaris procyonis, spreads via eggs in raccoon feces. Adult worms live in the intestines of the masked trash scavengers, and each female worm can produce nearly 200,000 eggs per day. Once in the environment, those eggs can remain infectious for years. They can survive drying out as well as most chemical treatments and disinfectants, including bleach.
Humans get infected if they inadvertently eat soil or other material that has become contaminated with egg-laden feces. Though infections are rare—there were 29 documented cases between 1973 and 2015—younger children and people with developmental disabilities are most at risk.
For instance, an 18-month-old boy with Down syndrome in Illinois died from the infection after he chewed and sucked on pieces of contaminated firewood bark. An autopsy later found three worm larvae per gram of his brain tissue, with a total estimated burden of 3,027 parasitic larvae, according to a 2016 report.
Burn it down
In a news release this week, the LA health department said the risk to the general public is “low” but that the two cases are “concerning because a large number of raccoons live near people, and the infection rate in raccoons is likely high. The confirmed cases of this rare infection are an important reminder for all Los Angeles County residents to take precautions to prevent the spread of disease from animals to people, also known as zoonotic disease.”
According to the Centers for Disease Control and Prevention, one of the best prevention methods for raccoon roundworms is to kill them with fire. While chemicals stand little chance of killing off infectious eggs, extreme heat destroys them instantly.
If you have raccoons around your property, you might need to employ this method. Raccoons tend to poop in communal, pungent latrines, which are often at the base of trees, on raised surfaces—such as tree stumps, woodpiles, decks, and patios—as well as in attics and garages.
If you suspect you have an outdoor raccoon latrine on your property, the CDC recommends dousing the area in boiling water or setting it ablaze. While the CDC recommends a propane torch, specifically, a personal flamethrower could also do the trick. The agency does caution that flaming a latrine site “could cause a fire, burn injury, or surface damage.”
“Before flaming any latrine site, call your local fire department for details on local regulations and safety practices,” the CDC says. “Concrete pads, bricks, and metal shovels or garden implements can be flamed without damage. Do not attempt to flame surfaces that can melt or catch fire.”
For indoor latrines, the CDC advises not to use fire. Instead, it outlines a cautious cleaning method with hot, soapy water. However, if you want, any removed feces or contaminated material can be flamed outside, if not buried or put in the trash.
As the US Department of Justice aims to break up Google’s alleged ad tech monopoly, experts say that remedies sought in the antitrust trial could potentially benefit not just advertisers and publishers but also everyone targeted by ads online.
So far, the DOJ has argued that through acquisitions, Google allegedly monopolizes the ad server market, taking a substantial cut of every online ad sale by tying together products on the buyer and seller sides. By locking publishers into its seller-side platform to access its large pool of advertiser demand, Google also allegedly shut out rivals, cornering advertisers and making it hard for publishers to switch platforms.
This scheme also allegedly set Google up to charge higher “monopoly” fees, the DOJ argued, allegedly putting some publishers out of business and raising costs for advertisers.
But while the harms to publishers and advertisers have been outlined at length, there’s been less talk about the seemingly major consequences for consumers perhaps harmed by the alleged monopoly. Those harms include higher costs of goods, less privacy, and increasingly lower-quality ads that frequently bombard their screens with products nobody wants.
By overcharging by as much as 5 or 10 percent for online ads, Google allegedly placed a “Google tax” on the price of “everyday goods we buy,” Tech Oversight’s Sacha Haworth explained during a press briefing Thursday, where experts closely monitoring the trial shared insights.
“When it comes to lowering costs on families,” Haworth said, “Google has overcharged advertisers and publishers by nearly $2 billion. That’s just over the last four years. That has inflated the price of ads, it’s increased the cost of doing business, and, of course, these costs get passed down to us when we buy things online.”
But while it’s unclear if destroying Google’s alleged monopoly would pass on any savings to consumers, Elise Phillips, policy counsel focused on competition and privacy for Public Knowledge, outlined other benefits in the event of a DOJ win.
She suggested that Google’s conduct has diminished innovation, which has “negatively” affected “the quality diversity and even relevancy of the advertisements that consumers tend to see.”
Were Google’s ad tech to be broken up and behavioral remedies sought, more competition might mean that consumers have more control over how their personal data is used in targeted advertising, Phillips suggested, and ultimately, lead to a future where everyone gets fed higher-quality ads.
That could happen if, instead of Google’s ad model dominating the Internet, less invasive ad targeting models could become more widely adopted, experts suggested. That could enhance privacy and make online ads less terrible after The New York Times declared a “junk ad epidemic” last year.
The thinking goes that if small businesses and publishers benefited from potentially reduced costs, increased revenues, and more options, consumers might start seeing a wider, higher-quality range of ads online, experts suggested.
Better ad models “are already out there,” Open Markets Institute policy analyst Karina Montoya said, such as “contextual advertising” that uses signals that, unlike Google’s targeting, don’t rely on “gigantic, massive data sets that collect every single thing that we do in all of our devices and that don’t ask for our consent.”
But any emerging ad models are seemingly “crushed and flattened by this current dominant business model that’s really arising” from Google’s tight grip on the ad tech markets that the DOJ is targeting, Montoya said. Those include markets “for publisher ad servers, advertiser ad networks, and the ad exchanges that connect the two,” Reuters reported.
At the furthest extreme, loosening Google’s grip on the online ad industry could even “revolutionize the Internet,” Haworth suggested.
One theory posits that if publishers’ revenues increased, consumers would also benefit from more information potentially becoming available on the open web—as less content potentially gets stuck behind paywalls as desperate publishers seek ways to make up for lost ad revenue.
Montoya—who also is a reporter for the Center for Journalism & Liberty, which monitors how media outlets can thrive in today’s digital economy—noted that publishers depending on reader funding through subscriptions or donations is not sustainable if society wants to “have an open and free market where everybody can access information that they deserve and have a right to access.” By reducing Google’s control, the DOJ argues, publishers would be more financially stable, and Montoya hopes the public is starting to understand how that could benefit the open web.
“The trial is really allowing the public to see a full display of Google’s pattern of retaliatory behavior, really just to protect its monopoly power,” Montoya said. “This idea that innovation and ways to monetize journalistic content has to come only from Google is wrong and this is really their defense.”