Biz & IT


AI search engine accused of plagiarism announces publisher revenue-sharing plan

Beg, borrow, or license —

Perplexity says WordPress.com, TIME, Der Spiegel, and Fortune have already signed up.


On Tuesday, AI-powered search engine Perplexity unveiled a new revenue-sharing program for publishers, marking a significant shift in its approach to third-party content use, reports CNBC. The move comes after plagiarism allegations from major media outlets, including Forbes, Wired, and Ars parent company Condé Nast. Perplexity, valued at over $1 billion, aims to compete with search giant Google.

“To further support the vital work of media organizations and online creators, we need to ensure publishers can thrive as Perplexity grows,” writes the company in a blog post announcing the program. “That’s why we’re excited to announce the Perplexity Publishers Program and our first batch of partners: TIME, Der Spiegel, Fortune, Entrepreneur, The Texas Tribune, and WordPress.com.”

Under the program, Perplexity will share a percentage of ad revenue with publishers when their content is cited in AI-generated answers. The revenue share applies on a per-article basis and potentially multiplies if articles from a single publisher are used in one response. Some content providers, such as WordPress.com, plan to pass some of that revenue on to content creators.

A press release from WordPress.com states that joining Perplexity’s Publishers Program allows WordPress.com content to appear in Perplexity’s “Keep Exploring” section on its Discover pages. “That means your articles will be included in their search index and your articles can be surfaced as an answer on their answer engine and Discover feed,” the company writes. “If your website is referenced in a Perplexity search result where the company earns advertising revenue, you’ll be eligible for revenue share.”

A screenshot of the Perplexity.ai website taken on July 30, 2024.

Benj Edwards

Dmitry Shevelenko, Perplexity’s chief business officer, told CNBC that the company began discussions with publishers in January, with program details solidified in early 2024. He reported strong initial interest, with over a dozen publishers reaching out within hours of the announcement.

As part of the program, publishers will also receive access to Perplexity APIs that can be used to create custom “answer engines,” as well as “Enterprise Pro” accounts that provide “enhanced data privacy and security capabilities” for all of a publisher’s employees for one year.

Accusations of plagiarism

The revenue-sharing announcement follows a rocky month for the AI startup. In mid-June, Forbes reported finding its content within Perplexity’s Pages tool with minimal attribution. Pages allows Perplexity users to curate content and share it with others. Ars Technica sister publication Wired later made similar claims, also noting suspicious traffic patterns from IP addresses likely linked to Perplexity that were ignoring robots.txt exclusions. Perplexity was also found to be manipulating its crawler’s user-agent ID string to get around website blocks.
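For context, robots.txt is the voluntary mechanism publishers use to opt crawlers out of their sites. A sketch of the kind of rules involved (PerplexityBot and GPTBot are those crawlers’ published user-agent names; the file itself is illustrative, not any specific publisher’s):

```text
# Illustrative robots.txt rules of the kind publishers use to exclude
# AI crawlers. Compliance is voluntary: a bot that spoofs its
# user-agent string or simply ignores the file is not actually blocked.
User-agent: PerplexityBot
Disallow: /

User-agent: GPTBot
Disallow: /
```

Nothing enforces these rules, which is why crawlers that ignore them or report a different user-agent string draw such criticism.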

As part of company policy, Ars Technica parent Condé Nast disallows AI-based content scrapers, and its CEO Roger Lynch testified in the US Senate earlier this year that generative AI has been built with “stolen goods.” Condé sent a cease-and-desist letter to Perplexity earlier this month.

But publisher trouble might not be Perplexity’s only problem. In some tests of the search engine we performed in February, Perplexity badly confabulated certain answers, even when citations were readily available. Since our initial tests, the accuracy of Perplexity’s results seems to have improved, but providing inaccurate answers (which also plagued Google’s AI Overviews search feature) is still a potential issue.

Compared to the free tier of service, Perplexity users who pay $20 per month can access more capable LLMs such as GPT-4o and Claude 3, so the quality and accuracy of the output can vary dramatically depending on whether a user subscribes or not. The addition of citations to every Perplexity answer allows users to check accuracy—if they take the time to do it.

The move by Perplexity occurs against a backdrop of tensions between AI companies and content creators. Some media outlets, such as The New York Times, have filed lawsuits against AI vendors like OpenAI and Microsoft, alleging copyright infringement in the training of large language models. OpenAI has struck media licensing deals with many publishers as a way to secure access to high-quality training data and avoid future lawsuits.

In this case, Perplexity is not using the licensed articles and content to train AI models but is seeking legal permission to reproduce content from publishers on its website.



Hackers exploit VMware vulnerability that gives them hypervisor admin

AUTHENTICATION NOT REQUIRED —

Create a new group called “ESX Admins” and ESXi automatically gives it admin rights.

Getty Images

Microsoft is urging users of VMware’s ESXi hypervisor to take immediate action to ward off ongoing attacks by ransomware groups that give them full administrative control of the servers the product runs on.

The vulnerability, tracked as CVE-2024-37085, allows attackers who have already gained limited system rights on a targeted server to gain full administrative control of the ESXi hypervisor. Attackers affiliated with multiple ransomware syndicates—including Storm-0506, Storm-1175, Octo Tempest, and Manatee Tempest—have been exploiting the flaw for months in numerous post-compromise attacks, meaning after the limited access has already been gained through other means.

Admin rights assigned by default

Full administrative control of the hypervisor gives attackers various capabilities, including encrypting the file system and taking down the servers they host. The hypervisor control can also allow attackers to access hosted virtual machines to either exfiltrate data or expand their foothold inside a network. Microsoft discovered the vulnerability under exploit in the normal course of investigating the attacks and reported it to VMware. VMware parent company Broadcom patched the vulnerability on Thursday.

“Microsoft security researchers identified a new post-compromise technique utilized by ransomware operators like Storm-0506, Storm-1175, Octo Tempest, and Manatee Tempest in numerous attacks,” members of the Microsoft Threat Intelligence team wrote Monday. “In several cases, the use of this technique has led to Akira and Black Basta ransomware deployments.”

The post went on to document an astonishing discovery: escalating hypervisor privileges on ESXi to unrestricted admin was as simple as creating a new domain group named “ESX Admins.” From then on, any user assigned to the group—including newly created ones—automatically became admin, with no authentication necessary. As the Microsoft post explained:

Further analysis of the vulnerability revealed that VMware ESXi hypervisors joined to an Active Directory domain consider any member of a domain group named “ESX Admins” to have full administrative access by default. This group is not a built-in group in Active Directory and does not exist by default. ESXi hypervisors do not validate that such a group exists when the server is joined to a domain and still treats any members of a group with this name with full administrative access, even if the group did not originally exist. Additionally, the membership in the group is determined by name and not by security identifier (SID).

Creating the new domain group can be accomplished with just two commands:

  • net group "ESX Admins" /domain /add
  • net group "ESX Admins" username /domain /add
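The underlying flaw—authorizing by group name rather than by security identifier (SID)—can be sketched in a few lines of Python. This is a hypothetical simplification for illustration, not VMware’s actual code; the group structure and SID values are made-up placeholders:

```python
# Hypothetical simplification of the flawed check Microsoft described:
# trusting any domain group *named* "ESX Admins" rather than a group
# pinned by its SID. Names are attacker-creatable; SIDs are assigned by
# the domain and cannot be chosen by an ordinary user.

from dataclasses import dataclass, field

@dataclass
class DomainGroup:
    name: str
    sid: str                       # unique identifier assigned at creation
    members: set = field(default_factory=set)

def is_admin_by_name(user: str, groups: list) -> bool:
    """Flawed: trusts any group that happens to carry the magic name."""
    return any(g.name == "ESX Admins" and user in g.members for g in groups)

def is_admin_by_sid(user: str, groups: list, trusted_sid: str) -> bool:
    """Safer: trusts only the one group pinned by its SID."""
    return any(g.sid == trusted_sid and user in g.members for g in groups)

# An attacker with ordinary domain rights creates a brand-new group with
# the magic name and adds themselves to it (the two "net group" commands).
attacker_group = DomainGroup(name="ESX Admins",
                             sid="S-1-5-21-FAKE-9999",   # placeholder SID
                             members={"attacker"})
groups = [attacker_group]

print(is_admin_by_name("attacker", groups))                    # True: exploited
print(is_admin_by_sid("attacker", groups, "S-1-5-21-FAKE-1105"))  # False: rejected
```

The name-based check grants the attacker admin; the SID-based check rejects the newly created group because its identifier does not match the trusted one.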

The researchers said that over the past year, ransomware actors have increasingly targeted ESXi hypervisors in attacks that allow them to mass encrypt data with only a “few clicks” required. Because encrypting the hypervisor file system also encrypts every virtual machine hosted on it, a single compromise affects all of them at once. The researchers also said that many security products have limited visibility into and little protection of the ESXi hypervisor.

The ease of exploitation, coupled with the medium severity rating VMware assigned to the vulnerability, a 6.8 out of a possible 10, prompted criticism from some experienced security professionals.

ESXi is a Type 1 hypervisor, also known as a bare-metal hypervisor, meaning it’s an operating system unto itself that’s installed directly on top of a physical server. Unlike Type 2 hypervisors, Type 1 hypervisors don’t run on top of an operating system such as Windows or Linux. Guest operating systems then run on top. Taking control of the ESXi hypervisor gives attackers enormous power.

The Microsoft researchers described one attack they observed by the Storm-0506 threat group to install ransomware known as Black Basta. As intermediate steps, Storm-0506 installed malware known as Qakbot and exploited a previously fixed Windows vulnerability to facilitate the installation of two hacking tools, one known as Cobalt Strike and the other Mimikatz. The researchers wrote:

Earlier this year, an engineering firm in North America was affected by a Black Basta ransomware deployment by Storm-0506. During this attack, the threat actor used the CVE-2024-37085 vulnerability to gain elevated privileges to the ESXi hypervisors within the organization.

The threat actor gained initial access to the organization via Qakbot infection, followed by the exploitation of a Windows CLFS vulnerability (CVE-2023-28252) to elevate their privileges on affected devices. The threat actor then used Cobalt Strike and Pypykatz (a Python version of Mimikatz) to steal the credentials of two domain administrators and to move laterally to four domain controllers.

On the compromised domain controllers, the threat actor installed persistence mechanisms using custom tools and a SystemBC implant. The actor was also observed attempting to brute force Remote Desktop Protocol (RDP) connections to multiple devices as another method for lateral movement, and then again installing Cobalt Strike and SystemBC. The threat actor then tried to tamper with Microsoft Defender Antivirus using various tools to avoid detection.

Microsoft observed that the threat actor created the “ESX Admins” group in the domain and added a new user account to it. Following these actions, Microsoft observed that this attack resulted in encryption of the ESXi file system and loss of functionality of the hosted virtual machines on the ESXi hypervisor. The actor was also observed to use PsExec to encrypt devices that are not hosted on the ESXi hypervisor. Microsoft Defender Antivirus and automatic attack disruption in Microsoft Defender for Endpoint were able to stop these encryption attempts in devices that had the unified agent for Defender for Endpoint installed.

The attack chain used by Storm-0506.


Microsoft

Anyone with administrative responsibility for ESXi hypervisors should prioritize investigating and patching this vulnerability. The Microsoft post provides several methods for identifying suspicious modifications to the ESX Admins group or other potential signs of this vulnerability being exploited.



From sci-fi to state law: California’s plan to prevent AI catastrophe

Adventures in AI regulation —

Critics say SB-1047, proposed by “AI doomers,” could slow innovation and stifle open source AI.

The California State Capitol Building in Sacramento.

California’s “Safe and Secure Innovation for Frontier Artificial Intelligence Models Act” (a.k.a. SB-1047) has led to a flurry of headlines and debate concerning the overall “safety” of large artificial intelligence models. But critics are concerned that the bill’s overblown focus on existential threats by future AI models could severely limit research and development for more prosaic, non-threatening AI uses today.

SB-1047, introduced by State Senator Scott Wiener, passed the California Senate in May with a 32-1 vote and seems well positioned for a final vote in the State Assembly in August. The text of the bill requires companies behind sufficiently large AI models (currently set at $100 million in training costs and the rough computing power implied by those costs today) to put testing procedures and systems in place to prevent and respond to “safety incidents.”

The bill lays out a legalistic definition of those safety incidents that in turn focuses on defining a set of “critical harms” that an AI system might enable. That includes harms leading to “mass casualties or at least $500 million of damage,” such as “the creation or use of chemical, biological, radiological, or nuclear weapon” (hello, Skynet?) or “precise instructions for conducting a cyberattack… on critical infrastructure.” The bill also alludes to “other grave harms to public safety and security that are of comparable severity” to those laid out explicitly.

An AI model’s creator can’t be held liable for harm caused through the sharing of “publicly accessible” information from outside the model—simply asking an LLM to summarize The Anarchist’s Cookbook probably wouldn’t put it in violation of the law, for instance. Instead, the bill seems most concerned with future AIs that could come up with “novel threats to public safety and security.” More than a human using an AI to brainstorm harmful ideas, SB-1047 focuses on the idea of an AI “autonomously engaging in behavior other than at the request of a user” while acting “with limited human oversight, intervention, or supervision.”

Would California's new bill have stopped WOPR?


To prevent this straight-out-of-science-fiction eventuality, anyone training a sufficiently large model must “implement the capability to promptly enact a full shutdown” and have policies in place for when such a shutdown would be enacted, among other precautions and tests. The bill also focuses at points on AI actions that would require “intent, recklessness, or gross negligence” if performed by a human, suggesting a degree of agency that does not exist in today’s large language models.

Attack of the killer AI?

This kind of language in the bill likely reflects the particular fears of its original drafter, Center for AI Safety (CAIS) co-founder Dan Hendrycks. In a 2023 Time Magazine piece, Hendrycks makes the maximalist existential argument that “evolutionary pressures will likely ingrain AIs with behaviors that promote self-preservation” and lead to “a pathway toward being supplanted as the earth’s dominant species.”

If Hendrycks is right, then legislation like SB-1047 seems like a common-sense precaution—indeed, it might not go far enough. Supporters of the bill, including AI luminaries Geoffrey Hinton and Yoshua Bengio, agree with Hendrycks’ assertion that the bill is a necessary step to prevent potential catastrophic harm from advanced AI systems.

“AI systems beyond a certain level of capability can pose meaningful risks to democracies and public safety,” wrote Bengio in an endorsement of the bill. “Therefore, they should be properly tested and subject to appropriate safety measures. This bill offers a practical approach to accomplishing this, and is a major step toward the requirements that I’ve recommended to legislators.”

“If we see any power-seeking behavior here, it is not of AI systems, but of AI doomers.”

Tech policy expert Dr. Nirit Weiss-Blatt

However, critics argue that AI policy shouldn’t be led by outlandish fears of future systems that resemble science fiction more than current technology. “SB-1047 was originally drafted by non-profit groups that believe in the end of the world by sentient machine, like Dan Hendrycks’ Center for AI Safety,” Daniel Jeffries, a prominent voice in the AI community, told Ars. “You cannot start from this premise and create a sane, sound, ‘light touch’ safety bill.”

“If we see any power-seeking behavior here, it is not of AI systems, but of AI doomers,” added tech policy expert Nirit Weiss-Blatt. “With their fictional fears, they try to pass fictional-led legislation, one that, according to numerous AI experts and open source advocates, could ruin California’s and the US’s technological advantage.”

From sci-fi to state law: California’s plan to prevent AI catastrophe Read More »

hang-out-with-ars-in-san-jose-and-dc-this-fall-for-two-infrastructure-events

Hang out with Ars in San Jose and DC this fall for two infrastructure events

Arsmeet! —

Join us as we talk about the next few years in AI & storage, and what to watch for.

Photograph of servers and racks

Infrastructure!

Howdy, Arsians! Last year, we partnered with IBM to host an in-person event in the Houston area where we all gathered together, had some cocktails, and talked about resiliency and the future of IT. Location always matters for things like this, and so we hosted it at Space Center Houston and had our cocktails amidst cool space artifacts. In addition to learning a bunch of neat stuff, it was awesome to hang out with all the amazing folks who turned up at the event. Much fun was had!

This year, we’re back partnering with IBM again and we’re looking to repeat that success with not one, but two in-person gatherings—each featuring a series of panel discussions with experts and capping off with a happy hour for hanging out and mingling. Where last time we went central, this time we’re going to the coasts—both east and west. Read on for details!

September: San Jose, California

Our first event will be in San Jose on September 18, and it’s titled “Beyond the Buzz: An Infrastructure Future with GenAI and What Comes Next.” The idea will be to explore what generative AI means for the future of data management. The topics we’ll be discussing include:

  • Playing the infrastructure long game to address any kind of workload
  • Identifying infrastructure vulnerabilities with today’s AI tools
  • Infrastructure’s environmental footprint: Navigating impacts and responsibilities

We’re getting our panelists locked down right now, and while I don’t have any names to share, many will be familiar to Ars readers from past events—or from the front page.

As a neat added bonus, we’re going to host the event at the Computer History Museum, which any Bay Area Ars reader can attest is an incredibly cool venue. (Just nobody spill anything. I think they’ll kick us out if we break any exhibits!)

October: Washington, DC

Switching coasts, on October 29 we’ll set up shop in our nation’s capital for a similar show. This time, our event title will be “AI in DC: Privacy, Compliance, and Making Infrastructure Smarter.” Given that we’ll be in DC, the tone shifts a bit to some more policy-centric discussions, and the talk track looks like this:

  • The key to compliance with emerging technologies
  • Data security in the age of AI-assisted cyber-espionage
  • The best infrastructure solution for your AI/ML strategy

Same deal here with the speakers as with the September event—I can’t name names yet, but the list will be familiar to Ars readers and I’m excited. We’re still considering venues, but hoping to find something that matches our previous events in terms of style and coolness.

Interested in attending?

While it’d be awesome if everyone could come, the old song and dance applies: space, as they say, will be limited at both venues. We’d like to make sure local folks in both locations get priority in being able to attend, so we’re asking anyone who wants a ticket to register for the events at the sign-up pages below. You should get an email immediately confirming we’ve received your info, and we’ll send another note in a couple of weeks with further details on timing and attendance.

On the Ars side, at minimum both our EIC Ken Fisher and I will be in attendance at both events, and we’ll likely have some other Ars staff showing up where we can—free drinks are a strong lure for the weary tech journalist, so there ought to be at least a few appearing at both. Hoping to see you all there!



Google claims math breakthrough with proof-solving AI models

slow and steady —

AlphaProof and AlphaGeometry 2 solve problems, with caveats on time and human assistance.

An illustration provided by Google.


On Thursday, Google DeepMind announced that AI systems called AlphaProof and AlphaGeometry 2 reportedly solved four out of six problems from this year’s International Mathematical Olympiad (IMO), achieving a score equivalent to a silver medal. The tech giant claims this marks the first time an AI has reached this level of performance in the prestigious math competition—but as usual in AI, the claims aren’t as clear-cut as they seem.

Google says AlphaProof uses reinforcement learning to prove mathematical statements in the formal language called Lean. The system trains itself by generating and verifying millions of proofs, progressively tackling more difficult problems. Meanwhile, AlphaGeometry 2 is described as an upgraded version of Google’s previous geometry-solving AI model, now powered by a Gemini-based language model trained on significantly more data.
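For a sense of what “proving a statement in Lean” means: a problem is stated as a theorem, and a proof is anything the Lean kernel can mechanically check. A toy example, vastly simpler than any IMO problem (`Nat.add_comm` is a lemma from Lean’s standard library):

```lean
-- Toy example of a machine-checkable statement and proof in Lean 4.
-- Systems like AlphaProof search for proof terms such as the one
-- on the right-hand side, which the Lean kernel then verifies.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```

Because the kernel either accepts or rejects a candidate proof, the system gets an unambiguous training signal for each attempt, which is what makes the reinforcement-learning loop Google describes possible.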

According to Google, prominent mathematicians Sir Timothy Gowers and Dr. Joseph Myers scored the AI model’s solutions using official IMO rules. The company reports its combined system earned 28 out of 42 possible points, just shy of the 29-point gold medal threshold. This included a perfect score on the competition’s hardest problem, which Google claims only five human contestants solved this year.

A math contest unlike any other

The IMO, held annually since 1959, pits elite pre-college mathematicians against exceptionally difficult problems in algebra, combinatorics, geometry, and number theory. Performance on IMO problems has become a recognized benchmark for assessing an AI system’s mathematical reasoning capabilities.

Google states that AlphaProof solved two algebra problems and one number theory problem, while AlphaGeometry 2 tackled the geometry question. The AI model reportedly failed to solve the two combinatorics problems. The company claims its systems solved one problem within minutes, while others took up to three days.

Google says it first translated the IMO problems into formal mathematical language for its AI model to process. This step differs from the official competition, where human contestants work directly with the problem statements during two 4.5-hour sessions.

Google reports that before this year’s competition, AlphaGeometry 2 could solve 83 percent of historical IMO geometry problems from the past 25 years, up from its predecessor’s 53 percent success rate. The company claims the new system solved this year’s geometry problem in 19 seconds after receiving the formalized version.

Limitations

Despite Google’s claims, Sir Timothy Gowers offered a more nuanced perspective on the Google DeepMind models in a thread posted on X. While acknowledging the achievement as “well beyond what automatic theorem provers could do before,” Gowers pointed out several key qualifications.

“The main qualification is that the program needed a lot longer than the human competitors—for some of the problems over 60 hours—and of course much faster processing speed than the poor old human brain,” Gowers wrote. “If the human competitors had been allowed that sort of time per problem they would undoubtedly have scored higher.”

Gowers also noted that humans manually translated the problems into the formal language Lean before the AI model began its work. He emphasized that while the AI performed the core mathematical reasoning, this “autoformalization” step was done by humans.

Regarding the broader implications for mathematical research, Gowers expressed uncertainty. “Are we close to the point where mathematicians are redundant? It’s hard to say. I would guess that we’re still a breakthrough or two short of that,” he wrote. He suggested that the system’s long processing times indicate it hasn’t “solved mathematics” but acknowledged that “there is clearly something interesting going on when it operates.”

Even with these limitations, Gowers speculated that such AI systems could become valuable research tools. “So we might be close to having a program that would enable mathematicians to get answers to a wide range of questions, provided those questions weren’t too difficult—the kind of thing one can do in a couple of hours. That would be massively useful as a research tool, even if it wasn’t itself capable of solving open problems.”



97% of CrowdStrike systems are back online; Microsoft suggests Windows changes

falcon punch —

Kernel access gives security software a lot of power, but not without problems.

A bad update to CrowdStrike's Falcon security software crashed millions of Windows PCs last week.


CrowdStrike

CrowdStrike CEO George Kurtz said Thursday that 97 percent of all Windows systems running its Falcon sensor software were back online, a week after an update-related outage to the corporate security software delayed flights and took down emergency response systems, among many other disruptions. The update, which caused Windows PCs to throw the dreaded Blue Screen of Death and reboot, affected about 8.5 million systems by Microsoft’s count, leaving roughly 250,000 that still need to be brought back online.

Microsoft VP John Cable said in a blog post that the company has “engaged over 5,000 support engineers working 24×7” to help clean up the mess created by CrowdStrike’s update and hinted at Windows changes that could help—if they don’t run afoul of regulators, anyway.

“This incident shows clearly that Windows must prioritize change and innovation in the area of end-to-end resilience,” wrote Cable. “These improvements must go hand in hand with ongoing improvements in security and be in close cooperation with our many partners, who also care deeply about the security of the Windows ecosystem.”

Cable pointed to VBS enclaves and Azure Attestation as examples of products that could keep Windows secure without requiring kernel-level access, as most Windows-based security products (including CrowdStrike’s Falcon sensor) do now. But he stopped short of outlining what specific changes might be made to Windows, saying only that Microsoft would continue to “harden our platform, and do even more to improve the resiliency of the Windows ecosystem, working openly and collaboratively with the broad security community.”

When running in kernel mode rather than user mode, security software has full access to a system’s hardware and software, which makes it more powerful and flexible; this also means that a bad update like CrowdStrike’s can cause a lot more problems.

Recent versions of macOS have deprecated third-party kernel extensions for exactly this reason, one explanation for why Macs weren’t taken down by the CrowdStrike update. But past efforts by Microsoft to lock third-party security companies out of the Windows kernel—most recently in the Windows Vista era—have been met with pushback from European Commission regulators. That level of skepticism is warranted, given Microsoft’s past (and continuing) record of using Windows’ market position to push its own products and services. Any present-day attempt to restrict third-party vendors’ access to the Windows kernel would be likely to draw similar scrutiny.

Microsoft has also had plenty of its own security problems to deal with recently, to the point that it has promised to restructure the company to make security more of a focus.

CrowdStrike’s aftermath

CrowdStrike has made its own promises in the wake of the outage, including more thorough testing of updates and a phased-rollout system that could prevent a bad update file from causing quite as much trouble as the one last week did. The company’s initial incident report pointed to a lapse in its testing procedures as the cause of the problem.

Meanwhile, recovery continues. Some systems could be fixed simply by rebooting—though it sometimes took as many as 15 reboots—which could give systems a chance to grab a new update file before they crashed. For the rest, IT admins were left to either restore them from backups or delete the bad update file manually. Microsoft published a bootable tool that could help automate the process of deleting that file, but it still required laying hands on every single affected Windows install, whether on a virtual machine or a physical system.

And not all of CrowdStrike’s remediation solutions have been well-received. The company sent out $10 UberEats promo codes to cover some of its partners’ “next cup of coffee or late night snack,” which occasioned some eye-rolling on social media sites (the code was also briefly unusable because Uber flagged it as fraudulent, according to a CrowdStrike representative). For context, analytics company Parametrix Insurance estimated the cost of the outage to Fortune 500 companies somewhere in the realm of $5.4 billion.



At the Olympics, AI is watching you

“It’s the eyes of the police multiplied” —

New system foreshadows a future where there are too many CCTV cameras for humans to physically watch.

Police observe the Eiffel Tower from Trocadero ahead of the Paris 2024 Olympic Games on July 22, 2024.

On the eve of the Olympics opening ceremony, Paris is a city swamped in security. Forty thousand barriers divide the French capital. Packs of police officers wearing stab vests patrol pretty, cobbled streets. The river Seine is out of bounds to anyone who has not already been vetted and issued a personal QR code. Khaki-clad soldiers, present since the 2015 terrorist attacks, linger near a canal-side boulangerie, wearing berets and clutching large guns to their chests.

French interior minister Gérald Darmanin has spent the past week justifying these measures as vigilance—not overkill. France is facing the “biggest security challenge any country has ever had to organize in a time of peace,” he told reporters on Tuesday. In an interview with weekly newspaper Le Journal du Dimanche, he explained that “potentially dangerous individuals” have been caught applying to work or volunteer at the Olympics, including 257 radical Islamists, 181 members of the far left, and 95 from the far right. Yesterday, he told French news broadcaster BFM that a Russian citizen had been arrested on suspicion of plotting “large scale” acts of “destabilization” during the Games.

Parisians are still grumbling about road closures and bike lanes that abruptly end without warning, while human rights groups are denouncing “unacceptable risks to fundamental rights.” For the Games, this is nothing new. Complaints about dystopian security are almost an Olympics tradition. Previous iterations have been characterized as Lockdown London, Fortress Tokyo, and the “arms race” in Rio. This time, it is the least-visible security measures that have emerged as some of the most controversial. Security measures in Paris have been turbocharged by a new type of AI, as the city enables controversial algorithms to crawl CCTV footage of transport stations looking for threats. The system was first tested in Paris back in March at two Depeche Mode concerts.

For critics and supporters alike, algorithmic oversight of CCTV footage offers a glimpse of the security systems of the future, where there is simply too much surveillance footage for human operators to physically watch. “The software is an extension of the police,” says Noémie Levain, a member of the activist group La Quadrature du Net, which opposes AI surveillance. “It’s the eyes of the police multiplied.”

Near the entrance of the Porte de Pantin metro station, surveillance cameras are bolted to the ceiling, encased in an easily overlooked gray metal box. A small sign is pinned to the wall above the bin, informing anyone willing to stop and read that they are part of a “video surveillance analysis experiment.” RATP, the company that runs the Paris metro, “is likely” to use “automated analysis in real time” of the CCTV images “in which you can appear,” the sign explains to the oblivious passengers rushing past. The experiment, it says, runs until March 2025.

Porte de Pantin is on the edge of the park La Villette, home to the Olympics’ Park of Nations, where fans can eat or drink in pavilions dedicated to 15 different countries. The Metro stop is also one of 46 train and metro stations where the CCTV algorithms will be deployed during the Olympics, according to an announcement by the Prefecture du Paris, a unit of the interior ministry. City representatives did not reply to WIRED’s questions on whether there are plans to use AI surveillance outside the transport network. Under a March 2023 law, algorithms are allowed to search CCTV footage in real time for eight “events,” including crowd surges, abnormally large groups of people, abandoned objects, weapons, or a person falling to the ground.

“What we’re doing is transforming CCTV cameras into a powerful monitoring tool,” says Matthias Houllier, cofounder of Wintics, one of four French companies that won contracts to have their algorithms deployed at the Olympics. “With thousands of cameras, it’s impossible for police officers [to react to every camera].”

At the Olympics, AI is watching you

openai-hits-google-where-it-hurts-with-new-searchgpt-prototype

OpenAI hits Google where it hurts with new SearchGPT prototype

Cutting through the sludge —

New tool may solve a web-search problem partially caused by AI-generated junk online.

The OpenAI logo on a blue newsprint background.

Benj Edwards / OpenAI

Arguably, few companies have unintentionally contributed more to the increase of AI-generated noise online than OpenAI. Despite its best intentions—and against its terms of service—its AI language models are often used to compose spam, and its pioneering research has inspired others to build AI models that can potentially do the same. This influx of AI-generated content has further reduced the effectiveness of SEO-driven search engines like Google. In 2024, web search is in a sorry state indeed.

It’s interesting, then, that OpenAI is now offering a potential solution to that problem. On Thursday, OpenAI revealed a prototype AI-powered search engine called SearchGPT that aims to provide users with quick, accurate answers sourced from the web. It’s also a direct challenge to Google, which has tried to apply generative AI to web search with little success.

The company says it plans to integrate the most useful aspects of the temporary prototype into ChatGPT in the future. ChatGPT can already perform web searches using Bing, but SearchGPT seems to be a purpose-built interface for AI-assisted web searching.

SearchGPT attempts to streamline the process of finding information online by combining OpenAI’s AI models (like GPT-4o) with real-time web data. Like ChatGPT, users can reportedly ask SearchGPT follow-up questions, with the AI model maintaining context throughout the conversation.

Perhaps most importantly from an accuracy standpoint, the SearchGPT prototype (which we have not tested ourselves) reportedly includes features that attribute web-based sources prominently. Responses include in-line citations and links, while a sidebar displays additional source links.

OpenAI has not yet said how it is obtaining its real-time web data and whether it’s partnering with an existing search engine provider (like it does currently with Bing for ChatGPT) or building its own web-crawling and indexing system.

A way around publishers blocking OpenAI

ChatGPT can already perform web searches using Bing, but since last August when OpenAI revealed a way to block its web crawler, that feature hasn’t been nearly as useful as it could be. Many sites, such as Ars Technica (which blocks the OpenAI crawler as part of our parent company’s policy), won’t show up as results in ChatGPT because of this.

SearchGPT appears to untangle the association between OpenAI’s web crawler for scraping training data and the desire for OpenAI chatbot users to search the web. Notably, in the new SearchGPT announcement, OpenAI says, “Sites can be surfaced in search results even if they opt out of generative AI training.”
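In practice, this kind of separation is usually expressed in a site’s robots.txt, where publishers can block one crawler while admitting another. A hypothetical example of what that could look like (OpenAI documents separate user-agents for training and search crawling; the exact agent names below are assumptions and should be checked against OpenAI’s current crawler documentation):

```
# Opt out of generative AI training
User-agent: GPTBot
Disallow: /

# Still allow the site to be indexed and surfaced in search results
User-agent: OAI-SearchBot
Allow: /
```

With rules like these, a site can stay out of model training data while remaining discoverable through an AI search product.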

Even so, OpenAI says it is working on a way for publishers to manage how they appear in SearchGPT results so that “publishers have more choices.” And the company says that SearchGPT’s ability to browse the web is separate from training OpenAI’s AI models.

An uncertain future for AI-powered search

OpenAI claims SearchGPT will make web searches faster and easier. However, the effectiveness of AI-powered search compared to traditional methods is unknown, as the tech is still in its early stages. But let’s be frank: The most prominent web-search engine right now is pretty terrible.

Over the past year, we’ve seen Perplexity.ai take off as a potential AI-powered Google search replacement, but the service has been hounded by issues with confabulations and accusations of plagiarism among publishers, including Ars Technica parent Condé Nast.

Unlike Perplexity, OpenAI has many content deals lined up with publishers, and it emphasizes that it wants to work with content creators in particular. “We are committed to a thriving ecosystem of publishers and creators,” says OpenAI in its news release. “We hope to help users discover publisher sites and experiences, while bringing more choice to search.”

In a statement for the OpenAI press release, Nicholas Thompson, CEO of The Atlantic (which has a content deal with OpenAI), expressed optimism about the potential of AI search: “AI search is going to become one of the key ways that people navigate the internet, and it’s crucial, in these early days, that the technology is built in a way that values, respects, and protects journalism and publishers,” he said. “We look forward to partnering with OpenAI in the process, and creating a new way for readers to discover The Atlantic.”

OpenAI has experimented with other offshoots of its AI language model technology that haven’t become blockbuster hits (most notably, GPTs come to mind), so time will tell if the techniques behind SearchGPT have staying power—and if it can deliver accurate results without hallucinating. But the current state of web search is inviting new experiments to separate the signal from the noise, and it looks like OpenAI is throwing its hat in the ring.

OpenAI is currently rolling out SearchGPT to a small group of users and publishers for testing and feedback. Those interested in trying the prototype can sign up for a waitlist on the company’s website.


chrome-will-now-prompt-some-users-to-send-passwords-for-suspicious-files

Chrome will now prompt some users to send passwords for suspicious files

SAFE BROWSING —

Google says passwords and files will be deleted shortly after they are deep-scanned.

Chrome will now prompt some users to send passwords for suspicious files

Google is redesigning Chrome malware detections to include password-protected executable files that users can upload for deep scanning, a change the browser maker says will allow it to detect more malicious threats.

Google has long allowed users to switch on the Enhanced Mode of its Safe Browsing, a Chrome feature that warns users when they’re downloading a file that’s believed to be unsafe, either because of suspicious characteristics or because it’s in a list of known malware. With Enhanced Mode turned on, Google will prompt users to upload suspicious files that aren’t allowed or blocked by its detection engine. Under the new changes, Google will prompt these users to provide any password needed to open the file.

Beware of password-protected archives

In a post published Wednesday, Jasika Bawa, Lily Chen, and Daniel Rubery of the Chrome Security team wrote:

Not all deep scans can be conducted automatically. A current trend in cookie theft malware distribution is packaging malicious software in an encrypted archive—a .zip, .7z, or .rar file, protected by a password—which hides file contents from Safe Browsing and other antivirus detection scans. In order to combat this evasion technique, we have introduced two protection mechanisms depending on the mode of Safe Browsing selected by the user in Chrome.

Attackers often make the passwords to encrypted archives available in places like the page from which the file was downloaded, or in the download file name. For Enhanced Protection users, downloads of suspicious encrypted archives will now prompt the user to enter the file’s password and send it along with the file to Safe Browsing so that the file can be opened and a deep scan may be performed. Uploaded files and file passwords are deleted a short time after they’re scanned, and all collected data is only used by Safe Browsing to provide better download protections.

Enter a file password to send an encrypted file for a malware scan


Google

For those who use Standard Protection mode which is the default in Chrome, we still wanted to be able to provide some level of protection. In Standard Protection mode, downloading a suspicious encrypted archive will also trigger a prompt to enter the file’s password, but in this case, both the file and the password stay on the local device and only the metadata of the archive contents are checked with Safe Browsing. As such, in this mode, users are still protected as long as Safe Browsing had previously seen and categorized the malware.

Sending Google an executable casually downloaded from a site advertising a screensaver or media player is likely to generate little if any hesitancy. For more sensitive files such as a password-protected work archive, however, there is likely to be more pushback. Despite the assurances the file and password will be deleted promptly, things sometimes go wrong and aren’t discovered for months or years, if at all. People using Chrome with Enhanced Mode turned on should exercise caution.

A second change Google is making to Safe Browsing is a two-tiered notification system when users are downloading files. They are:

  1. Suspicious files, meaning those Google’s file-vetting engine has given a lower-confidence verdict, with unknown risk of user harm
  2. Dangerous files, or those with a high confidence verdict that they pose a high risk of user harm

The new tiers are highlighted by iconography, color, and text in an attempt to make it easier for users to distinguish between the differing levels of risk. “Overall, these improvements in clarity and consistency have resulted in significant changes in user behavior, including fewer warnings bypassed, warnings heeded more quickly, and all in all, better protection from malicious downloads,” the Google authors wrote.

Previously, Safe Browsing notifications looked like this:

Differentiation between suspicious and dangerous warnings.


Google

Over the past year, Chrome hasn’t budged on its continued support of third-party cookies, a decision that allows companies large and small to track users of that browser as they navigate from website to website to website. Google’s alternative to tracking cookies, known as the Privacy Sandbox, has also received low marks from privacy advocates because it tracks user interests based on their browser usage.

That said, Chrome has long been a leader in introducing protections, such as a security sandbox that cordons off risky code so it can’t mingle with sensitive data and operating system functions. Those who stick with Chrome should at a minimum keep Standard Mode Safe Browsing on. Users with the experience required to judiciously choose which files to send to Google should consider turning on Enhanced Mode.


secure-boot-is-completely-broken-on-200+-models-from-5-big-device-makers

Secure Boot is completely broken on 200+ models from 5 big device makers

Secure Boot is completely broken on 200+ models from 5 big device makers

sasha85ru | Getty Images

In 2012, an industry-wide coalition of hardware and software makers adopted Secure Boot to protect against a long-looming security threat. The threat was the specter of malware that could infect the BIOS, the firmware that loaded the operating system each time a computer booted up. From there, it could remain immune to detection and removal and could load even before the OS and security apps did.

The threat of such BIOS-dwelling malware was largely theoretical and fueled in large part by the creation of ICLord Bioskit by a Chinese researcher in 2007. ICLord was a rootkit, a class of malware that gains and maintains stealthy root access by subverting key protections built into the operating system. The proof of concept demonstrated that such BIOS rootkits weren’t only feasible; they were also powerful. In 2011, the threat became a reality with the discovery of Mebromi, the first-known BIOS rootkit to be used in the wild.

Keenly aware of Mebromi and its potential for a devastating new class of attack, the Secure Boot architects hashed out a complex new way to shore up security in the pre-boot environment. Built into UEFI—the Unified Extensible Firmware Interface that would become the successor to BIOS—Secure Boot used public-key cryptography to block the loading of any code that wasn’t signed with a pre-approved digital signature. To this day, key players in security—among them Microsoft and the US National Security Agency—regard Secure Boot as an important, if not essential, foundation of trust in securing devices in some of the most critical environments, including in industrial control and enterprise networks.
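UEFI stores these trust decisions in signature databases: db holds allowed certificates and image hashes, while dbx holds revoked ones. A heavily simplified sketch of the hash-based path (real firmware also verifies Authenticode signatures chaining to a certificate in db, which is omitted here):

```python
import hashlib

def secure_boot_allows(image: bytes, db: set[str], dbx: set[str]) -> bool:
    """Simplified Secure Boot check for hash-based db/dbx entries:
    a revoked hash always loses; otherwise the image must be allowlisted."""
    digest = hashlib.sha256(image).hexdigest()
    if digest in dbx:        # dbx (forbidden signatures) is consulted first
        return False
    return digest in db      # db (allowed signatures) must contain the hash
```

The certificate-based path works the same way in spirit: any code whose signature doesn’t chain to a trusted entry is refused before the OS ever loads.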

An unlimited Secure Boot bypass

On Thursday, researchers from security firm Binarly revealed that Secure Boot is completely compromised on more than 200 device models sold by Acer, Dell, Gigabyte, Intel, and Supermicro. The cause: a cryptographic key underpinning Secure Boot on those models that was compromised in 2022. In a public GitHub repository committed in December of that year, someone working for multiple US-based device manufacturers published what’s known as a platform key, the cryptographic key that forms the root-of-trust anchor between the hardware device and the firmware that runs on it. The repository was located at https://github.com/raywu-aaeon/Ryzen2000_4000.git, and it’s not clear when it was taken down.

The repository included the private portion of the platform key in encrypted form. The encrypted file, however, was protected by a four-character password, a decision that made it trivial for Binarly, and anyone else with even a passing curiosity, to crack the passcode and retrieve the corresponding plain text. The disclosure of the key went largely unnoticed until January 2023, when Binarly researchers found it while investigating a supply-chain incident. Now that the leak has come to light, security experts say it effectively torpedoes the security assurances offered by Secure Boot.

“It’s a big problem,” said Martin Smolár, a malware analyst specializing in rootkits who reviewed the Binarly research and spoke to me about it. “It’s basically an unlimited Secure Boot bypass for these devices that use this platform key. So until device manufacturers or OEMs provide firmware updates, anyone can basically… execute any malware or untrusted code during system boot. Of course, privileged access is required, but that’s not a problem in many cases.”

Binarly researchers said their scans of firmware images uncovered 215 devices that use the compromised key, which can be identified by the certificate serial number 55:fb:ef:87:81:23:00:84:47:17:0b:b3:cd:87:3a:f4. A table appearing at the end of this article lists each one.

The researchers soon discovered that the compromise of the key was just the beginning of a much bigger supply-chain breakdown that raises serious doubts about the integrity of Secure Boot on more than 300 additional device models from virtually all major device manufacturers. As is the case with the platform key compromised in the 2022 GitHub leak, an additional 21 platform keys contain the strings “DO NOT SHIP” or “DO NOT TRUST.”

Test certificate provided by AMI.


Binarly


crowdstrike-blames-testing-bugs-for-security-update-that-took-down-8.5m-windows-pcs

CrowdStrike blames testing bugs for security update that took down 8.5M Windows PCs

oops —

Company says it’s improving testing processes to avoid a repeat.

CrowdStrike's Falcon security software brought down as many as 8.5 million Windows PCs over the weekend.


CrowdStrike

Security firm CrowdStrike has posted a preliminary post-incident report about the botched update to its Falcon security software that caused as many as 8.5 million Windows PCs to crash over the weekend, delaying flights, disrupting emergency response systems, and generally wreaking havoc.

The detailed post explains exactly what happened: At just after midnight Eastern time, CrowdStrike deployed “a content configuration update” to allow its software to “gather telemetry on possible novel threat techniques.” CrowdStrike says that these Rapid Response Content updates are tested before being deployed, and one of the steps involves checking updates using something called the Content Validator. In this case, “a bug in the Content Validator” failed to detect “problematic content data” in the update responsible for the crashing systems.

CrowdStrike says it is making changes to its testing and deployment processes to prevent something like this from happening again. The company is specifically including “additional validation checks to the Content Validator” and adding more layers of testing to its process.

The biggest change will probably be “a staggered deployment strategy for Rapid Response Content” going forward. In a staggered deployment system, updates are initially released to a small group of PCs, and then availability is slowly expanded once it becomes clear that the update isn’t causing major problems. Microsoft uses a phased rollout for Windows security and feature updates after a couple of major hiccups during the Windows 10 era. To this end, CrowdStrike will “improve monitoring for both sensor and system performance” to help “guide a phased rollout.”
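Deterministic bucketing is one common way to implement such a staggered rollout: hash each machine’s ID into a stable bucket, then raise the eligible percentage stage by stage. A sketch of the general technique (our illustration, not CrowdStrike’s actual mechanism):

```python
import hashlib

def in_rollout(machine_id: str, rollout_percent: float) -> bool:
    """A machine receives the update once the rollout percentage passes its
    stable, hash-derived bucket in [0, 100). Because the bucket never changes,
    raising the percentage only ever adds machines to the rollout."""
    digest = hashlib.sha256(machine_id.encode()).digest()
    bucket = int.from_bytes(digest[:8], "big") % 10000 / 100.0
    return bucket < rollout_percent
```

Deploy at 1 percent, watch crash telemetry, then step to 10, 50, and 100 only while the fleet stays healthy; a bad update reaches a small ring instead of millions of machines at once.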

CrowdStrike says it will also give its customers more control over when Rapid Response Content updates are deployed so that updates that take down millions of systems aren’t deployed at (say) midnight when fewer people are around to notice or fix things. Customers will also be able to subscribe to release notes about these updates.

Recovery of affected systems is ongoing. Rebooting systems multiple times (as many as 15, according to Microsoft) can give them enough time to grab a new, non-broken update file before they crash, resolving the issue. Microsoft has also created tools that can boot systems via USB or a network so that the bad update file can be deleted, allowing systems to restart normally.

In addition to this preliminary incident report, CrowdStrike says it will release “the full Root Cause Analysis” once it has finished investigating the issue.


elon-musk-claims-he-is-training-“the-world’s-most-powerful-ai-by-every-metric”

Elon Musk claims he is training “the world’s most powerful AI by every metric”

the biggest, most powerful —

One snag: xAI might not have the electrical power contracts to do it.

Elon Musk, chief executive officer of Tesla Inc., during a fireside discussion on artificial intelligence risks with Rishi Sunak, UK prime minister, in London, UK, on Thursday, Nov. 2, 2023.


On Monday, Elon Musk announced the start of training for what he calls “the world’s most powerful AI training cluster” at xAI’s new supercomputer facility in Memphis, Tennessee. The billionaire entrepreneur and CEO of multiple tech companies took to X (formerly Twitter) to share that the so-called “Memphis Supercluster” began operations at approximately 4:20 am local time that day.

Musk’s xAI team, in collaboration with X and Nvidia, launched the supercomputer cluster featuring 100,000 liquid-cooled H100 GPUs on a single RDMA fabric. This setup, according to Musk, gives xAI “a significant advantage in training the world’s most powerful AI by every metric by December this year.”

Given issues with xAI’s Grok chatbot throughout the year, skeptics would be justified in questioning whether those claims will match reality, especially given Musk’s tendency for grandiose, off-the-cuff remarks on the social media platform he runs.

Power issues

According to a report by News Channel 3 WREG Memphis, the startup of the massive AI training facility marks a milestone for the city. WREG reports that xAI’s investment represents the largest capital investment by a new company in Memphis’s history. However, the project has raised questions among local residents and officials about its impact on the area’s power grid and infrastructure.

WREG reports that Doug McGowen, president of Memphis Light, Gas and Water (MLGW), previously stated that xAI could consume up to 150 megawatts of power at peak times. This substantial power requirement has prompted discussions with the Tennessee Valley Authority (TVA) regarding the project’s electricity demands and connection to the power system.

The TVA told the local news station, “TVA does not have a contract in place with xAI. We are working with xAI and our partners at MLGW on the details of the proposal and electricity demand needs.”

The local news outlet confirms that MLGW has stated that xAI moved into an existing building with already existing utility services, but the full extent of the company’s power usage and its potential effects on local utilities remain unclear. To address community concerns, WREG reports that MLGW plans to host public forums in the coming days to provide more information about the project and its implications for the city.

For now, Tom’s Hardware reports that Musk is side-stepping power issues by installing a fleet of 14 VoltaGrid natural gas generators that provide supplementary power to the Memphis computer cluster while his company works out an agreement with the local power utility.

As training at the Memphis Supercluster gets underway, all eyes are on xAI and Musk’s ambitious goal of developing the world’s most powerful AI by the end of the year (by which metric, we are uncertain), given the competitive landscape in AI at the moment between OpenAI/Microsoft, Amazon, Apple, Anthropic, and Google. If such an AI model emerges from xAI, we’ll be ready to write about it.

This article was updated on July 24, 2024 at 1:11 pm to mention Musk installing natural gas generators onsite in Memphis.
