
Ransomware attackers quickly weaponize PHP vulnerability with 9.8 severity rating

FILES LOCKED —

TellYouThePass group opportunistically infects servers that have yet to update.

Photograph depicts a security scanner extracting virus from a string of binary code.

Getty Images

Ransomware criminals have quickly weaponized an easy-to-exploit vulnerability in the PHP programming language that executes malicious code on web servers, security researchers said.

As of Thursday, Internet scans performed by security firm Censys had detected 1,000 servers infected by a ransomware strain known as TellYouThePass, down from 1,800 detected on Monday. The servers, primarily located in China, no longer display their usual content; instead, many list the site’s file directory, which shows all files have been given a .locked extension, indicating they have been encrypted. An accompanying ransom note demands roughly $6,500 in exchange for the decryption key.

The output of PHP servers infected by TellYouThePass ransomware.

Censys

The accompanying ransom note.

Censys

When opportunity knocks

The vulnerability, tracked as CVE-2024-4577 and carrying a severity rating of 9.8 out of 10, stems from errors in the way PHP converts Unicode characters into ASCII. A feature built into Windows known as Best Fit allows attackers to use a technique known as argument injection to pass user-supplied input as malicious commands to the main PHP application. Exploits allow attackers to bypass the protection added for CVE-2012-1823, a critical code execution vulnerability patched in PHP in 2012.

CVE-2024-4577 affects PHP only when it runs in a mode known as CGI, in which a web server parses HTTP requests and passes them to a PHP script for processing. Even when PHP isn’t set to CGI mode, however, the vulnerability may still be exploitable when PHP executables such as php.exe and php-cgi.exe are in directories that are accessible by the web server. This configuration is extremely rare, with the exception of the XAMPP platform, which uses it by default. An additional requirement appears to be that the Windows locale—used to personalize the OS to the local language of the user—must be set to either Chinese or Japanese.
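
Admins unsure which locale a Windows server uses can check from Python via a standard Win32 call (a minimal sketch, assuming Windows; the zh-/ja- prefix test covers only the locales confirmed vulnerable so far):

import ctypes

# Ask Windows for the system locale name (e.g., "en-US", "ja-JP", "zh-TW");
# 85 is LOCALE_NAME_MAX_LENGTH from the Windows headers.
buf = ctypes.create_unicode_buffer(85)
ctypes.windll.kernel32.GetSystemDefaultLocaleName(buf, len(buf))

if buf.value.startswith(("zh-", "ja-")):
    print(f"{buf.value}: matches the confirmed-vulnerable locale set")
else:
    print(f"{buf.value}: not in the confirmed-vulnerable set")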

The critical vulnerability was published on June 6, along with a security patch. Within 24 hours, threat actors were exploiting it to install TellYouThePass, researchers from security firm Imperva reported Monday. The exploits executed code that used the mshta.exe Windows binary to run an HTML application file hosted on an attacker-controlled server. Use of the binary indicated an approach known as living off the land, in which attackers use native OS functionalities and tools in an attempt to blend in with normal, non-malicious activity.

In a post published Friday, Censys researchers said that the exploitation by the TellYouThePass gang started on June 7 and mirrored past incidents in which the gang opportunistically mass scans the Internet for vulnerable systems following the disclosure of a high-profile vulnerability and indiscriminately targets any accessible server. The vast majority of the infected servers have IP addresses geolocated to China, Taiwan, Hong Kong, or Japan, likely stemming from the fact that Chinese and Japanese locales are the only ones confirmed to be vulnerable, Censys researchers said in an email.

Since then, the number of infected sites—detected by observing the public-facing HTTP response serving an open directory listing showing the server’s filesystem, along with the distinctive file-naming convention of the ransom note—has fluctuated from a low of 670 on June 8 to a high of 1,800 on Monday.
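
That detection logic can be approximated in a few lines (a minimal Python sketch, not Censys’s actual pipeline; the markers checked and the target URL are illustrative assumptions):

import requests

def looks_infected(url):
    # Infected hosts serve an open directory listing of ".locked" files
    # instead of their usual content.
    try:
        body = requests.get(url, timeout=10).text
    except requests.RequestException:
        return False
    return "Index of /" in body and ".locked" in body

# 203.0.113.7 is a documentation address standing in for a real host
print(looks_infected("http://203.0.113.7/"))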

Image tracking day-to-day compromises of PHP servers and their geolocation.

Censys

Censys researchers said in an email that they’re not entirely sure what’s causing the changing numbers.

“From our perspective, many of the compromised hosts appear to remain online, but the port running the PHP-CGI or XAMPP service stops responding—hence the drop in detected infections,” they wrote. “Another point to consider is that there are currently no observed ransom payments to the only Bitcoin address listed in the ransom notes. Based on these facts, our intuition is that this is likely the result of those services being decommissioned or going offline in some other manner.”

XAMPP used in production, really?

The researchers went on to say that roughly half of the compromises observed show clear signs of running XAMPP, but that estimate is likely an undercount since not all services explicitly show what software they use.

“Given that XAMPP is vulnerable by default, it’s reasonable to guess that most of the infected systems are running XAMPP,” the researchers said. This Censys query lists the infections that are explicitly affecting the platform. The researchers aren’t aware of any specific platforms other than XAMPP that have been compromised.

The discovery of compromised XAMPP servers took Will Dormann, a senior vulnerability analyst at security firm Analygence, by surprise because XAMPP maintainers explicitly say their software isn’t suitable for production systems.

“People choosing to run not-for-production software have to deal with the consequences of that decision,” he wrote in an online interview.

While XAMPP is the only platform confirmed to be vulnerable, people running PHP on any Windows system should install the update as soon as possible. The Imperva post linked above provides IP addresses, file names, and file hashes that administrators can use to determine whether they have been targeted in the attacks.


Retired engineer discovers 55-year-old bug in Lunar Lander computer game code

The world’s oldest feature —

A physics simulation flaw in the text-based 1969 computer game went unnoticed until now.

Illustration of the Apollo lunar lander Eagle over the Moon.

On Friday, a retired software engineer named Martin C. Martin announced that he recently discovered a bug in the original Lunar Lander computer game’s physics code while tinkering with the software. Created by a 17-year-old high school student named Jim Storer in 1969, this primordial game rendered the action only as text status updates on a teletype, but it set the stage for future versions to come.

The legendary game—which Storer developed on a PDP-8 minicomputer in a programming language called FOCAL just months after Neil Armstrong and Buzz Aldrin made their historic moonwalks—allows players to control a lunar module’s descent onto the Moon’s surface. Players must carefully manage their fuel usage to achieve a gentle landing, making critical decisions every ten seconds to burn the right amount of fuel.

In 2009, just short of the 40th anniversary of the first Moon landing, I set out to find the author of the original Lunar Lander game, which was then primarily known as a graphical game, thanks to the graphical version from 1974 and a 1979 Atari arcade title. When I discovered that Storer created the oldest known version as a teletype game, I interviewed him and wrote up a history of the game. Storer later released the source code to the original game, written in FOCAL, on his website.

A scan of printed teletype output from the original Lunar Lander game, provided by Jim Storer.

Jim Storer

Fast forward to 2024, when Martin—an AI expert, game developer, and former postdoctoral associate at MIT—stumbled upon a bug in Storer’s high school code while exploring what he believed was the optimal strategy for landing the module with maximum fuel efficiency—a technique known among Kerbal Space Program enthusiasts as the “suicide burn.” This method involves falling freely to build up speed and then igniting the engines at the last possible moment to slow down just enough to touch down safely. He also tried another approach—a gentler landing.

“I recently explored the optimal fuel burn schedule to land as gently as possible and with maximum remaining fuel,” Martin wrote on his blog. “Surprisingly, the theoretical best strategy didn’t work. The game falsely thinks the lander doesn’t touch down on the surface when in fact it does. Digging in, I was amazed by the sophisticated physics and numerical computing in the game. Eventually I found a bug: a missing ‘divide by two’ that had seemingly gone unnoticed for nearly 55 years.”

A matter of division

Diagram of launch escape system on top of the Apollo capsule.

NASA

Despite applying what should have been a textbook landing strategy, Martin found that the game inconsistently reported that the lander had missed the Moon’s surface entirely. Intrigued by the anomaly, Martin dug into the game’s source code and discovered that the landing algorithm was based on highly sophisticated physics for its time, including the Tsiolkovsky rocket equation and a Taylor series expansion.

As mentioned in the quote above, the root of the problem was a simple computational oversight—a missing division by two in the formula used to calculate the lander’s trajectory. This seemingly minor error had big consequences, causing the simulation to underestimate the time until the lander reached its lowest trajectory point and miscalculate the landing.
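
To see why that factor of two matters, compare the textbook constant-acceleration displacement formula with the same formula missing its “/2” (an illustrative Python sketch, not Storer’s FOCAL code—the actual bug sat inside a more elaborate Taylor-series routine):

def altitude_after(x0, v, a, t):
    # Correct kinematics: x = x0 + v*t + (a*t**2) / 2
    return x0 + v * t + 0.5 * a * t ** 2

def altitude_after_buggy(x0, v, a, t):
    # The same update with the "/2" dropped doubles the acceleration term
    return x0 + v * t + a * t ** 2

# Descending at 10 m/s while thrust decelerates the fall at 2 m/s^2:
print(altitude_after(0, -10, 2, 5))        # -25.0 m (the lander descended)
print(altitude_after_buggy(0, -10, 2, 5))  # 0.0 m (the sim thinks it never did)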

Despite the bug, Martin was impressed that Storer, then a high school senior, managed to incorporate advanced mathematical concepts into his game, a feat that remains impressive even by today’s standards. Martin reached out to Storer himself, and the Lunar Lander author told Martin that his father was a physicist who helped him derive the equations used in the game simulation.

While people played and enjoyed Storer’s game for years with the bug in place, it goes to show that realism isn’t always the most important part of a compelling interactive experience. And thankfully for Aldrin and Armstrong, the real Apollo lunar landing experience didn’t suffer from the same issue.

You can read more about Martin’s exciting debugging adventure over on his blog.


Report: Apple isn’t paying OpenAI for ChatGPT integration into OSes

in the pocket —

Apple thinks pushing OpenAI’s brand to hundreds of millions is worth more than money.

The OpenAI and Apple logos together.

OpenAI / Apple / Benj Edwards

On Monday, Apple announced it would be integrating OpenAI’s ChatGPT AI assistant into upcoming versions of its iPhone, iPad, and Mac operating systems. The move paves the way for future third-party AI model integrations, but given Google’s multi-billion-dollar deal with Apple for preferential web search, the OpenAI announcement inspired speculation about who is paying whom. According to a Bloomberg report published Wednesday, Apple considers ChatGPT’s placement on its devices as compensation enough.

“Apple isn’t paying OpenAI as part of the partnership,” writes Bloomberg reporter Mark Gurman, citing people familiar with the matter who wish to remain anonymous. “Instead, Apple believes pushing OpenAI’s brand and technology to hundreds of millions of its devices is of equal or greater value than monetary payments.”

The Bloomberg report states that neither company expects the agreement to generate meaningful revenue in the short term, and in fact, the partnership could burn extra money for OpenAI, because it pays Microsoft to host ChatGPT’s capabilities on its Azure cloud. However, OpenAI could benefit by converting free users to paid subscriptions, and Apple potentially benefits by providing easy, built-in access to ChatGPT during a time when its own in-house LLMs are still catching up.

And there’s another angle at play. Currently, OpenAI offers subscriptions (ChatGPT Plus, Enterprise, Team) that unlock additional features. If users subscribe to OpenAI through the ChatGPT app on an Apple device, the process will reportedly use Apple’s payment platform, which may give Apple a significant cut of the revenue. According to the report, Apple hopes to negotiate additional revenue-sharing deals with AI vendors in the future.

Why OpenAI

The rise of ChatGPT in the public eye over the past 18 months has made OpenAI a power player in the tech industry, allowing it to strike deals with publishers for AI training content—and ensure continued support from Microsoft in the form of investments that trade vital funding and compute for access to OpenAI’s large language model (LLM) technology like GPT-4.

Still, Apple’s choice of ChatGPT as its first external AI integration has led to widespread misunderstanding, especially since Apple buried the lede about its own in-house LLM technology that powers its new “Apple Intelligence” platform.

On Apple’s part, CEO Tim Cook told The Washington Post that it chose OpenAI as its first third-party AI partner because he thinks the company controls the leading LLM technology at the moment: “I think they’re a pioneer in the area, and today they have the best model,” he said. “We’re integrating with other people as well. But they’re first, and I think today it’s because they’re best.”

Apple’s choice also brings risk. OpenAI’s record isn’t spotless; it has racked up a string of public controversies over the past month, including an accusation from actress Scarlett Johansson that the company intentionally imitated her voice, resignations from a key scientist and safety personnel, the revelation of a restrictive NDA that prevented ex-employees from publicly criticizing the company, and accusations of “psychological abuse” against OpenAI CEO Sam Altman made by a former member of the OpenAI board.

Meanwhile, critics concerned about the privacy implications of gathering data for training AI models—including OpenAI foe Elon Musk, who took to X on Monday to spread misconceptions about how the ChatGPT integration might work—also worried that the Apple-OpenAI deal might expose personal data to the AI company, although both companies strongly deny that will be the case.

Looking ahead, Apple’s deal with OpenAI is not exclusive, and the company is already in talks to offer Google’s Gemini chatbot as an additional option later this year. Apple has also reportedly held talks with Anthropic (maker of Claude 3) as a potential chatbot partner, signaling its intention to provide users with a range of AI services, much like how the company offers various search engine options in Safari.


Turkish student creates custom AI device for cheating university exam, gets arrested

spy hard —

Elaborate scheme involved hidden camera and an earpiece to hear answers.

A photo illustration of what a shirt-button camera could look like.

Aurich Lawson | Getty Images

On Saturday, Turkish police arrested and detained a prospective university student accused of developing an elaborate scheme to use AI and hidden devices to help him cheat on an important entrance exam, Reuters and The Daily Mail report.

The unnamed student is reportedly jailed pending trial after the incident, which took place in the southwestern province of Isparta, where the student was caught behaving suspiciously during the TYT. The TYT is a nationally held university aptitude exam that determines a person’s eligibility to attend a university in Turkey—and cheating on the high-stakes exam is a serious offense.

According to police reports, the student used a camera disguised as a shirt button, connected to AI software via a “router” (possibly a mistranslation of a cellular modem) hidden in the sole of his shoe. The system worked by scanning the exam questions using the button camera, which then relayed the information to an unnamed AI model. The software generated the correct answers and recited them to the student through an earpiece.

A video released by the Isparta police demonstrated how the cheating system functioned. In the video, a police officer scans a question, and the AI software provides the correct answer through the earpiece.

In addition to the student, Turkish police detained another individual for assisting the student during the exam. The police discovered a mobile phone that could allegedly relay spoken sounds to the other person, allowing for two-way communication.

A history of calling on computers for help

The recent arrest recalls other attempts to cheat using wireless communications and computers, such as the famous case of the Eudaemons in the late 1970s. The Eudaemons were a group of physics graduate students from the University of California, Santa Cruz, who developed a wearable computer device designed to predict the outcome of roulette spins in casinos.

The Eudaemons’ device consisted of a shoe with a computer built into it, connected to a timing device operated by the wearer’s big toe. The wearer would click the timer when the ball and the spinning roulette wheel were in a specific position, and the computer would calculate the most likely section of the wheel where the ball would land. This prediction would be transmitted to an earpiece worn by another team member, who would quickly place bets on the predicted section.
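
A toy version of the underlying idea might look like this (a drastically simplified Python sketch with a constantly decelerating ball and a stationary wheel—nothing like the Eudaemons’ real model, whose parameters were tuned to actual casino wheels):

import math

def predict_sector(omega, alpha=0.3, omega_drop=3.0, sectors=38):
    # Standard kinematics: the ball sweeps (omega^2 - omega_drop^2) / (2*alpha)
    # radians before slowing to the speed at which it falls off the rim.
    swept_revs = (omega ** 2 - omega_drop ** 2) / (2 * alpha) / (2 * math.pi)
    return int((swept_revs % 1.0) * sectors)

# Two toe-clicks one ball revolution apart give the current angular velocity
t0, t1 = 0.00, 0.52
omega = 2 * math.pi / (t1 - t0)
print(predict_sector(omega))  # predicted drop sector, 0-37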

While the Eudaemons’ plan didn’t involve a university exam, it shows that the urge to call upon remote computational powers greater than oneself is apparently timeless.


Ridiculed Stable Diffusion 3 release excels at AI-generated body horror

unstable diffusion —

Users react to mangled SD3 generations and ask, “Is this release supposed to be a joke?”

An AI-generated image created using Stable Diffusion 3 of a girl lying in the grass.

On Wednesday, Stability AI released weights for Stable Diffusion 3 Medium, an AI image-synthesis model that turns text prompts into AI-generated images. Its arrival has been ridiculed online, however, because it generates images of humans in a way that seems like a step backward from other state-of-the-art image-synthesis models like Midjourney or DALL-E 3. As a result, it can churn out wild anatomically incorrect visual abominations with ease.

A thread on Reddit, titled, “Is this release supposed to be a joke? [SD3-2B],” details the spectacular failures of SD3 Medium at rendering humans, especially human limbs like hands and feet. Another thread, titled, “Why is SD3 so bad at generating girls lying on the grass?” shows similar issues, but for entire human bodies.

Hands have traditionally been a challenge for AI image generators due to a lack of good examples in early training data sets, but more recently, several image-synthesis models seemed to have overcome the issue. In that sense, SD3 appears to be a huge step backward for the image-synthesis enthusiasts who gather on Reddit—especially compared to recent Stability releases like SD XL Turbo in November.

“It wasn’t too long ago that StableDiffusion was competing with Midjourney, now it just looks like a joke in comparison. At least our datasets are safe and ethical!” wrote one Reddit user.

  • An AI-generated image created using Stable Diffusion 3 Medium.

  • An AI-generated image created using Stable Diffusion 3 of a girl lying in the grass.

  • An AI-generated image created using Stable Diffusion 3 that shows mangled hands.

  • An AI-generated SD3 Medium image a Reddit user made with the prompt “woman wearing a dress on the beach.”

  • An AI-generated SD3 Medium image a Reddit user made with the prompt “photograph of a person napping in a living room.”

AI image fans are so far blaming Stable Diffusion 3’s anatomy fails on Stability’s insistence on filtering out adult content (often called “NSFW” content) from the SD3 training data that teaches the model how to generate images. “Believe it or not, heavily censoring a model also gets rid of human anatomy, so… that’s what happened,” wrote one Reddit user in the thread.

Basically, any time a user prompt homes in on a concept that isn’t represented well in the AI model’s training dataset, the image-synthesis model will confabulate its best interpretation of what the user is asking for. And sometimes that can be completely terrifying.

The release of Stable Diffusion 2.0 in 2022 suffered from similar problems in depicting humans well, and AI researchers soon discovered that censoring adult content that contains nudity can severely hamper an AI model’s ability to generate accurate human anatomy. At the time, Stability AI reversed course with SD 2.1 and SD XL, regaining some abilities lost by strongly filtering NSFW content.

Another issue that can occur during model pre-training is that the NSFW filter researchers use to remove adult images from the dataset is sometimes too picky, accidentally removing images that might not be offensive and depriving the model of depictions of humans in certain situations. “[SD3] works fine as long as there are no humans in the picture, I think their improved nsfw filter for filtering training data decided anything humanoid is nsfw,” wrote one Redditor on the topic.

Using a free online demo of SD3 on Hugging Face, we ran prompts and saw similar results to those being reported by others. For example, the prompt “a man showing his hands” returned an image of a man holding up two giant-sized backward hands, although each hand at least had five fingers.

  • An SD3 Medium example we generated with the prompt “A woman lying on the beach.”

  • An SD3 Medium example we generated with the prompt “A man showing his hands.”

  • An SD3 Medium example we generated with the prompt “A woman showing her hands.”

  • An SD3 Medium example we generated with the prompt “a muscular barbarian with weapons beside a CRT television set, cinematic, 8K, studio lighting.”

  • An SD3 Medium example we generated with the prompt “A cat in a car holding a can of beer.”

Stability first announced Stable Diffusion 3 in February, and the company plans to make it available in a variety of model sizes. Today’s release is for the “Medium” version, which is a 2 billion-parameter model. In addition to being hosted on Hugging Face, the weights are available for experimentation through the company’s Stability Platform, and they are free to download and use under a non-commercial license only.

Soon after its February announcement, delays in releasing the SD3 model weights inspired rumors that the release was being held back due to technical issues or mismanagement. Stability AI as a company fell into a tailspin recently with the resignation of its founder and CEO, Emad Mostaque, in March and then a series of layoffs. Just prior to that, three key engineers—Robin Rombach, Andreas Blattmann, and Dominik Lorenz—left the company. And its troubles go back even further, with news of the company’s dire financial position lingering since 2023.

To some Stable Diffusion fans, the failures with Stable Diffusion 3 Medium are a visual manifestation of the company’s mismanagement—and an obvious sign of things falling apart. Although the company has not filed for bankruptcy, some users made dark jokes about the possibility after seeing SD3 Medium:

“I guess now they can go bankrupt in a safe and ethically [sic] way, after all.”


One of the major sellers of detailed driver behavioral data is shutting down

Products driving products —

Selling “hard braking event” data seems less lucrative after public outcry.

Interior of car with different aspects of it highlighted, as if by a camera or AI

Getty Images

One of the major data brokers engaged in the deeply alienating practice of selling detailed driver behavior data to insurers has shut down that business.

Verisk, which had collected data from cars made by General Motors, Honda, and Hyundai, has stopped receiving that data, according to The Record, a news site run by security firm Recorded Future. According to a statement provided to Privacy4Cars, and reported by The Record, Verisk will no longer provide a “Driving Behavior Data History Report” to insurers.

Skeptics have long assumed that car companies had at least some plan to monetize telematics—the rich data regularly sent from cars back to their manufacturers. But a concrete example of this was reported by The New York Times’ Kashmir Hill, in which drivers of GM vehicles were finding insurance more expensive, or impossible to acquire, because of the kinds of reports sent along the chain from GM to data brokers to insurers. Those who requested their collected data from the brokers found details of every trip they took: times, distances, and every “hard acceleration” or “hard braking event,” among other data points.
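
The brokers’ actual schemas aren’t public, but the reports described suggest records along these lines (the field names here are a hypothetical illustration):

from dataclasses import dataclass

@dataclass
class TripRecord:
    # Hypothetical fields mirroring the data points drivers found in reports
    start_time: str            # ISO-8601 timestamp
    end_time: str
    distance_km: float
    hard_braking_events: int
    hard_acceleration_events: int

trip = TripRecord("2024-03-01T08:02:00", "2024-03-01T08:31:00", 18.4, 2, 1)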

While the data was purportedly coming from an opt-in “Smart Driver” program in GM cars, many customers reported having no memory of opting in to the program or believing that dealership salespeople activated it themselves or rushed them through the process. The Mozilla Foundation considers cars to be “the worst product category we have ever reviewed for privacy,” given the overly broad privacy policies owners must agree to, extensive data gathering, and general lack of safeguards or privacy guarantees available for US car buyers.

GM quickly announced a halt to data sharing in late March, days after the Times’ reporting sparked considerable outcry. GM had been sending data to both Verisk and LexisNexis Risk Solutions, the latter of which is not signaling any kind of retreat from the telematics pipeline. LexisNexis’ telematics page shows logos for carmakers Kia, Mitsubishi, and Subaru.

Ars contacted LexisNexis for comment and will update this post with new information.

Disclosure of GM’s stealthily authorized data sharing has sparked numerous lawsuits, investigations from California and Texas agencies, and interest from Congress and the Federal Trade Commission.


China state hackers infected 20,000 Fortinet VPNs, Dutch spy service says

DISCLOSURE FUBAR —

Critical code-execution flaw was under exploitation 2 months before company disclosed it.


Hackers working for the Chinese government gained access to more than 20,000 VPN appliances sold by Fortinet using a critical vulnerability that the company failed to disclose for two weeks after fixing it, Netherlands government officials said.

The vulnerability, tracked as CVE-2022-42475, is a heap-based buffer overflow that allows hackers to remotely execute malicious code. It carries a severity rating of 9.8 out of 10. A maker of network security software, Fortinet silently fixed the vulnerability on November 28, 2022, but failed to mention the threat until December 12 of that year, when the company said it became aware of an “instance where this vulnerability was exploited in the wild.” On January 11, 2023—more than six weeks after the vulnerability was fixed—Fortinet warned a threat actor was exploiting it to infect government and government-related organizations with advanced custom-made malware.

Enter CoatHanger

The Netherlands officials first reported in February that Chinese state hackers had exploited CVE-2022-42475 to install an advanced and stealthy backdoor tracked as CoatHanger on FortiGate appliances inside the Dutch Ministry of Defense. Once installed, the never-before-seen malware, specifically designed for the underlying FortiOS operating system, was able to permanently reside on devices even when rebooted or receiving a firmware update. CoatHanger could also escape traditional detection measures, the officials warned. The damage resulting from the breach was limited, however, because infections were contained inside a segment reserved for non-classified uses.

On Monday, officials with the Military Intelligence and Security Service (MIVD) and the General Intelligence and Security Service in the Netherlands said that to date, Chinese state hackers have used the critical vulnerability to infect more than 20,000 FortiGate VPN appliances sold by Fortinet. Targets include dozens of Western government agencies, international organizations, and companies within the defense industry.

“Since then, the MIVD has conducted further investigation and has shown that the Chinese cyber espionage campaign appears to be much more extensive than previously known,” Netherlands officials with the National Cyber Security Center wrote. “The NCSC therefore calls for extra attention to this campaign and the abuse of vulnerabilities in edge devices.”

Monday’s report said that exploitation of the vulnerability started two months before Fortinet first disclosed it and that 14,000 servers were backdoored during this zero-day period. The officials warned that the Chinese threat group likely still has access to many victims because CoatHanger is so hard to detect and remove.

Netherlands government officials wrote in Monday’s report:

Since the publication in February, the MIVD has continued to investigate the broader Chinese cyber espionage campaign. This revealed that the state actor gained access to at least 20,000 FortiGate systems worldwide within a few months in both 2022 and 2023 through the vulnerability with the identifier CVE-2022-42475. Furthermore, research shows that the state actor behind this campaign was already aware of this vulnerability in FortiGate systems at least two months before Fortinet announced the vulnerability. During this so-called ‘zero-day’ period, the actor alone infected 14,000 devices. Targets include dozens of (Western) governments, international organizations and a large number of companies within the defense industry.

The state actor installed malware at relevant targets at a later date. This gave the state actor permanent access to the systems. Even if a victim installs security updates from FortiGate, the state actor continues to have this access.

It is not known how many victims actually have malware installed. The Dutch intelligence services and the NCSC consider it likely that the state actor could potentially expand its access to hundreds of victims worldwide and carry out additional actions such as stealing data.

Even with the technical report on the COATHANGER malware, infections from the actor are difficult to identify and remove. The NCSC and the Dutch intelligence services therefore state that it is likely that the state actor still has access to systems of a significant number of victims.

Fortinet’s failure to disclose the vulnerability in a timely manner is particularly acute given its severity. Disclosures are crucial because they help users prioritize the installation of patches. When a new version fixes minor bugs, many organizations often wait to install it. When it fixes a vulnerability with a 9.8 severity rating, they’re much more likely to expedite the update process. Given that the vulnerability was being exploited even before Fortinet fixed it, the disclosure likely wouldn’t have prevented all of the infections, but it stands to reason it could have stopped some.

Fortinet officials have never explained why they didn’t disclose the critical vulnerability when it was fixed. They have also declined to disclose what the company policy is for the disclosure of security vulnerabilities. Company representatives didn’t immediately respond to an email seeking comment for this post.


Apple and OpenAI currently have the most misunderstood partnership in tech

He isn’t using an iPhone, but some people talk to Siri like this.

On Monday, Apple premiered “Apple Intelligence” during a wide-ranging presentation at its annual Worldwide Developers Conference in Cupertino, California. However, the heart of its new tech, an array of Apple-developed AI models, was overshadowed by the announcement of ChatGPT integration into its device operating systems.

Since rumors of the partnership first emerged, we’ve seen confusion on social media about why Apple didn’t develop a cutting-edge GPT-4-like chatbot internally. Despite Apple’s year-long development of its own large language models (LLMs), many perceived the integration of ChatGPT (and opening the door for others, like Google Gemini) as a sign of Apple’s lack of innovation.

“This is really strange. Surely Apple could train a very good competing LLM if they wanted? They’ve had a year,” wrote AI developer Benjamin De Kraker on X. Elon Musk has also been grumbling about the OpenAI deal—and spreading misinformation about it—saying things like, “It’s patently absurd that Apple isn’t smart enough to make their own AI, yet is somehow capable of ensuring that OpenAI will protect your security & privacy!”

While Apple has developed many technologies internally, it has also never been shy about integrating outside tech when necessary in various ways, from acquisitions to built-in clients—in fact, Siri was initially developed by an outside company. But by making a deal with a company like OpenAI, which has been the source of a string of tech controversies recently, it’s understandable that some people don’t understand why Apple made the call—and what it might entail for the privacy of their on-device data.

“Our customers want something with world knowledge some of the time”

While Apple Intelligence largely utilizes its own Apple-developed LLMs, Apple also realized that there may be times when some users want to use what the company considers the current “best” existing LLM—OpenAI’s GPT-4 family. In an interview with The Washington Post, Apple CEO Tim Cook explained the decision to integrate OpenAI first:

“I think they’re a pioneer in the area, and today they have the best model,” he said. “And I think our customers want something with world knowledge some of the time. So we considered everything and everyone. And obviously we’re not stuck on one person forever or something. We’re integrating with other people as well. But they’re first, and I think today it’s because they’re best.”

The proposed benefit of Apple integrating ChatGPT into various experiences within iOS, iPadOS, and macOS is that it allows AI users to access ChatGPT’s capabilities without the need to switch between different apps—either through the Siri interface or through Apple’s integrated “Writing Tools.” Users will also have the option to connect their paid ChatGPT account to access extra features.

As an answer to privacy concerns, Apple says that before any data is sent to ChatGPT, the OS asks for the user’s permission, and the entire ChatGPT experience is optional. According to Apple, requests are not stored by OpenAI, and users’ IP addresses are hidden. Apparently, communication with OpenAI servers happens through API calls similar to using the ChatGPT app on iOS, and there is reportedly no deeper OS integration that might expose user data to OpenAI without the user’s permission.

We can only take Apple’s word for it at the moment, of course, and solid details about Apple’s AI privacy efforts will emerge once security experts get their hands on the new features later this year.

Apple’s history of tech integration

So you’ve seen why Apple chose OpenAI. But why look to outside companies for tech? In some ways, Apple building an external LLM client into its operating systems isn’t too different from what it has previously done with streaming video (the YouTube app on the original iPhone), Internet search (Google search integration), and social media (integrated Twitter and Facebook sharing).

The press has positioned Apple’s recent AI moves as Apple “catching up” with competitors like Google and Microsoft in terms of chatbots and generative AI. But playing it slow and cool has long been part of Apple’s M.O.—not necessarily introducing the bleeding edge of technology but improving existing tech through refinement and giving it a better user interface.


Nasty bug with very simple exploit hits PHP just in time for the weekend

WORST FIT EVER —

With PoC code available and active Internet scans, speed is of the essence.


A critical vulnerability in the PHP programming language can be trivially exploited to execute malicious code on Windows devices, security researchers warned as they urged those affected to take action before the weekend starts.

Within 24 hours of the vulnerability and accompanying patch being published, researchers from the nonprofit security organization Shadowserver reported Internet scans designed to identify servers that are susceptible to attacks. That—combined with (1) the ease of exploitation, (2) the availability of proof-of-concept attack code, (3) the severity of remotely executing code on vulnerable machines, and (4) the widely used XAMPP platform being vulnerable by default—has prompted security practitioners to urge admins to check whether their PHP servers are affected before starting the weekend.

When “Best Fit” isn’t

“A nasty bug with a very simple exploit—perfect for a Friday afternoon,” researchers with security firm WatchTowr wrote.

CVE-2024-4577, as the vulnerability is tracked, stems from errors in the way PHP converts Unicode characters into ASCII. A feature built into Windows known as Best Fit allows attackers to use a technique known as argument injection to pass user-supplied input into commands executed by an application, in this case, PHP. Exploits allow attackers to bypass the protection added for CVE-2012-1823, a critical code execution vulnerability patched in PHP in 2012.

“While implementing PHP, the team did not notice the Best-Fit feature of encoding conversion within the Windows operating system,” researchers with Devcore, the security firm that discovered CVE-2024-4577, wrote. “This oversight allows unauthenticated attackers to bypass the previous protection of CVE-2012-1823 by specific character sequences. Arbitrary code can be executed on remote PHP servers through the argument injection attack.”

CVE-2024-4577 affects PHP only when it runs in a mode known as CGI, in which a web server parses HTTP requests and passes them to a PHP script for processing. Even when PHP isn’t set to CGI mode, however, the vulnerability may still be exploitable when PHP executables such as php.exe and php-cgi.exe are in directories that are accessible by the web server. This configuration is set by default in XAMPP for Windows, making the platform vulnerable unless it has been modified.

One example, WatchTowr noted, occurs when queries are parsed and sent through a command line. The result: a harmless request such as http://host/cgi.php?foo=bar could be converted into php.exe cgi.php foo=bar, a command that would be executed by the main PHP engine.
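
In rough terms, that hand-off looks like this (a simplified Python sketch of the behavior WatchTowr describes, not Apache’s or PHP’s actual code):

def cgi_style_invoke(script, query_string):
    # Query-string parts separated by "+" become command-line arguments
    # handed to the PHP binary.
    return ["php.exe", script] + query_string.split("+")

print(cgi_style_invoke("cgi.php", "foo=bar"))
# ['php.exe', 'cgi.php', 'foo=bar']  -- i.e., php.exe cgi.php foo=bar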

No escape

Like many other languages, PHP converts certain types of user input to prevent it from being interpreted as a command for execution. This is a process known as escaping. For example, in HTML, the < and > characters are often escaped by converting them into their encoded equivalents &lt; and &gt; to prevent them from being interpreted as HTML tags by a browser.

The WatchTowr researchers demonstrate how Best Fit fails to escape characters such as a soft hyphen (with unicode value 0xAD) and instead converts it to an unescaped regular hyphen (0x2D), a character that’s instrumental in many code syntaxes.
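
A toy simulation of that failure mode (illustrative Python—PHP’s real code path is C inside the CGI handler, and Windows’ Best Fit tables hold many more mappings than the one shown):

BEST_FIT = {"\u00ad": "-"}  # soft hyphen (0xAD) -> regular hyphen (0x2D)

def naive_escape(arg):
    # The handler guards against arguments starting with a real hyphen...
    if arg.startswith("-"):
        raise ValueError("argument injection blocked")
    return arg

def best_fit_convert(arg):
    # ...but a later Best Fit conversion turns the soft hyphen into one.
    return "".join(BEST_FIT.get(ch, ch) for ch in arg)

payload = "\u00add allow_url_include=1"         # passes the escape check
print(best_fit_convert(naive_escape(payload)))  # "-d allow_url_include=1"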

The researchers went on to explain:

It turns out that, as part of unicode processing, PHP will apply what’s known as a ‘best fit’ mapping, and helpfully assume that, when the user entered a soft hyphen, they actually intended to type a real hyphen, and interpret it as such. Herein lies our vulnerability—if we supply a CGI handler with a soft hyphen (0xAD), the CGI handler won’t feel the need to escape it, and will pass it to PHP. PHP, however, will interpret it as if it were a real hyphen, which allows an attacker to sneak extra command line arguments, which begin with hyphens, into the PHP process.

This is remarkably similar to an older PHP bug (when in CGI mode), CVE-2012-1823, and so we can borrow some exploitation techniques developed for this older bug and adapt them to work with our new bug. A helpful writeup advises that, to translate our injection into RCE, we should aim to inject the following arguments:

-d allow_url_include=1 -d auto_prepend_file=php://input  

This will accept input from our HTTP request body, and process it using PHP. Straightforward enough – let’s try a version of this equipped with our 0xAD ‘soft hyphen’ instead of the usual hyphen. Maybe it’s enough to slip through the escaping?

POST /test.php?%ADd+allow_url_include%3d1+%ADd+auto_prepend_file%3dphp://input HTTP/1.1
Host: host
User-Agent: curl/8.3.0
Accept: */*
Content-Length: 23
Content-Type: application/x-www-form-urlencoded
Connection: keep-alive

Oh joy—we’re rewarded with a phpinfo page, showing us we have indeed achieved RCE.

The vulnerability was discovered by Devcore researcher Orange Tsai, who said: “The bug is incredibly simple, but that’s also what makes it interesting.”

The Devcore writeup said that the researchers have confirmed that XAMPP is vulnerable when Windows is configured to use the locales for Traditional Chinese, Simplified Chinese, or Japanese. In Windows, a locale is a set of user preference information related to the user’s language, environment, and/or cultural conventions. The researchers haven’t tested other locales and have urged people using them to perform a comprehensive asset assessment to test their usage scenarios.

CVE-2024-4577 affects all versions of PHP running on a Windows device. That includes version branches 8.3 prior to 8.3.8, 8.2 prior to 8.2.20, and 8.1 prior to 8.1.29.
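
For quick triage, those patched releases translate into a version check like this (a minimal sketch; feed it the version number reported by php -v):

PATCHED = {(8, 3): (8, 3, 8), (8, 2): (8, 2, 20), (8, 1): (8, 1, 29)}

def is_patched(version):
    parts = tuple(int(p) for p in version.split("."))
    fixed = PATCHED.get(parts[:2])
    # Branches not listed here (8.0 and older) never received a patch
    return fixed is not None and parts >= fixed

print(is_patched("8.2.19"))  # False -- update needed
print(is_patched("8.3.8"))   # True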

The 8.0, 7, and 5 version branches are also vulnerable, but since they’re no longer supported, admins will have to follow mitigation advice since patches aren’t available. One option is to apply what are known as rewrite rules such as:

RewriteEngine On
RewriteCond %{QUERY_STRING} ^%ad [NC]
RewriteRule .? - [F,L]

The researchers caution these rules have been tested only for the three locales they have confirmed as vulnerable.

XAMPP for Windows had yet to release a fix at the time this post went live. Admins who don’t need PHP CGI can turn it off by editing the following Apache HTTP Server configuration file:

C:/xampp/apache/conf/extra/httpd-xampp.conf

Locate the corresponding line:

ScriptAlias /php-cgi/ "C:/xampp/php/"  

And comment it out:

# ScriptAlias /php-cgi/ "C:/xampp/php/"  

Additional analysis of the vulnerability is available here.


VMware customers may stay, but Broadcom could face backlash “for years to come”

“The emotional shock has started to metabolize” —

300 director-level IT workers making VMware decisions were questioned.


After acquiring VMware, Broadcom swiftly enacted widespread changes that resulted in strong public backlash. A new survey of 300 director-level IT workers at North American companies that are VMware customers provides insight into the customer reaction to Broadcom’s overhaul.

The survey released Thursday doesn’t provide feedback from every VMware customer, but it’s the first time we’ve seen responses from IT decision-makers working for companies paying for VMware products. It echoes concerns expressed at the announcement of some of Broadcom’s more controversial changes to VMware, like the end of perpetual licenses and growing costs.

CloudBolt Software commissioned Wakefield Research, a market research agency, to run the study from May 9 through May 23. The “CloudBolt Industry Insights Reality Report: VMware Acquisition Aftermath” includes responses from workers at 150 companies with fewer than 1,000 workers and 150 companies with more than 1,000 workers. Survey respondents were invited via email and took the survey online, with the report authors writing that results are subject to sampling variation of ±5.7 percentage points at a 95 percent confidence level.

Notably, Amazon Web Services (AWS) commissioned the report in partnership with CloudBolt. AWS’s partnership with VMware hit a road bump last month when Broadcom stopped allowing AWS to resell the VMware Cloud on AWS offering—a move that AWS said “disappointed” it. Kyle Campos, CloudBolt CTPO, told Ars Technica that the full extent of AWS’s involvement in the report was helping underwrite the cost of the research. But you can see why AWS would have an interest in customer dissatisfaction with VMware.

Widespread worry

Every person surveyed said that they expect VMware prices to rise under Broadcom. In a March “User Group Town Hall,” attendees complained about “price rises of 500 and 600 percent,” according to The Register. We heard in February from ServeTheHome that “smaller” cloud service providers were claiming to see costs grow tenfold. In this week’s survey, 73 percent of respondents said they expect VMware prices to more than double. Twelve percent of respondents expect a price hike of 301 to 500 percent. Only 1 percent anticipate price hikes of 501 to 1,000 percent.

“At this juncture post-acquisition, most larger enterprises seem to have a clear understanding of how their next procurement cycle with Broadcom will be impacted from a pricing and packaging standpoint,” the report noted.

Further, 95 percent of survey respondents said they view Broadcom buying VMware as disruptive to their IT strategy, with 46 percent considering it extremely or very disruptive.

Widespread concerns about cost and IT strategy help explain why 99 percent of the 300 respondents said they are concerned about Broadcom owning VMware, with 46 percent being “very concerned” and 30 percent “extremely concerned.”

Broadcom didn’t respond to Ars’ request for comment.

Not jumping ship yet

Despite widespread anxiety over Broadcom’s VMware, most of the respondents said they will likely stay with VMware either partially (43 percent of respondents) or fully (40 percent). A smaller percentage of respondents said they would move more workloads to the public cloud (38 percent) or a different hypervisor (34 percent) or move entirely to the public cloud (33 percent). This is with 69 percent of respondents having at least one contract expiring with VMware within the next 12 months.

Many companies have already migrated easy-to-move workloads to the public cloud, CloudBolt’s Campos said in a statement. For many firms surveyed, what’s left in the data center “is a mixture of workloads requiring significant modernization or compliance bound to the data center,” including infrastructure components that have been in place for decades. Campos noted that many mission-critical workloads remain in the data center, and moving them is “daunting with unclear ROI.”

“The emotional shock has started to metabolize inside of the Broadcom customer base, but it’s metabolized in the form of strong commitment to mitigating the negative impacts of the Broadcom VMware acquisition,” Campos told Ars Technica.

Resistance to ditching VMware reflects how “embedded” VMware is within customer infrastructures, the CloudBolt exec told Ars, adding:

In many cases, the teams responsible for purchasing, implementing, and operating VMware have never even considered an alternative prior to this acquisition; it’s the only operating reality they know and they are used to buying out of this problem.

Top reasons cited for considering abandoning VMware partially or totally were uncertainty about Broadcom’s plans, concerns about support quality under Broadcom, and changes to relationships with channel partners (each named by 36 percent of respondents).

Following closely was the shift to subscription licensing (34 percent), expected price bumps (33 percent), and personal negative experiences with Broadcom (33 percent). Broadcom’s history with big buys like Symantec and CA Technologies also has 32 percent of people surveyed considering leaving VMware.

Although many firms seem to be weighing their options before potentially leaving VMware, Campos warned that Broadcom could see backlash continue “for months and even years to come,” considering the areas of concern cited in the survey and how all VMware offerings are near-equal candidates for eventual nixing.


DuckDuckGo offers “anonymous” access to AI chatbots through new service

anonymous confabulations —

DDG offers LLMs from OpenAI, Anthropic, Meta, and Mistral for factually-iffy conversations.

DuckDuckGo's AI Chat promotional image.

DuckDuckGo

On Thursday, DuckDuckGo unveiled a new “AI Chat” service that allows users to converse with four mid-range large language models (LLMs) from OpenAI, Anthropic, Meta, and Mistral in an interface similar to ChatGPT while attempting to preserve privacy and anonymity. While the AI models involved can output inaccurate information readily, the site allows users to test different mid-range LLMs without having to install anything or sign up for an account.

DuckDuckGo’s AI Chat currently features access to OpenAI’s GPT-3.5 Turbo, Anthropic’s Claude 3 Haiku, and two open source models, Meta’s Llama 3 and Mistral’s Mixtral 8x7B. The service is currently free to use within daily limits. Users can access AI Chat through the DuckDuckGo search engine, direct links to the site, or by using “!ai” or “!chat” shortcuts in the search field. AI Chat can also be disabled in the site’s settings for users with accounts.

According to DuckDuckGo, chats on the service are anonymized, with metadata and IP address removed to prevent tracing back to individuals. The company states that chats are not used for AI model training, citing its privacy policy and terms of use.

“We have agreements in place with all model providers to ensure that any saved chats are completely deleted by the providers within 30 days,” says DuckDuckGo, “and that none of the chats made on our platform can be used to train or improve the models.”

An example of DuckDuckGo AI Chat with GPT-3.5 answering a silly question in an inaccurate way.

Benj Edwards

However, the privacy experience is not bulletproof because, in the case of GPT-3.5 and Claude Haiku, DuckDuckGo is required to send a user’s inputs to remote servers for processing over the Internet. Given certain inputs (i.e., “Hey, GPT, my name is Bob, and I live on Main Street, and I just murdered Bill”), a user could still potentially be identified if such an extreme need arose.

While the service appears to work well for us, there’s a question about its utility. For example, while GPT-3.5 initially wowed people when it launched with ChatGPT in 2022, it also confabulated a lot—and it still does. GPT-4 was the first major LLM to get confabulations under control to a point where the bot became more reasonably useful for some tasks (though this itself is a controversial point), but that more capable model isn’t present in DuckDuckGo’s AI Chat. Also missing are similar GPT-4-level models like Claude Opus or Google’s Gemini Ultra, likely because they are far more expensive to run. DuckDuckGo says it may roll out paid plans in the future, and those may include higher daily usage limits or access to “more advanced models.”

It’s true that the other three models generally (and subjectively) surpass GPT-3.5 in capability—particularly for coding—and hallucinate less, but they can still make things up, too. With DuckDuckGo AI Chat as it stands, the company is left with a chatbot novelty with a decent interface and the promise that your conversations with it will remain private. But what use are fully private AI conversations if they are full of errors?

Mixtral 8x7B on DuckDuckGo AI Chat when asked about the author. Everything in red boxes is sadly incorrect, but it provides an interesting fantasy scenario. It’s a good example of an LLM plausibly filling gaps between concepts that are underrepresented in its training data, called confabulation. For the record, Llama 3 gives a more accurate answer.

Benj Edwards

As DuckDuckGo itself states in its privacy policy, “By its very nature, AI Chat generates text with limited information. As such, Outputs that appear complete or accurate because of their detail or specificity may not be. For example, AI Chat cannot dynamically retrieve information and so Outputs may be outdated. You should not rely on any Output without verifying its contents using other sources, especially for professional advice (like medical, financial, or legal advice).”

So, have fun talking to bots, but tread carefully. They’ll easily “lie” to your face because they don’t understand what they are saying and are tuned to output statistically plausible information, not factual references.


Nvidia jumps ahead of itself and reveals next-gen “Rubin” AI chips in keynote tease

Swing beat —

“I’m not sure yet whether I’m going to regret this,” says CEO Jensen Huang at Computex 2024.

Nvidia’s CEO Jensen Huang delivers his keynote speech ahead of Computex 2024 in Taipei on June 2, 2024.

On Sunday, Nvidia CEO Jensen Huang reached beyond Blackwell and revealed the company’s next-generation AI-accelerating GPU platform during his keynote at Computex 2024 in Taiwan. Huang also detailed plans for an annual tick-tock-style upgrade cycle of its AI acceleration platforms, mentioning an upcoming Blackwell Ultra chip slated for 2025 and a subsequent platform called “Rubin” set for 2026.

Nvidia’s data center GPUs currently power a large majority of cloud-based AI models, such as ChatGPT, in both development (training) and deployment (inference) phases, and investors are keeping a close watch on the company, with expectations to keep that run going.

During the keynote, Huang seemed somewhat hesitant to make the Rubin announcement, perhaps wary of invoking the so-called Osborne effect, whereby a company’s premature announcement of the next iteration of a tech product eats into the current iteration’s sales. “This is the very first time that this next click has been made,” Huang said, holding up his presentation remote just before the Rubin announcement. “And I’m not sure yet whether I’m going to regret this or not.”


The Rubin AI platform, expected in 2026, will use HBM4 (a new form of high-bandwidth memory) and NVLink 6 Switch, operating at 3,600GBps. Following that launch, Nvidia will release a tick-tock iteration called “Rubin Ultra.” While Huang did not provide extensive specifications for the upcoming products, he promised cost and energy savings related to the new chipsets.

During the keynote, Huang also introduced a new ARM-based CPU called “Vera,” which will be featured on a new accelerator board called “Vera Rubin,” alongside one of the Rubin GPUs.

Much like Nvidia’s Grace Hopper architecture, which combines a “Grace” CPU and a “Hopper” GPU to pay tribute to the pioneering computer scientist of the same name, Vera Rubin refers to Vera Florence Cooper Rubin (1928–2016), an American astronomer who made discoveries in the field of deep space astronomy. She is best known for her pioneering work on galaxy rotation rates, which provided strong evidence for the existence of dark matter.

A calculated risk

Nvidia CEO Jensen Huang reveals the “Rubin” AI platform for the first time during his keynote at Computex 2024 on June 2, 2024.

Nvidia’s reveal of Rubin is not a surprise in the sense that most big tech companies are continuously working on follow-up products well in advance of release, but it’s notable because it comes just three months after the company revealed Blackwell, which is barely out of the gate and not yet widely shipping.

At the moment, the company seems to be comfortable leapfrogging itself with new announcements and catching up later; Nvidia just announced that its GH200 Grace Hopper “Superchip,” unveiled one year ago at Computex 2023, is now in full production.

With Nvidia stock rising and the company possessing an estimated 70–95 percent of the data center GPU market share, the Rubin reveal is a calculated risk that seems to come from a place of confidence. That confidence could turn out to be misplaced if a so-called “AI bubble” pops or if Nvidia misjudges the capabilities of its competitors. The announcement may also stem from pressure to continue Nvidia’s astronomical growth in market cap with nonstop promises of improving technology.

Accordingly, Huang has been eager to showcase the company’s plans to continue pushing silicon fabrication tech to its limits and widely broadcast that Nvidia plans to keep releasing new AI chips at a steady cadence.

“Our company has a one-year rhythm. Our basic philosophy is very simple: build the entire data center scale, disaggregate and sell to you parts on a one-year rhythm, and we push everything to technology limits,” Huang said during Sunday’s Computex keynote.

Despite Nvidia’s recent market performance, the company’s run may not continue indefinitely. With ample money pouring into the data center AI space, Nvidia isn’t alone in developing accelerator chips. Competitors like AMD (with the Instinct series) and Intel (with Gaudi 3) also want to win a slice of the data center GPU market away from Nvidia’s current command of the AI-accelerator space. And OpenAI’s Sam Altman is trying to encourage diversified production of GPU hardware that will power the company’s next generation of AI models in the years ahead.
