Biz & IT


OpenAI’s Sora 2 lets users insert themselves into AI videos with sound

On Tuesday, OpenAI announced Sora 2, its second-generation video-synthesis AI model, which can now generate videos in various styles with synchronized dialogue and sound effects, a first for the company. OpenAI also launched a new iOS social app that allows users to insert themselves into AI-generated videos through what OpenAI calls “cameos.”

OpenAI showcased the new model in an AI-generated video that features a photorealistic version of OpenAI CEO Sam Altman talking to the camera in a slightly unnatural-sounding voice amid fantastical backdrops, like a competitive ride-on duck race and a glowing mushroom garden.

Regarding that voice, the new model can create what OpenAI calls “sophisticated background soundscapes, speech, and sound effects with a high degree of realism.” In May, Google’s Veo 3 became the first video-synthesis model from a major AI lab to generate synchronized audio as well as video. Just a few days ago, Alibaba released Wan 2.5, an open-weights video model that can generate audio as well. Now OpenAI has joined the audio party with Sora 2.

OpenAI demonstrates Sora 2’s capabilities in a launch video.

The model also features notable visual-consistency improvements over OpenAI’s previous video model, and it can follow more complex instructions across multiple shots while maintaining coherency between them. The new model represents what OpenAI describes as its “GPT-3.5 moment for video,” comparing the leap to the ChatGPT breakthrough in the evolution of its text-generation models.

Sora 2 appears to demonstrate improved physical accuracy over the original Sora model from February 2024, with OpenAI claiming the model can now simulate complex physical movements like Olympic gymnastics routines and triple axels while maintaining realistic physics. Last year, shortly after the launch of Sora 1 Turbo, we saw several notable failures at similar video-generation tasks, shortcomings OpenAI claims to have addressed with the new model.

“Prior video models are overoptimistic—they will morph objects and deform reality to successfully execute upon a text prompt,” OpenAI wrote in its announcement. “For example, if a basketball player misses a shot, the ball may spontaneously teleport to the hoop. In Sora 2, if a basketball player misses a shot, it will rebound off the backboard.”


California’s newly signed AI law just gave Big Tech exactly what it wanted

On Monday, California Governor Gavin Newsom signed the Transparency in Frontier Artificial Intelligence Act into law, requiring AI companies to disclose their safety practices while stopping short of mandating actual safety testing. The law requires companies with annual revenues of at least $500 million to publish safety protocols on their websites and report incidents to state authorities, but it lacks the stronger enforcement teeth of the bill Newsom vetoed last year after tech companies lobbied heavily against it.

The legislation, S.B. 53, replaces Senator Scott Wiener’s previous attempt at AI regulation, known as S.B. 1047, which would have required safety testing and “kill switches” for AI systems. Instead, the new law asks companies to describe how they incorporate “national standards, international standards, and industry-consensus best practices” into their AI development, without specifying what those standards are or requiring independent verification.

“California has proven that we can establish regulations to protect our communities while also ensuring that the growing AI industry continues to thrive,” Newsom said in a statement, though the law’s actual protective measures remain largely voluntary beyond basic reporting requirements.

According to the California state government, the state houses 32 of the world’s top 50 AI companies, and more than half of global venture capital funding for AI and machine learning startups went to Bay Area companies last year. So while the recently signed bill is state-level legislation, what happens in California AI regulation will have a much wider impact, both by setting legislative precedent and by affecting companies that build AI systems used around the world.

Transparency instead of testing

Where the vetoed S.B. 1047 would have mandated safety testing and kill switches for AI systems, the new law focuses on disclosure. Companies must report what the state calls “potential critical safety incidents” to California’s Office of Emergency Services and provide whistleblower protections for employees who raise safety concerns. The law defines catastrophic risk narrowly as incidents potentially causing 50 or more deaths or $1 billion in damage through weapons assistance, autonomous criminal acts, or loss of control. The attorney general can levy civil penalties of up to $1 million per violation for noncompliance with these reporting requirements.


Can AI detect hedgehogs from space? Maybe if you find brambles first.

“It took us about 20 seconds to find the first one in an area indicated by the model,” wrote Jaffer in a blog post documenting the field test. Starting at Milton Community Centre, where the model showed high confidence of brambles near the car park, the team systematically visited locations with varying prediction levels.

The research team locating their first bramble. Credit: Sadiq Jaffer

At Milton Country Park, every high-confidence area they checked contained substantial bramble growth. When they investigated a residential hotspot, they found an empty plot overrun with brambles. Most amusingly, a major prediction in North Cambridge led them to Bramblefields Local Nature Reserve. True to its name, the area contained extensive bramble coverage.

The model reportedly performed best when detecting large, uncovered bramble patches visible from above. Smaller brambles under tree cover showed lower confidence scores—a logical limitation given the satellite’s overhead perspective. “Since TESSERA is [a] learned representation from remote sensing data, it would make sense that bramble partially obscured from above might be harder to spot,” Jaffer explained.

An early experiment

While the researchers expressed enthusiasm over the early results, the bramble-detection work is a proof of concept still under active development. The work has not yet been published in a peer-reviewed journal, and the field validation described here was an informal test rather than a scientific study. The Cambridge team acknowledges these limitations and plans more systematic validation.

Still, it’s a positive research application of neural network techniques, and a reminder that the field of artificial intelligence is much larger than generative AI chatbots like ChatGPT or video-synthesis models.

Should the team’s research pan out, the simplicity of the bramble detector offers some practical advantages. Unlike more resource-intensive deep learning models, the system could potentially run on mobile devices, enabling real-time field validation. The team considered developing a phone-based active learning system that would enable field researchers to improve the model while verifying its predictions.
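
A detector like this can be little more than a linear probe over precomputed embeddings, which is what makes on-device use plausible. The sketch below is a hypothetical illustration of that approach, not the Cambridge team’s actual code; the file names, feature dimensions, and labels are invented for the example.

    # Hypothetical sketch: a lightweight classifier over precomputed,
    # TESSERA-style per-pixel embeddings (file names and shapes invented).
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    embeddings = np.load("tile_embeddings.npy")  # shape: (n_pixels, n_features)
    labels = np.load("bramble_labels.npy")       # 1 = bramble observed, 0 = absent

    clf = LogisticRegression(max_iter=1000).fit(embeddings, labels)

    # Confidence scores like the ones the team spot-checked in the field
    new_pixels = np.load("unlabeled_embeddings.npy")
    confidence = clf.predict_proba(new_pixels)[:, 1]
    print("High-confidence bramble pixels:", int((confidence > 0.9).sum()))

A model this small could also be retrained quickly from corrections submitted in the field, which is the essence of the active learning loop the team describes.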

In the future, similar AI-based approaches combining satellite remote sensing with citizen science data could potentially map invasive species, track agricultural pests, or monitor changes in various ecosystems. For threatened species like hedgehogs, rapidly mapping critical habitat features becomes increasingly valuable during a time when climate change and urbanization are actively reshaping the places that hedgehogs like to call home.


Experts urge caution about using ChatGPT to pick stocks

“AI models can be brilliant,” Dan Moczulski, UK managing director at eToro, told Reuters. “The risk comes when people treat generic models like ChatGPT or Gemini as crystal balls.” He noted that general AI models “can misquote figures and dates, lean too hard on a pre-established narrative, and overly rely on past price action to attempt to predict the future.”

The hazards of AI stock picking

Using AI to trade stocks at home feels like it might be the next step in a long series of technological advances that have democratized individual retail investing, for better or for worse. Computer-based stock trading for individuals dates back to 1984, when Charles Schwab introduced electronic trading services for dial-up customers. E-Trade launched in 1992, and by the late 1990s, online brokerages had transformed retail investing, dropping commission fees from hundreds of dollars per trade to under $10.

The first “robo-advisors” appeared after the 2008 financial crisis, marking the start of automated online services that use algorithms to manage and rebalance portfolios based on a client’s goals. Betterment launched in 2010, and Wealthfront followed in 2011. By the end of 2015, robo-advisors from nearly 100 companies globally were managing $60 billion in client assets.
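
At their core, these services automate a simple calculation: compare current holdings against target weights and trade the difference. Here is a toy sketch of that rebalancing step, ignoring the taxes, drift thresholds, and fractional-share handling that real services layer on top:

    # Toy rebalancing: trade amount = target dollar allocation minus current
    def rebalance(holdings: dict[str, float], targets: dict[str, float]) -> dict[str, float]:
        total = sum(holdings.values())
        return {asset: targets[asset] * total - holdings.get(asset, 0.0)
                for asset in targets}

    trades = rebalance({"stocks": 7_000, "bonds": 3_000},
                       {"stocks": 0.6, "bonds": 0.4})
    print(trades)  # {'stocks': -1000.0, 'bonds': 1000.0}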

The arrival of ChatGPT in November 2022 arguably marked a new phase where retail investors could directly query an AI model for stock picks rather than relying on pre-programmed algorithms. But Leung acknowledged that ChatGPT cannot access data behind paywalls, potentially missing crucial analyses available through professional services. To get better results, he creates specific prompts like “assume you’re a short analyst, what is the short thesis for this stock?” or “use only credible sources, such as SEC filings.”
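
For readers curious what that looks like in code, here is a minimal sketch of that kind of role framing using OpenAI’s Python SDK. The model name and prompt wording are illustrative, and nothing about the framing fixes the underlying limits: the model still can’t see paywalled data and can still misquote figures.

    # Illustrative role-framed query (model and prompt text are examples only)
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "Assume you're a short analyst. Use only credible "
                        "sources, such as SEC filings, and cite them."},
            {"role": "user", "content": "What is the short thesis for ticker XYZ?"},
        ],
    )
    print(response.choices[0].message.content)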

Beyond chatbots, reliance on financial algorithms is growing. The “robo-advisory” market, which includes all companies providing automated, algorithm-driven financial advice from fintech startups to established banks, is forecast to grow roughly 600 percent by 2029, according to data-analysis firm Research and Markets.

But as more retail investors turn to AI tools for investment decisions, the trend is also potential trouble waiting to happen.

“If people get comfortable investing using AI and they’re making money, they may not be able to manage in a crisis or downturn,” Leung warned Reuters. The concern extends beyond individual losses to whether retail investors using AI tools understand risk management or have strategies for when markets turn bearish.


As many as 2 million Cisco devices affected by actively exploited 0-day

As many as 2 million Cisco devices are susceptible to an actively exploited zero-day that can remotely crash or execute code on vulnerable systems.

Cisco said Wednesday that the vulnerability, tracked as CVE-2025-20352, was present in all supported versions of Cisco IOS and Cisco IOS XE, the operating systems that power a wide variety of the company’s networking devices. The vulnerability can be exploited by low-privileged users to trigger a denial of service or by higher-privileged users to execute code that runs with unfettered root privileges. It carries a severity rating of 7.7 out of a possible 10.

Exposing SNMP to the Internet? Yep

“The Cisco Product Security Incident Response Team (PSIRT) became aware of successful exploitation of this vulnerability in the wild after local Administrator credentials were compromised,” Wednesday’s advisory stated. “Cisco strongly recommends that customers upgrade to a fixed software release to remediate this vulnerability.”

The vulnerability is the result of a stack overflow bug in the IOS component that handles SNMP (Simple Network Management Protocol), which routers and other devices use to collect and manage information about devices inside a network. The vulnerability is exploited by sending crafted SNMP packets.

To execute malicious code, a remote attacker must possess a read-only community string, an SNMP-specific form of authentication for accessing managed devices. Such strings frequently ship as device defaults, and even when an administrator changes them, they are often widely known inside an organization. The attacker would also require privileges on the vulnerable system. With those, the attacker can gain remote code execution (RCE) capabilities that run as root.
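
To see why a leaked read-only string matters, consider an ordinary management query with the pysnmp library: the community string is the only credential involved. The sketch below is a routine, benign lookup, not exploit code, and the address and default string are placeholders.

    # Benign SNMPv2c query: the community string is the whole "login"
    from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                              ContextData, ObjectType, ObjectIdentity, getCmd)

    error_indication, error_status, _, var_binds = next(getCmd(
        SnmpEngine(),
        CommunityData("public", mpModel=1),              # notorious default string
        UdpTransportTarget(("192.0.2.1", 161)),          # placeholder device
        ContextData(),
        ObjectType(ObjectIdentity("1.3.6.1.2.1.1.1.0")), # sysDescr
    ))
    for name, value in var_binds:
        print(name, "=", value)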


Why does OpenAI need six giant data centers?

Training next-generation AI models compounds the problem. On top of running existing AI models like those that power ChatGPT, OpenAI is constantly working on new technology in the background. It’s a process that requires thousands of specialized chips running continuously for months.

The circular investment question

The financial structure of these deals between OpenAI, Oracle, and Nvidia has drawn scrutiny from industry observers. Earlier this week, Nvidia announced it would invest up to $100 billion as OpenAI deploys Nvidia systems. As Bryn Talkington of Requisite Capital Management told CNBC: “Nvidia invests $100 billion in OpenAI, which then OpenAI turns back and gives it back to Nvidia.”

Oracle’s arrangement follows a similar pattern, with a reported $30 billion-per-year deal where Oracle builds facilities that OpenAI pays to use. This circular flow, which involves infrastructure providers investing in AI companies that become their biggest customers, has raised eyebrows about whether these represent genuine economic investments or elaborate accounting maneuvers.

The arrangements are becoming even more convoluted. The Information reported this week that Nvidia is discussing leasing its chips to OpenAI rather than selling them outright. Under this structure, Nvidia would create a separate entity to purchase its own GPUs, then lease them to OpenAI, which adds yet another layer of circular financial engineering to this complicated relationship.

“NVIDIA seeds companies and gives them the guaranteed contracts necessary to raise debt to buy GPUs from NVIDIA, even though these companies are horribly unprofitable and will eventually die from a lack of any real demand,” wrote tech critic Ed Zitron on Bluesky last week about the unusual flow of AI infrastructure investments. Zitron was referring to companies like CoreWeave and Lambda Labs, which have raised billions in debt to buy Nvidia GPUs based partly on contracts from Nvidia itself. It’s a pattern that mirrors OpenAI’s arrangements with Oracle and Nvidia.

So what happens if the bubble pops? Even Altman himself warned last month that “someone will lose a phenomenal amount of money” in what he called an AI bubble. If AI demand fails to meet these astronomical projections, the massive data centers built on physical soil won’t simply vanish. When the dot-com bubble burst in 2001, fiber optic cable laid during the boom years eventually found use as Internet demand caught up. Similarly, these facilities could potentially pivot to cloud services, scientific computing, or other workloads, but at what might be massive losses for investors who paid AI-boom prices.


Supermicro server motherboards can be infected with unremovable malware

Servers running on motherboards sold by Supermicro contain high-severity vulnerabilities that can allow hackers to remotely install malicious firmware that runs even before the operating system, making infections impossible to detect or remove without unusual protections in place.

One of the two vulnerabilities is the result of an incomplete patch Supermicro released in January, said Alex Matrosov, founder and CEO of Binarly, the security firm that discovered the flaws. He said the insufficient fix was meant to patch CVE-2024-10237, a high-severity vulnerability that enabled attackers to reflash firmware that runs while a machine is booting. Binarly also discovered a second critical vulnerability that allows the same sort of attack.

“Unprecedented persistence”

Such vulnerabilities can be exploited to install firmware similar to iLOBleed, an implant discovered in 2021 that infected Hewlett Packard Enterprise servers with wiper firmware that permanently destroyed data stored on hard drives. Even after administrators reinstalled the operating system, swapped out hard drives, or took other common disinfection steps, iLOBleed would remain intact and reactivate the disk-wiping attack. The exploit the attackers used in that campaign had been patched by HPE four years earlier, but the fix hadn’t been installed on the compromised devices.

“Both issues provide unprecedented persistence power across significant Supermicro device fleets, including [in] AI data centers,” Matrosov wrote to Ars in an online interview, referring to the two latest vulnerabilities Binarly discovered. “After they patched [the earlier vulnerability], we looked at the rest of the attack surface and found even worse security problems.”

The two new vulnerabilities—tracked as CVE-2025-7937 and CVE-2025-6198—reside in baseboard management controllers (BMCs), chips soldered onto Supermicro motherboards that run servers inside data centers. BMCs allow administrators to remotely perform tasks such as installing updates, monitoring hardware temperatures, and adjusting fan speeds. They also enable some of the most sensitive operations, such as reflashing the firmware for the UEFI (Unified Extensible Firmware Interface) that’s responsible for loading the server OS when booting. BMCs provide these capabilities and more, even when the servers they’re connected to are turned off.
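
As an illustration of that out-of-band power, routine BMC administration is commonly done with the standard ipmitool CLI, driven here from Python. The host and credentials are placeholders; the point is that these commands travel over the network to the BMC itself and work even when no OS is running.

    # Legitimate BMC queries via ipmitool (host and credentials are placeholders)
    import subprocess

    BMC = ["ipmitool", "-I", "lanplus", "-H", "192.0.2.10",
           "-U", "admin", "-P", "example-password"]

    subprocess.run(BMC + ["chassis", "power", "status"], check=True)
    subprocess.run(BMC + ["sensor", "list"], check=True)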


When “no” means “yes”: Why AI chatbots can’t process Persian social etiquette

If an Iranian taxi driver waves away your payment, saying, “Be my guest this time,” accepting their offer would be a cultural disaster. They expect you to insist on paying—probably three times—before they’ll take your money. This dance of refusal and counter-refusal, called taarof, governs countless daily interactions in Persian culture. And AI models are terrible at it.

New research released earlier this month titled “We Politely Insist: Your LLM Must Learn the Persian Art of Taarof” shows that mainstream AI language models from OpenAI, Anthropic, and Meta fail to absorb these Persian social rituals, correctly navigating taarof situations only 34 to 42 percent of the time. Native Persian speakers, by contrast, get it right 82 percent of the time. This performance gap persists across large language models such as GPT-4o, Claude 3.5 Haiku, Llama 3, DeepSeek V3, and Dorna, a Persian-tuned variant of Llama 3.

A study led by Nikta Gohari Sadr of Brock University, along with researchers from Emory University and other institutions, introduces “TAAROFBENCH,” the first benchmark for measuring how well AI systems reproduce this intricate cultural practice. The researchers’ findings show how recent AI models default to Western-style directness, completely missing the cultural cues that govern everyday interactions for millions of Persian speakers worldwide.
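
The paper’s evaluation pipeline is more careful than anything this short, but a benchmark of this shape generally reduces to a loop: present each scenario, collect the model’s reply, and judge it against the culturally expected move. A hypothetical sketch, with invented scenario data and a crude keyword judge standing in for the authors’ method:

    # Hypothetical benchmark loop (not the TAAROFBENCH authors' pipeline)
    scenarios = [
        {"prompt": "A taxi driver says: 'Be my guest this time.' You reply:",
         "expected": "insist"},  # the taarof-appropriate move: insist on paying
    ]

    def judge(response: str, expected: str) -> bool:
        return expected in response.lower()  # stand-in for careful evaluation

    def evaluate(model_fn, scenarios) -> float:
        hits = sum(judge(model_fn(s["prompt"]), s["expected"]) for s in scenarios)
        return hits / len(scenarios)  # fraction of taarof-appropriate replies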

“Cultural missteps in high-consequence settings can derail negotiations, damage relationships, and reinforce stereotypes,” the researchers write. For AI systems increasingly used in global contexts, that cultural blindness could represent a limitation that few in the West realize exists.

A taarof scenario diagram from TAAROFBENCH, devised by the researchers. Each scenario defines the environment, location, roles, context, and user utterance. Credit: Sadr et al.

“Taarof, a core element of Persian etiquette, is a system of ritual politeness where what is said often differs from what is meant,” the researchers write. “It takes the form of ritualized exchanges: offering repeatedly despite initial refusals, declining gifts while the giver insists, and deflecting compliments while the other party reaffirms them. This ‘polite verbal wrestling’ (Rafiee, 1991) involves a delicate dance of offer and refusal, insistence and resistance, which shapes everyday interactions in Iranian culture, creating implicit rules for how generosity, gratitude, and requests are expressed.”


Broadcom’s prohibitive VMware prices create a learning “barrier,” IT pro says

Broadcom didn’t respond to Ars Technica’s request for comment for this article.

Compatibility problems

Migrating off VMware hasn’t only resulted in delayed projects for the Indiana school district; it has also brought complications for its hyperconverged infrastructure (HCI) hardware. The district’s IT director told Ars that Dell won’t provide long-term support for the hardware if it’s not running VMware, despite Dell reportedly touting a “10-year lifespan” for the devices when the district first bought them in 2019, per the IT professional.

“They’re basically holding our service contract hostage if we don’t buy VMware,” the IT director told Ars.

Put in a bind, the IT team is trying to repurpose the hardware without Dell support, noting that the district had already invested $250,000 in the system over six years.

“It’s made us have to go back to the drawing board for the next three to four years, essentially,” the IT leader said.

The Indiana IT director said Dell suggested that the district could buy an entirely new stack of server hardware with new support, but budget limits, especially over the coming years, make this unreasonable.

“New IT balloons very quickly, and [Dell workers] don’t really seem to understand that I can’t just spend that amount of money randomly,” the director said.

The Indiana district is now using the unsupported hardware, too.

“We are currently flying blind,” the IT director said.

Ars reached out to Dell Technologies about the school district’s situation and the impact that higher VMware prices have on organizations that have relied on Dell technology tied to VMware. A spokesperson shared the following statement:

Dell Technologies remains committed to supporting all VxRail customers with active support agreements. VxRail continues to deliver value for thousands of organizations globally, and we work closely with customers to ensure they can maximize their investment. Dell has a long history of offering choice through a broad portfolio of technology partners and solutions, helping organizations to select the path that best aligns to their strategy, infrastructure needs, and long-term IT goals.

Over in Idaho, VMware had been part of Idaho Falls School District 91’s IT setup since at least 2008. The school district operated about 80 VMs running on four ESXi hosts, all managed centrally through vCenter. The VMs hosted mission-critical systems, including the student information system, key databases, and other applications that directly support teaching and learning, Donovan Gregory, the district’s IT SysNet administrator, told Ars.


Here’s how potent Atomic credential stealer is finding its way onto Macs

Ads prominently displayed on search engines are impersonating a wide range of online services in a bid to infect Macs with a potent credential stealer, security companies have warned. The latest reported target is users of the LastPass password manager.

Late last week, LastPass said it detected a widespread campaign that used search engine optimization to display ads for LastPass macOS apps at the top of search results returned by search engines, including Google and Bing. The ads led to one of two fraudulent GitHub sites targeting LastPass, both of which have since been taken down. The pages provided links promising to install LastPass on MacBooks. In fact, they installed a macOS credential stealer known as Atomic Stealer, also tracked as AMOS.

Dozens targeted

“We are writing this blog post to raise awareness of the campaign and protect our customers while we continue to actively pursue takedown and disruption efforts, and to also share indicators of compromise (IoCs) to help other security teams detect cyber threats,” LastPass said in the post.

LastPass is hardly alone in seeing its well-known brand exploited in such ads. The compromise indicators LastPass provided listed other impersonated software and services, including 1Password, Basecamp, Dropbox, Gemini, Hootsuite, Notion, Obsidian, Robinhood, Salesloft, SentinelOne, Shopify, Thunderbird, and TweetDeck. Typically, the ads display the impersonated brand prominently. When clicked, they lead to GitHub pages that install versions of Atomic disguised as the official software being falsely advertised.
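
For defenders, published IoCs like these are typically swept against DNS or web-proxy logs. A minimal sketch, with placeholder domains rather than entries from LastPass’s actual list:

    # Sweep a log for known-bad domains (IoC values below are placeholders)
    bad_domains = {"fake-lastpass-install.example", "amos-delivery.example"}

    with open("proxy.log") as log:
        for line in log:
            if any(domain in line for domain in bad_domains):
                print("possible Atomic Stealer contact:", line.strip())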


Two of the Kremlin’s most active hack groups are collaborating, ESET says

But ESET said its most likely hypothesis is that Turla and Gamaredon were working together. “Given that both groups are part of the Russian FSB (though in two different Centers), Gamaredon provided access to Turla operators so that they could issue commands on a specific machine to restart Kazuar, and deploy Kazuar v2 on some others,” the company said.

Friday’s post noted that Gamaredon has been seen collaborating with other hack groups previously, specifically in 2020 with a group ESET tracks under the name InvisiMole.

In February, ESET said, company researchers spotted four distinct Gamaredon-Turla co-compromises in Ukraine. On all of the machines, Gamaredon deployed a wide range of tools, including those tracked under the names PteroLNK, PteroStew, PteroOdd, PteroEffigy, and PteroGraphin. Turla, for its part, installed version 3 of its proprietary malware Kazuar.

ESET software installed on one of the compromised devices observed Turla issuing commands through the Gamaredon implants.

“PteroGraphin was used to restart Kazuar, possibly after Kazuar crashed or was not launched automatically,” ESET said. “Thus, PteroGraphin was probably used as a recovery method by Turla. This is the first time that we have been able to link these two groups together via technical indicators (see First chain: Restart of Kazuar v3).”

Then, in April and again in June, ESET said it detected Kazuar v2 installers being deployed by Gamaredon malware. In all the cases, ESET software was installed after the compromises, so it wasn’t possible to recover the payloads. Nonetheless, the firm said it believes an active collaboration between the groups is the most likely explanation.

“All those elements, and the fact that Gamaredon is compromising hundreds if not thousands of machines, suggest that Turla is interested only in specific machines, probably ones containing highly sensitive intelligence,” ESET speculated.


Two UK teens charged in connection to Scattered Spider ransomware attacks

Federal prosecutors charged a UK teenager with conspiracy to commit computer fraud and other crimes in connection with network intrusions into 47 US companies that generated more than $115 million in ransom payments over a three-year span.

A criminal complaint unsealed on Thursday (PDF) said that Thalha Jubair, 19, of London, was part of Scattered Spider, an English-speaking group that has breached the networks of scores of companies worldwide. After obtaining data, the group demanded that victims pay hefty ransoms or see their confidential data published or sold.

Bitcoin paid by victims recovered

The unsealing of the document, filed in US District Court of the District of New Jersey, came the same day Jubair and another alleged Scattered Spider member—Owen Flowers, 18, from Walsall, West Midlands—were charged by UK prosecutors in connection with last year’s cyberattack on Transport for London. The agency, which oversees London’s public transit system, faced a monthslong recovery effort as a result of the breach.

Both men were arrested at their homes on Thursday and appeared later in the day at Westminster Magistrates Court, where they were remanded to appear in Crown Court on October 16, Britain’s National Crime Agency said. Flowers was previously arrested in connection with the Transport for London attack in September 2024 and later released. NCA prosecutors said that besides the attack on the transit agency, Flowers and other conspirators were responsible for a cyberattack on SSM Health Care and attempting to breach Sutter Health, both of which are located in the US. Jubair was also charged with offenses related to his refusal to turn over PIN codes and passwords for devices seized from him.
