A screenshot of Taylor Swift’s Kamala Harris Instagram post, captured on September 11, 2024.
On Tuesday night, Taylor Swift endorsed Vice President Kamala Harris for US President on Instagram, citing concerns over AI-generated deepfakes as a key motivator. The artist’s concern is well-founded in an era when AI synthesis models can easily create convincing fake images and videos.
“Recently I was made aware that AI of ‘me’ falsely endorsing Donald Trump’s presidential run was posted to his site,” she wrote in her Instagram post. “It really conjured up my fears around AI, and the dangers of spreading misinformation. It brought me to the conclusion that I need to be very transparent about my actual plans for this election as a voter. The simplest way to combat misinformation is with the truth.”
In August 2024, former President Donald Trump posted AI-generated images on Truth Social falsely suggesting Swift endorsed him, including a manipulated photo depicting Swift as Uncle Sam with text promoting Trump. The incident sparked Swift’s fears about the spread of misinformation through AI.
This isn’t the first time Swift and generative AI have appeared together in the news. In February, we reported that a flood of explicit AI-generated images of Swift originated from a 4chan message board where users took part in daily challenges to bypass AI image generator filters.
On Friday, Roblox announced plans to introduce an open source generative AI tool that will allow game creators to build 3D environments and objects using text prompts, reports MIT Tech Review. The feature, which is still under development, may streamline the process of creating game worlds on the popular online platform, potentially opening up more aspects of game creation to those without extensive 3D design skills.
Roblox has not announced a specific launch date for the new AI tool, which is based on what it calls a “3D foundational model.” The company shared a demo video of the tool in which a user types “create a race track,” then “make the scenery a desert,” and the AI model generates a corresponding track in the requested environment.
The system will also reportedly let users make modifications, such as changing the time of day or swapping out entire landscapes, and Roblox says the multimodal AI model will ultimately accept video and 3D prompts, not just text.
A video showing Roblox’s generative AI model in action.
The 3D environment generator is part of Roblox’s broader AI integration strategy. The company reportedly uses around 250 AI models across its platform, including one that monitors voice chat in real time to enforce content moderation, which is not always popular with players.
Next-token prediction in 3D
Roblox’s 3D foundational model approach involves a custom next-token prediction model—a foundation not unlike the large language models (LLMs) that power ChatGPT. Tokens are fragments of text data that LLMs use to process information. Roblox’s system “tokenizes” 3D blocks by treating each block as a numerical unit, which allows the AI model to predict the most likely next structural 3D element in a sequence. In aggregate, the technique can build entire objects or scenery.
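As a toy illustration of that idea, here is a minimal next-token predictor over "tokenized" blocks in Python. Everything here is invented for illustration: the block vocabulary, the bigram statistics, and all names are stand-ins, and Roblox's actual model is far larger and more sophisticated.

```python
from collections import Counter, defaultdict

# Toy vocabulary: each 3D block type gets an integer token ID.
BLOCKS = {"grass": 0, "dirt": 1, "stone": 2, "water": 3}

def train_bigram(sequences):
    """Count which block token tends to follow which in flattened build orders."""
    counts = defaultdict(Counter)
    for seq in sequences:
        for prev, nxt in zip(seq, seq[1:]):
            counts[prev][nxt] += 1
    return counts

def predict_next(counts, token):
    """Return the most likely next block token, as a next-token predictor would."""
    if token not in counts:
        return None
    return counts[token].most_common(1)[0][0]

# Example "builds": grass tends to be followed by dirt, dirt by stone.
builds = [[0, 1, 2, 2], [0, 1, 1, 2], [0, 1, 2, 3]]
model = train_bigram(builds)
print(predict_next(model, BLOCKS["grass"]))  # prints 1 (dirt)
```

Repeating the prediction step, with each predicted block appended to the sequence, is what lets such a model assemble whole objects and scenes "in aggregate."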
Anupam Singh, vice president of AI and growth engineering at Roblox, told MIT Tech Review about the challenges in developing the technology. “Finding high-quality 3D information is difficult,” Singh said. “Even if you get all the data sets that you would think of, being able to predict the next cube requires it to have literally three dimensions, X, Y, and Z.”
According to Singh, lack of 3D training data can create glitches in the results, like a dog with too many legs. To get around this, Roblox is using a second AI model as a kind of visual moderator to catch the mistakes and reject them until the proper 3D element appears. Through iteration and trial and error, the first AI model can create the proper 3D structure.
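The generate-and-reject loop Singh describes can be sketched as follows. This is a purely hypothetical stand-in: the "generator" and "validator" here are trivial functions, not Roblox's models.

```python
import random

def generate_dog(rng):
    """Stand-in for the generative model: sometimes outputs the wrong leg count."""
    return {"kind": "dog", "legs": rng.choice([3, 4, 4, 5])}

def looks_valid(obj):
    """Stand-in for the second, moderating model that rejects malformed outputs."""
    return obj["kind"] == "dog" and obj["legs"] == 4

def generate_until_valid(rng, max_tries=50):
    """Iterate: generate, validate, retry until the output passes the check."""
    for _ in range(max_tries):
        candidate = generate_dog(rng)
        if looks_valid(candidate):
            return candidate
    raise RuntimeError("no valid output within budget")

print(generate_until_valid(random.Random(0)))
```

The design trade-off is cost for quality: each rejected candidate burns compute, but the validator lets a model trained on sparse 3D data still converge on well-formed structures.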
Notably, Roblox plans to open-source its 3D foundational model, allowing developers and even competitors to use and modify it. But it’s not just about giving back—open source can be a two-way street. Choosing an open source approach could also let the company benefit from outside AI developers who contribute to the project and improve it over time.
The ongoing quest to capture gaming revenue
News of the new 3D foundational model arrived at the 10th annual Roblox Developers Conference in San Jose, California, where the company also announced an ambitious goal to capture 10 percent of global gaming content revenue through the Roblox ecosystem, and the introduction of “Party,” a new feature designed to facilitate easier group play among friends.
In March 2023, we detailed Roblox’s early foray into AI-powered game development tools, as revealed at the Game Developers Conference. The tools included a Code Assist beta for generating simple Lua functions from text descriptions, and a Material Generator for creating 2D surfaces with associated texture maps.
At the time, Roblox Studio head Stef Corazza described these as initial steps toward “democratizing” game creation, with plans for AI systems that are now coming to fruition. The 2023 tools focused on discrete tasks like code snippets and 2D textures, laying the groundwork for the more comprehensive 3D foundational model announced at this year’s Roblox Developers Conference.
The upcoming AI tool could potentially streamline content creation on the platform, possibly accelerating Roblox’s path toward its revenue goal. “We see a powerful future where Roblox experiences will have extensive generative AI capabilities to power real-time creation integrated with gameplay,” Roblox said in a statement. “We’ll provide these capabilities in a resource-efficient way, so we can make them available to everyone on the platform.”
“Hmm, no signal here. I’m trying to figure it out, but nothing comes to mind …”
Getty Images
One issue in getting office buildings networked that you don’t typically face at home is concrete—and lots of it. Concrete walls are an average of 8 inches thick inside most commercial real estate.
Keeping a network running through them is not merely a matter of running cord. Not everybody has the knowledge or tools to punch through that kind of wall. Even if they do, you can’t just put a hole in something that might be load-bearing or part of a fire control system without imaging, permits, and contractors. The radio bands that can penetrate these walls, like those used by 3G, are being phased out, and the bands that provide enough throughput for modern systems, like 5G, can’t make it through.
That’s what WaveCore, from Airvine Scientific, aims to fix, and I can’t help but find it fascinating after originally seeing it on The Register. The company had previously taken on lesser solid obstructions, like plaster and thick glass, with its WaveTunnel. Two WaveCore units on either side of a wall (or on different floors) can push through a stated 12 inches of concrete. In its in-house testing, Airvine reports pushing just under 4Gbps through 12 inches of garage concrete, and the signal can bend around corners, even 90 degrees. Your particular cement and aggregate combinations may vary, of course.
The WaveCore device, installed in a garage space during Airvine Scientific’s testing.
Concept drawing of how WaveCore punches through concrete walls (kind of).
Airvine Scientific
The spec sheet shows that a 6 GHz radio is the part that, through “beam steering,” blasts through concrete, with a 2.4 GHz radio for control functions. Power comes via PoE or a barrel connector, and RJ45 Ethernet ports are offered at 1, 2.5, 5, and 10Gbps.
6 GHz concrete fidelity (Con-Fi? Crete-Fi?) is just one of the slightly uncommon connections that may or may not be making their way into office spaces soon. LiFi, standardized as 802.11bb, is seeking to provide an intentionally limited scope to connectivity, whether for security restrictions or radio frequency safety. And Wi-Fi 7, certified earlier this year, aims to multiply data rates by bonding connections over the various bands already in place.
Researchers have discovered more than 280 malicious apps for Android that use optical character recognition to steal cryptocurrency wallet credentials from infected devices.
The apps masquerade as official ones from banks, government services, TV streaming services, and utilities. In fact, they scour infected phones for text messages, contacts, and all stored images and surreptitiously send them to remote servers controlled by the app developers. The apps are available from malicious sites and are distributed in phishing messages sent to targets. There’s no indication that any of the apps were available through Google Play.
A high level of sophistication
The most notable thing about the newly discovered malware campaign is that the threat actors behind it are employing optical character recognition software in an attempt to extract cryptocurrency wallet credentials shown in images stored on infected devices. Many wallets allow users to protect them with a mnemonic recovery phrase, a series of random words. These credentials are easier for most people to remember than the jumble of characters in a private key, and words are also easier for humans to recognize in images.
SangRyol Ryu, a researcher at security firm McAfee, made the discovery after obtaining unauthorized access to the servers that received the data stolen by the malicious apps. That access was the result of weak security configurations made when the servers were deployed. With that, Ryu was able to read pages available to server administrators.
One page, displayed in the image below, was of particular interest. It showed a list of words near the top and, below it, a corresponding image taken from an infected phone. The words in the list matched the words visible in the image.
“Upon examining the page, it became clear that a primary goal of the attackers was to obtain the mnemonic recovery phrases for cryptocurrency wallets,” Ryu wrote. “This suggests a major emphasis on gaining entry to and possibly depleting the crypto assets of victims.”
Optical character recognition is the process of converting images of typed, handwritten, or printed text into machine-encoded text. OCR has existed for years and has grown increasingly common to transform characters captured in images into characters that can be read and manipulated by software.
Ryu continued:
This threat utilizes Python and Javascript on the server-side to process the stolen data. Specifically, images are converted to text using optical character recognition (OCR) techniques, which are then organized and managed through an administrative panel. This process suggests a high level of sophistication in handling and utilizing the stolen information.
Python code for converting text shown in images to machine-readable text.
McAfee
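McAfee has not published the attackers’ code, but the workflow Ryu describes implies a post-OCR step that sifts recognized words for likely recovery phrases. Here is a hypothetical sketch of that step in Python; the wordlist is a tiny stand-in for the 2,048-word BIP-39 list real wallets draw from, and the filtering logic is an assumption, not the actual server code.

```python
import re

# Tiny stand-in for the 2,048-word BIP-39 wordlist used by many wallets.
WORDLIST = {"abandon", "ability", "able", "zoo", "ripple", "wallet",
            "orange", "monkey", "canyon", "fabric", "gravity", "lumber"}

def candidate_phrases(ocr_text, phrase_len=12):
    """Scan OCR output for runs of wordlist words long enough to be a phrase."""
    words = re.findall(r"[a-z]+", ocr_text.lower())
    runs, current = [], []
    for w in words:
        if w in WORDLIST:
            current.append(w)
        else:
            current = []  # a non-wordlist word breaks the run
        if len(current) >= phrase_len:
            runs.append(current[-phrase_len:])
    return runs

text = ("note: abandon ability able zoo ripple wallet orange "
        "monkey canyon fabric gravity lumber ok")
print(candidate_phrases(text))  # one 12-word candidate phrase
```

Because recovery phrases are drawn from a fixed, public wordlist, a run of consecutive wordlist hits in OCR output is a strong signal, which is exactly why screenshots of these phrases are such an attractive target.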
People who are concerned they may have installed one of the malicious apps should check the McAfee post for a list of associated websites and cryptographic hashes.
The malware has received multiple updates over time. Whereas it once used HTTP to communicate with control servers, it now connects through WebSockets, a mechanism that’s harder for security software to parse. WebSockets have the added benefit of being a more versatile channel.
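Part of what makes WebSocket traffic opaque to naive inspection is that RFC 6455 requires client-to-server payloads to be XOR-masked with a per-frame key, so plaintext patterns never appear on the wire. A minimal sketch of that masking (illustrative only; masking is an anti-cache-poisoning measure in the spec, not encryption):

```python
def mask_payload(payload: bytes, key: bytes) -> bytes:
    """XOR each payload byte with the 4-byte masking key, per RFC 6455."""
    return bytes(b ^ key[i % 4] for i, b in enumerate(payload))

# Real client frames use a fresh random 4-byte key per frame;
# a fixed key is used here so the example is deterministic.
key = bytes([0x12, 0x34, 0x56, 0x78])
wire = mask_payload(b"stolen-data", key)
assert wire != b"stolen-data"                 # not plaintext on the wire
assert mask_payload(wire, key) == b"stolen-data"  # XOR is its own inverse
```

A security appliance doing simple byte-pattern matching sees only masked bytes, so it must implement full WebSocket frame parsing before it can inspect the payload at all.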
Developers have also updated the apps to better obfuscate their malicious functionality. Obfuscation methods include encoding the strings inside the code so they’re not easily read by humans, the addition of irrelevant code, and the renaming of functions and variables, all of which confuse analysts and make detection harder. While the malware is mostly restricted to South Korea, it has recently begun to spread within the UK.
“This development is significant as it shows that the threat actors are expanding their focus both demographically and geographically,” Ryu wrote. “The move into the UK points to a deliberate attempt by the attackers to broaden their operations, likely aiming at new user groups with localized versions of the malware.”
The cost of renting cloud services using Nvidia’s leading artificial intelligence chips is lower in China than in the US, a sign that the advanced processors are easily reaching the Chinese market despite Washington’s export restrictions.
Four small-scale Chinese cloud providers charge local tech groups roughly $6 an hour to use a server with eight Nvidia A100 processors in a base configuration, companies and customers told the Financial Times. Small cloud vendors in the US charge about $10 an hour for the same setup.
The low prices, according to people in the AI and cloud industry, are an indication of plentiful supply of Nvidia chips in China and the circumvention of US measures designed to prevent access to cutting-edge technologies.
The A100 and H100, which is also readily available, are among Nvidia’s most powerful AI accelerators and are used to train the large language models that power AI applications. The Silicon Valley company has been banned from shipping the A100 to China since autumn 2022 and has never been allowed to sell the H100 in the country.
Chip resellers and tech startups said the products were relatively easy to procure. Inventories of the A100 and H100 are openly advertised for sale on Chinese social media and ecommerce sites such as Xiaohongshu and Alibaba’s Taobao, as well as in electronics markets, at slight markups to pricing abroad.
China’s larger cloud operators such as Alibaba and ByteDance, known for their reliability and security, charge double to quadruple the price of smaller local vendors for similar Nvidia A100 servers, according to pricing from the two operators and customers.
After discounts, both Chinese tech giants offer packages for prices comparable to Amazon Web Services, which charges $15 to $32 an hour. Alibaba and ByteDance did not respond to requests for comment.
“The big players have to think about compliance, so they are at a disadvantage. They don’t want to use smuggled chips,” said a Chinese startup founder. “Smaller vendors are less concerned.”
He estimated there were more than 100,000 Nvidia H100 processors in the country based on their widespread availability in the market. The Nvidia chips are each roughly the size of a book, making them relatively easy for smugglers to ferry across borders, undermining Washington’s efforts to limit China’s AI progress.
“We bought our H100s from a company that smuggled them in from Japan,” said a startup founder in the automation field who paid about 500,000 yuan ($70,000) for two cards this year. “They etched off the serial numbers.”
Nvidia said it sold its processors “primarily to well-known partners … who work with us to ensure that all sales comply with US export control rules”.
“Our pre-owned products are available through many second-hand channels,” the company added. “Although we cannot track products after they are sold, if we determine that any customer is violating US export controls, we will take appropriate action.”
The head of a small Chinese cloud vendor said low domestic costs helped offset the higher prices that providers paid for smuggled Nvidia processors. “Engineers are cheap, power is cheap, and competition is fierce,” he said.
In Shenzhen’s Huaqiangbei electronics market, salespeople speaking to the FT quoted the equivalent of $23,000–$30,000 for Nvidia’s H100 plug-in cards. Online sellers quote the equivalent of $31,000–$33,000.
Nvidia charges customers $20,000–$23,000 for H100 chips after recently cutting prices, according to Dylan Patel of SemiAnalysis.
One data center vendor in China said servers made by Silicon Valley’s Supermicro and fitted with eight H100 chips hit a peak selling price of 3.2 million yuan after the Biden administration tightened export restrictions in October. He said prices had since fallen to 2.5 million yuan as supply constraints eased.
Several people involved in the trade said merchants in Malaysia, Japan, and Indonesia often shipped Supermicro servers or Nvidia processors to Hong Kong before bringing them across the border to Shenzhen.
The black market trade depends on difficult-to-counter workarounds to Washington’s export regulations, experts said.
For example, while subsidiaries of Chinese companies are banned from buying advanced AI chips outside the country, their executives could establish new companies in countries such as Japan or Malaysia to make the purchases.
“It’s hard to completely enforce export controls beyond the US border,” said an American sanctions expert. “That’s why the regulations create obligations for the shipper to look into end users and [the] commerce [department] adds companies believed to be flouting the rules to the [banned] entity list.”
Additional reporting by Michael Acton in San Francisco.
Federal prosecutors on Thursday unsealed an indictment charging six Russian nationals with conspiracy to hack into the computer networks of the Ukrainian government and its allies and steal or destroy sensitive data on behalf of the Kremlin.
The indictment, filed in US District Court for the District of Maryland, said that five of the men were officers in Unit 29155 of the Russian Main Intelligence Directorate (GRU), a military intelligence agency of the General Staff of the Armed Forces. Along with a sixth defendant, prosecutors alleged, they engaged in a conspiracy to hack, exfiltrate data, leak information, and destroy computer systems associated with the Ukrainian government in advance of the Russian invasion of Ukraine in February 2022.
Targeting critical infrastructure with WhisperGate
The indictment, which supersedes one filed earlier, comes 32 months after Microsoft documented its discovery that a destructive piece of malware, dubbed WhisperGate, had infected dozens of Ukrainian government, nonprofit, and IT organizations. WhisperGate masqueraded as ransomware, but in actuality it permanently destroyed computers and the data stored on them by wiping the master boot record—a part of the hard drive needed to start the operating system during bootup.
In April 2022, three months after publishing the report, Microsoft published a new one that said WhisperGate was part of a much broader campaign that aimed to coordinate destructive cyberattacks against critical infrastructure and other targets in Ukraine with kinetic military operations waged by Russian forces. Thursday’s indictment incorporated much of the factual findings reported by Microsoft.
“The GRU’s WhisperGate campaign, including targeting Ukrainian critical infrastructure and government systems of no military value, is emblematic of Russia’s abhorrent disregard for innocent civilians as it wages its unjust invasion,” Assistant Attorney General Matthew G. Olsen of the National Security Division said in a statement. “Today’s indictment underscores that the Justice Department will use every available tool to disrupt this kind of malicious cyber activity and hold perpetrators accountable for indiscriminate and destructive targeting of the United States and our allies.”
Later in the campaign, the Russian operatives targeted computer systems in countries around the world that were providing support to Ukraine, including the United States and 25 other NATO countries.
The six defendants are:
Yuriy Denisov, a colonel in the Russian military and commanding officer of Cyber Operations for Unit 29155
Vladislav Borokov, a lieutenant in Unit 29155 who works in cyber operations
Denis Denisenko, a lieutenant in Unit 29155 who works in cyber operations
Dmitriy Goloshubov, a lieutenant in Unit 29155 who works in cyber operations
Nikolay Korchagin, a lieutenant in Unit 29155 who works in cyber operations
Amin Stigal, an alleged civilian co-conspirator, who was indicted in June for his role in WhisperGate activities
Federal prosecutors said the conspiracy started no later than December 2020 and remained ongoing. The defendants and additional unindicted co-conspirators, the indictment alleged, scanned computers of potential targets around the world, including in the US, in search of vulnerabilities and exploited them to gain unauthorized access to many of the systems. The defendants allegedly would then infect the networks with wiper malware and, in some cases, exfiltrate the stored data.
Thursday’s charges came a day after Justice Department officials announced the indictments of two Russian media executives accused of funneling millions of dollars from the Kremlin to a company responsible for creating and publishing propaganda videos in the US that racked up millions of views on social media. Federal prosecutors said the objective was to covertly influence public opinion and deepen social divisions, including over Russia’s war in Ukraine.
Also on Wednesday, federal officials took other legal actions to counter what they said were other Russian psychological operations. The actions included seizing 32 Internet domains they said were being used to spread anti-Ukraine propaganda, sanctioning Russian individuals and entities accused of spreading Russian propaganda, and indicting two individuals accused of conspiring to aid a Russian broadcaster in violating US sanctions.
Unit 29155 is a covert part of the GRU that carries out coup attempts, sabotage, and assassinations outside Russia. According to WIRED, Unit 29155 recently acquired its own active team of cyberwarfare operators, a move that signals Russia is fusing its physical and digital tactics more tightly than in the past. WIRED said the unit is distinct from other GRU units that employ more widely recognized Russian state hacking groups such as Fancy Bear (APT28) and Sandworm.
The Justice Department announced a $10 million reward for information on any of the suspects’ locations or their cyber activity. The wanted poster and Thursday’s indictment displayed photos of all six defendants. The move is intended to limit the men’s travel options and discourage other Russians from following their example.
AT&T filed a lawsuit against Broadcom on August 29 accusing it of seeking to “retroactively change existing VMware contracts to match its new corporate strategy.” The lawsuit, spotted by Channel Futures, concerns claims that Broadcom is not letting AT&T renew support services for previously purchased perpetual VMware software licenses unless AT&T meets certain conditions.
AT&T uses VMware software to run 75,000 virtual machines (VMs) across about 8,600 servers, per the complaint filed at the Supreme Court of the State of New York [PDF]. It reportedly uses the VMs to support customer service operations and for operations management efficiency.
AT&T feels it should be granted a one-year renewal for VMware support services, which it claimed would be the second of three one-year renewals to which its contract entitles it. According to AT&T, support services are critical in case of software errors and for upkeep, like security patches, software upgrades, and daily maintenance. Without support, “an error or software glitch” could result in disruptive failure, AT&T said.
AT&T claims Broadcom refuses to renew support and plans to terminate AT&T’s VMware support services on September 9. It asked the court to stop Broadcom from cutting VMware support services and for “further relief” deemed necessary. The New York Supreme Court has told Broadcom to respond within 20 days of the complaint’s filing.
In a statement to Ars Technica, an AT&T spokesperson said: “We have filed this complaint to preserve continuity in the services we provide and protect the interests of our customers.”
AT&T accuses Broadcom of trying to make it spend millions on unwanted software
AT&T’s lawsuit claims that Broadcom has refused to renew support services for AT&T’s perpetual licenses unless AT&T agrees to what it deems are unfair conditions that would cost it “tens of millions more than the price of the support services alone.”
The lawsuit reads:
Specifically, Broadcom is threatening to withhold essential support services for previously purchased VMware perpetually licensed software unless AT&T capitulates to Broadcom’s demands that AT&T purchase hundreds of millions of dollars’ worth of bundled subscription software and services, which AT&T does not want.
After buying VMware, Broadcom consolidated VMware’s offering from about 8,000 SKUs to four bundles, per Channel Futures. AT&T claims these subscription offerings “would impose significant additional contractual and technological obligations.” AT&T claims it might have to invest millions to “develop its network to accommodate the new software.”
VMware and AT&T’s agreement precludes “Broadcom’s attempt to bully AT&T into paying a king’s ransom for subscriptions AT&T does not want or need, or risk widespread network outages,” AT&T reckons.
In its lawsuit, AT&T claims “bullying tactics” were expected from Broadcom post-acquisition. Quoting Ars Technica reporting, the lawsuit claims that “Broadcom wasted no time strong-arming customers into highly unfavorable subscription models marked by ‘steeply increased prices[,]’ ‘refusing to maintain security conditions for perpetual license[d] [software,]’ and threatening to cut off support for existing products already licensed by customers—exactly as it has done here.”
“Without the Support Services, the more than 75,000 virtual machines operated by AT&T—impacting millions of its customers worldwide—would all be just an error or software glitch away from failing,” AT&T’s lawsuit says.
Broadcom’s response
According to the lawsuit, Broadcom contends that AT&T is not eligible to renew support services for another year because, in Broadcom’s view, AT&T was required to exercise all three one-year support renewals by the end of 2023.
In a statement to Ars Technica, a Broadcom company spokesperson said:
Broadcom strongly disagrees with the allegations and is confident we will prevail in the legal process. VMware has been moving to a subscription model, the standard for the software industry, for several years – beginning before the acquisition by Broadcom. Our focus will continue to be providing our customers choice and flexibility while helping them address their most complex technology challenges.
Communications for Office of the President, first responders could be affected
AT&T’s lawsuit emphasizes that should it lose support for VMware offerings, communications for the Office of the President and first responders would be at risk. AT&T claims that about 22,000 of its 75,000 VMs relying on VMware “are used in some way to support AT&T’s provision of services to millions of police officers, firefighters, paramedics, emergency workers and incident response team members nationwide… for use in connection with matters of public safety and/or national security.”
When reached for comment, AT&T’s spokesperson declined to comment on AT&T’s backup plan for minimizing disruption should it lose VMware support in a few days.
Ultimately, there are “multiple documents involved, and resolution of the dispute will require interpretation as to which clauses prevail,” points out Benjamin B. Kabak, a partner practicing technology and outsourcing law at Loeb & Loeb LLP in New York.
Over the weekend, the nonprofit National Novel Writing Month organization (NaNoWriMo) published an FAQ outlining its position on AI, calling categorical rejection of AI writing technology “classist” and “ableist.” The statement caused a backlash online, prompted four members of the organization’s board to step down, and led a sponsor to withdraw its support.
“We believe that to categorically condemn AI would be to ignore classist and ableist issues surrounding the use of the technology,” wrote NaNoWriMo, “and that questions around the use of AI tie to questions around privilege.”
NaNoWriMo, known for its annual challenge where participants write a 50,000-word manuscript in November, argued in its post that condemning AI would ignore issues of class and ability, suggesting the technology could benefit those who might otherwise need to hire human writing assistants or have differing cognitive abilities.
Writers react
After word of the FAQ spread, many writers on social media platforms voiced their opposition to NaNoWriMo’s position. Generative AI models are commonly trained on vast amounts of existing text, including copyrighted works, without attribution or compensation to the original authors. Critics say this raises major ethical questions about using such tools in creative writing competitions and challenges.
“Generative AI empowers not the artist, not the writer, but the tech industry. It steals content to remake content, graverobbing existing material to staple together its Frankensteinian idea of art and story,” wrote Chuck Wendig, the author of Star Wars: Aftermath, in a post about NaNoWriMo on his personal blog.
Daniel José Older, a lead story architect for Star Wars: The High Republic and one of the board members who resigned, wrote on X, “Hello @NaNoWriMo, this is me DJO officially stepping down from your Writers Board and urging every writer I know to do the same. Never use my name in your promo again in fact never say my name at all and never email me again. Thanks!”
NaNoWriMo’s use of words like “classist” and “ableist” to defend the potential use of generative AI particularly touched a nerve with opponents of the technology, some of whom say they are disabled themselves.
“A huge middle finger to @NaNoWriMo for this laughable bullshit. Signed, a poor, disabled and chronically ill writer and artist. Miss me by a wide margin with that ableist and privileged bullshit,” wrote one X user. “Other people’s work is NOT accessibility.”
This isn’t the first time the organization has dealt with controversy. Last year, NaNoWriMo announced that it would accept AI-assisted submissions but noted that using AI for an entire novel “would defeat the purpose of the challenge.” Many critics also point out that a NaNoWriMo moderator faced accusations related to child grooming in 2023, which lessened their trust in the organization.
NaNoWriMo doubles down
In response to the backlash, NaNoWriMo updated its FAQ post to address concerns about AI’s impact on the writing industry and to mention “bad actors in the AI space who are doing harm to writers and who are acting unethically.”
We want to make clear that, though we find the categorical condemnation for AI to be problematic for the reasons stated below, we are troubled by situational abuse of AI, and that certain situational abuses clearly conflict with our values. We also want to make clear that AI is a large umbrella technology and that the size and complexity of that category (which includes both non-generative and generative AI, among other uses) contributes to our belief that it is simply too big to categorically endorse or not endorse.
Over the past few years, we’ve received emails from disabled people who frequently use generative AI tools, and we have interviewed a disabled artist, Claire Silver, who uses image synthesis prominently in her work. Some writers with disabilities use tools like ChatGPT to assist them with composition when they have cognitive issues and need assistance expressing themselves.
In June, on Reddit, one user wrote, “As someone with a disability that makes manually typing/writing and wording posts challenging, ChatGPT has been invaluable. It assists me in articulating my thoughts clearly and efficiently, allowing me to participate more actively in various online communities.”
A person with Chiari malformation wrote on Reddit in November 2023 that they use ChatGPT to help them develop software using their voice. “These tools have fundamentally empowered me. The course of my life, my options, opportunities—they’re all better because of this tool,” they wrote.
To opponents of generative AI, the potential benefits that might come to disabled persons do not outweigh what they see as mass plagiarism from tech companies. Also, some artists do not want the time and effort they put into cultivating artistic skills to be devalued for anyone’s benefit.
“All these bullshit appeals from people appropriating social justice language saying, ‘but AI lets me make art when I’m not privileged enough to have the time to develop those skills’ highlights something that needs to be said: you are not entitled to being talented,” posted a writer named Carlos Alonzo Morales on Sunday.
Despite the strong takes, NaNoWriMo has so far stuck to its position of accepting generative AI as a set of potential writing tools in a way that is consistent with its “overall position on nondiscrimination with respect to approaches to creativity, writer’s resources, and personal choice.”
“We absolutely do not condemn AI,” NaNoWriMo wrote in the FAQ post, “and we recognize and respect writers who believe that AI tools are right for them. We recognize that some members of our community stand staunchly against AI for themselves, and that’s perfectly fine. As individuals, we have the freedom to make our own decisions.”
Networking hardware-maker Zyxel is warning of nearly a dozen vulnerabilities in a wide array of its products. If left unpatched, some of them could enable the complete takeover of the devices, which can be targeted as an initial point of entry into large networks.
The most serious vulnerability, tracked as CVE-2024-7261, can be exploited to “allow an unauthenticated attacker to execute OS commands by sending a crafted cookie to a vulnerable device,” Zyxel warned. The flaw, with a severity rating of 9.8 out of 10, stems from the “improper neutralization of special elements in the parameter ‘host’ in the CGI program” of vulnerable access points and security routers. Nearly 30 Zyxel devices are affected. As is the case with the remaining vulnerabilities in this post, Zyxel is urging customers to patch them as soon as possible.
But wait… there’s more
The hardware manufacturer warned of seven additional vulnerabilities affecting firewall series including the ATP, USG-FLEX, and USG FLEX 50(W)/USG20(W)-VPN, with severity ratings ranging from 4.9 to 8.1. They are:
CVE-2024-6343: A buffer overflow vulnerability in the CGI program that could allow an authenticated attacker with administrator privileges to wage denial-of-service attacks by sending crafted HTTP requests.
CVE-2024-7203: A post-authentication command injection vulnerability that could allow an authenticated attacker with administrator privileges to run OS commands by executing a crafted CLI command.
CVE-2024-42057: A command injection vulnerability in the IPSec VPN feature that could allow an unauthenticated attacker to run OS commands by sending a crafted username. The attack would be successful only if the device was configured in User-Based-PSK authentication mode and a valid user with a long username exceeding 28 characters exists.
CVE-2024-42058: A null pointer dereference vulnerability in some firewall versions that could allow an unauthenticated attacker to wage DoS attacks by sending crafted packets.
CVE-2024-42059: A post-authentication command injection vulnerability that could allow an authenticated attacker with administrator privileges to run OS commands on an affected device by uploading a crafted compressed language file via FTP.
CVE-2024-42060: A post-authentication command injection vulnerability that could allow an authenticated attacker with administrator privileges to execute OS commands by uploading a crafted internal user agreement file to the vulnerable device.
CVE-2024-42061: A reflected cross-site scripting vulnerability in the CGI program “dynamic_script.cgi” that could allow an attacker to trick a user into visiting a crafted URL with the XSS payload. The attacker could obtain browser-based information if the malicious script is executed on the victim’s browser.
The remaining vulnerability is CVE-2024-5412 with a severity rating of 7.5. It resides in 50 Zyxel product models, including a range of customer premises equipment, fiber optical network terminals, and security routers. A buffer overflow vulnerability in the “libclinkc” library of affected devices could allow an unauthenticated attacker to wage denial-of-service attacks by sending a crafted HTTP request.
In recent years, vulnerabilities in Zyxel devices have regularly come under active attack. Many of the patches are available for download at links listed in the advisories. In a small number of cases, the patches are available through the cloud. Patches for some products are available only by privately contacting the company’s support team.
An ABC handout promotional image for “AI and the Future of Us: An Oprah Winfrey Special.”
On Thursday, ABC announced an upcoming TV special titled, “AI and the Future of Us: An Oprah Winfrey Special.” The one-hour show, set to air on September 12, aims to explore AI’s impact on daily life and will feature interviews with figures in the tech industry, like OpenAI CEO Sam Altman and Bill Gates. Soon after the announcement, some AI critics began questioning the guest list and the framing of the show in general.
“Sure is nice of Oprah to host this extended sales pitch for the generative AI industry at a moment when its fortunes are flagging and the AI bubble is threatening to burst,” tweeted author Brian Merchant, who frequently criticizes generative AI technology in op-eds, social media, and through his “Blood in the Machine” AI newsletter.
“The way the experts who are not experts are presented as such 💀 what a train wreck,” replied artist Karla Ortiz, who is a plaintiff in a lawsuit against several AI companies. “There’s still PLENTY of time to get actual experts and have a better discussion on this because yikes.”
The trailer for Oprah’s upcoming TV special on AI.
On Friday, Ortiz created a lengthy viral thread on X that detailed her potential issues with the program, writing, “This event will be the first time many people will get info on Generative AI. However it is shaping up to be a misinformed marketing event starring vested interests (some who are under a litany of lawsuits) who ignore the harms GenAi inflicts on communities NOW.”
Critics of generative AI like Ortiz question the utility of the technology, its perceived environmental impact, and what they see as blatant copyright infringement. In training AI language models, tech companies like Meta, Anthropic, and OpenAI commonly use copyrighted material gathered without license or owner permission. OpenAI claims that the practice is “fair use.”
Oprah’s guests
According to ABC, the upcoming special will feature “some of the most important and powerful people in AI,” which appears to roughly translate to “famous and publicly visible people related to tech.” Microsoft co-founder Bill Gates, who stepped down as Microsoft CEO 24 years ago, will appear on the show to explore the “AI revolution coming in science, health, and education,” ABC says, and warn of “the once-in-a-century type of impact AI may have on the job market.”
As a guest representing ChatGPT-maker OpenAI, Sam Altman will explain “how AI works in layman’s terms” and discuss “the immense personal responsibility that must be borne by the executives of AI companies.” Karla Ortiz specifically criticized Altman in her thread by saying, “There are far more qualified individuals to speak on what GenAi models are than CEOs. Especially one CEO who recently said AI models will ‘solve all physics.’ That’s an absurd statement and not worthy of your audience.”
In a nod to present-day content creation, YouTube creator Marques Brownlee will appear on the show and reportedly walk Winfrey through “mind-blowing demonstrations of AI’s capabilities.”
Brownlee’s involvement received special attention from some critics online. “Marques Brownlee should be absolutely ashamed of himself,” tweeted PR consultant and frequent AI critic Ed Zitron, who heaps scorn on generative AI in his own newsletter. “What a disgraceful thing to be associated with.”
Other guests include Tristan Harris and Aza Raskin from the Center for Humane Technology, who aim to highlight “emerging risks posed by powerful and superintelligent AI,” an existential risk topic that has its own critics. And FBI Director Christopher Wray will reveal “the terrifying ways criminals and foreign adversaries are using AI,” while author Marilynne Robinson will reflect on “AI’s threat to human values.”
Going only by the publicized guest list, it appears that Oprah does not plan to give voice to prominent non-doomer critics of AI. “This is really disappointing @Oprah and frankly a bit irresponsible to have a one-sided conversation on AI without informed counterarguments from those impacted,” tweeted TV producer Theo Priestley.
Others on the social media network shared similar criticism about a perceived lack of balance in the guest list, including Dr. Margaret Mitchell of Hugging Face. “It could be beneficial to have an AI Oprah follow-up discussion that responds to what happens in [the show] and unpacks generative AI in a more grounded way,” she said.
Oprah’s AI special will air on September 12 on ABC (and a day later on Hulu) in the US, and it will likely elicit further responses from the critics mentioned above. But perhaps that’s exactly how Oprah wants it: “It may fascinate you or scare you,” Winfrey said in a promotional video for the special. “Or, if you’re like me, it may do both. So let’s take a breath and find out more about it.”
Rust never sleeps. But Rust, the programming language, can be held at bay if enough kernel programmers aren’t interested in seeing it implemented.
Getty Images
The Linux kernel is not a place to work if you’re not ready for some, shall we say, spirited argument. Still, one key developer in the project to expand Rust’s place inside the largely C-based kernel feels the “nontechnical nonsense” is too much, so he’s retiring.
Wedson Almeida Filho, a leader in the Rust for Linux project, wrote to the Linux kernel mailing list last week to remove himself as the project’s maintainer. “After almost 4 years, I find myself lacking the energy and enthusiasm I once had to respond to some of the nontechnical nonsense, so it’s best to leave it up to those who still have it in them,” Filho wrote. While thanking his teammates, he noted that he believed the future of kernels “is with memory-safe languages,” such as Rust. “I am no visionary but if Linux doesn’t internalize this, I’m afraid some other kernel will do to it what it did to Unix,” Filho wrote.
Filho also left a “sample for context,” a link to a moment during a Linux conference talk in which an off-camera voice, identified by Filho in a Register interview as kernel maintainer Ted Ts’o, emphatically interjects: “Here’s the thing: you’re not going to force all of us to learn Rust.” In the context of Filho’s request that Linux’s file system implement Rust bindings, Ts’o says that while he knows he must fix all the C code for any change he makes, he cannot or will not fix the Rust bindings that may be affected.
“They just want to keep their C code”
Asahi Lina, a developer on the Asahi Linux project, posted on Mastodon late last week: “I regretfully completely understand Wedson’s frustrations.” Noting that “a subset of C kernel developers just seem determined to make the lives of Rust maintainers as difficult as possible,” Lina detailed the memory safety issues she ran into writing Direct Rendering Manager (DRM) scheduler abstractions. Lina tried to push small fixes that would make the C code “more robust and the lifetime requirements sensible,” but was blocked by the maintainer. Bugs in that DRM scheduler’s C code are the only causes of kernel panics in Lina’s Apple GPU driver, she wrote—”Because I wrote it in Rust.”
“But I get the feeling that some Linux kernel maintainers just don’t care about future code quality, or about stability or security any more,” Lina wrote. “They just want to keep their C code and wish us Rust folks would go away. And that’s really sad… and isn’t helping make Linux better.”
Drew DeVault, founder of SourceHut, blogged about Rust’s attempts to find a place inside the kernel. In theory, the kernel should welcome enthusiastic input from motivated newcomers. “In practice, the Linux community is the wild wild west, and sweeping changes are infamously difficult to achieve consensus on, and this is by far the broadest sweeping change ever proposed for the project,” DeVault writes. “Every subsystem is a private fiefdom, subject to the whims of each one of Linux’s 1,700+ maintainers, almost all of whom have a dog in this race. It’s herding cats: introducing Rust effectively is one part coding work and ninety-nine parts political work – and it’s a lot of coding work.”
Rather than test their patience with the kernel’s politics, DeVault suggests Rust developers build a Linux-compatible kernel from scratch. “Freeing yourselves of the [Linux Kernel Mailing List] political battles would probably be a big win for the ambitions of bringing Rust into kernel space,” DeVault writes.
Torvalds understands why Rust uptake is slow
You might be wondering what lead maintainer Linus Torvalds thinks about all this. He took a “wait and see” approach in 2021, hoping Rust would first make itself known in relatively isolated device drivers. At an appearance late last month, Torvalds… essentially agreed with the Rust-minded developer complaints, albeit from a much greater remove.
“I was expecting [Rust] updates to be faster, but part of the problem is that old-time kernel developers are used to C and don’t know Rust,” Torvalds said. “They’re not exactly excited about having to learn a new language that is, in some respects, very different. So there’s been some pushback on Rust.” Torvalds added, however, that “another reason has been the Rust infrastructure itself has not been super stable.”
The Linux kernel is a high-stakes project in which hundreds or thousands of developers have a stake; conflict is perhaps inevitable. Time will tell how long C will remain the primary way of coding for, and thinking about, such a large, always-moving codebase.
Ars has reached out to both Filho and Ts’o for comment and will update this post with any responses.
The YubiKey 5, the most widely used hardware token for two-factor authentication based on the FIDO standard, contains a cryptographic flaw that makes the finger-size device vulnerable to cloning when an attacker gains brief physical access to it, researchers said Tuesday.
The cryptographic flaw, known as a side channel, resides in a small microcontroller used in a large number of other authentication devices, including smartcards used in banking, electronic passports, and the accessing of secure areas. While the researchers have confirmed all YubiKey 5 series models can be cloned, they haven’t tested other devices that use the microcontroller, Infineon’s SLE78, or the successor microcontrollers known as the Infineon Optiga Trust M and the Infineon Optiga TPM. The researchers suspect that any device using any of these three microcontrollers and the Infineon cryptographic library contains the same vulnerability.
Patching not possible
YubiKey-maker Yubico issued an advisory in coordination with a detailed disclosure report from NinjaLab, the security firm that reverse-engineered the YubiKey 5 series and devised the cloning attack. All YubiKeys running firmware prior to version 5.7—which was released in May and replaces the Infineon cryptolibrary with a custom one—are vulnerable. Updating key firmware on the YubiKey isn’t possible. That leaves all affected YubiKeys permanently vulnerable.
“An attacker could exploit this issue as part of a sophisticated and targeted attack to recover affected private keys,” the advisory confirmed. “The attacker would need physical possession of the YubiKey, Security Key, or YubiHSM, knowledge of the accounts they want to target and specialized equipment to perform the necessary attack. Depending on the use case, the attacker may also require additional knowledge including username, PIN, account password, or authentication key.”
Side channels are clues left in physical manifestations, such as electromagnetic emanations, data caches, or the time required to complete a task, that leak cryptographic secrets. In this case, the side channel is the amount of time taken during a mathematical calculation known as a modular inversion. The Infineon cryptolibrary failed to implement a common side-channel defense known as constant time as it performed modular inversion operations involving the Elliptic Curve Digital Signature Algorithm. Constant time ensures that time-sensitive cryptographic operations execute in a uniform amount of time rather than one that varies depending on the specific keys.
More precisely, the side channel is located in the Infineon implementation of the Extended Euclidean Algorithm, a method for, among other things, computing the modular inverse. By using an oscilloscope to measure the electromagnetic radiation while the token is authenticating itself, the researchers can detect tiny execution time differences that reveal a token’s ephemeral ECDSA key, also known as a nonce. Further analysis allows the researchers to extract the secret ECDSA key that underpins the entire security of the token.
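The leak is easy to see in a toy sketch: the textbook (non-binary) extended Euclidean algorithm runs a data-dependent number of loop iterations, so a non-constant-time modular inversion of the ECDSA nonce leaks information through execution time. The following Python sketch is illustrative only, not the Infineon code or NinjaLab's attack; the function name and structure are our own.

```python
# Illustrative sketch: the classic extended Euclidean algorithm's loop
# length depends on its input, so the time to compute a modular inverse
# leaks information about that input. In ECDSA, the input is the secret
# per-signature nonce k, which is why constant-time inversion matters.

def egcd_inverse(k: int, n: int) -> tuple[int, int]:
    """Return (k^-1 mod n, iteration count) via the extended Euclidean algorithm."""
    old_r, r = k % n, n
    old_s, s = 1, 0
    steps = 0
    while r != 0:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_s, s = s, old_s - q * s
        steps += 1  # data-dependent loop length: the timing side channel
    return old_s % n, steps

# Group order of the NIST P-256 curve used by ECDSA
n = 0xFFFFFFFF00000000FFFFFFFFFFFFFFFFBCE6FAADA7179E84F3B9CAC2FC632551
for k in (3, 2**64 + 1, n - 5):
    inv, steps = egcd_inverse(k, n)
    assert (k * inv) % n == 1  # correctness check
    print(f"k with {k.bit_length():>3} bits -> {steps} EEA iterations")
```

Because the iteration count (and hence the runtime and electromagnetic trace) varies with k, an attacker who can time many inversions gains partial knowledge of the nonces, which is enough to recover the long-term ECDSA key with lattice or statistical techniques.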
In Tuesday’s report, NinjaLab co-founder Thomas Roche wrote:
In the present work, NinjaLab unveils a new side-channel vulnerability in the ECDSA implementation of Infineon on any security microcontroller family of the manufacturer. This vulnerability lies in the ECDSA ephemeral key (or nonce) modular inversion, and, more precisely, in the Infineon implementation of the Extended Euclidean Algorithm (EEA for short). To our knowledge, this is the first time an implementation of the EEA is shown to be vulnerable to side-channel analysis (contrarily to the EEA binary version). The exploitation of this vulnerability is demonstrated through realistic experiments and we show that an adversary only needs to have access to the device for a few minutes. The offline phase took us about 24 hours; with more engineering work in the attack development, it would take less than one hour.
After a long phase of understanding Infineon implementation through side-channel analysis on a Feitian open JavaCard smartcard, the attack is tested on a YubiKey 5Ci, a FIDO hardware token from Yubico. All YubiKey 5 Series (before the firmware update 5.7 of May 6th, 2024) are affected by the attack. In fact all products relying on the ECDSA of Infineon cryptographic library running on an Infineon security microcontroller are affected by the attack. We estimate that the vulnerability exists for more than 14 years in Infineon top secure chips. These chips and the vulnerable part of the cryptographic library went through about 80 CC certification evaluations of level AVA VAN 4 (for TPMs) or AVA VAN 5 (for the others) from 2010 to 2024 (and a bit less than 30 certificate maintenances).