
Nvidia ousts Intel from Dow Jones Index after 25-year run

Changing winds in the tech industry

The Dow Jones Industrial Average serves as a benchmark of the US stock market by tracking 30 large, publicly owned companies that represent major sectors of the US economy. Membership in the index has long been considered a mark of prestige among American companies.

However, S&P regularly makes changes to the index to better reflect current realities and trends in the marketplace, so deletion from it likely marks a new symbolic low point for Intel.

While the rise of AI has caused a surge in several tech stocks, it has delivered tough times for chipmaker Intel, which is perhaps best known for manufacturing CPUs that power Windows-based PCs.

Intel recently withdrew its forecast to sell over $500 million worth of AI-focused Gaudi chips in 2024, a target CEO Pat Gelsinger had promoted after initially pushing his team to project $1 billion in sales. The setback follows Intel’s pattern of missed opportunities in AI, with Reuters reporting that Bank of America analyst Vivek Arya questioned the company’s AI strategy during a recent earnings call.

In addition, Intel has faced challenges as device manufacturers increasingly adopt Arm-based alternatives, which power billions of smartphones, and has absorbed symbolic blows like Apple’s transition of its Macs from Intel processors to custom-designed chips based on the Arm architecture.

Whether the historic tech company will rebound is yet to be seen, but investors will undoubtedly keep a close watch on Intel as it attempts to reorient itself in the face of changing trends in the tech industry.


Thousands of hacked TP-Link routers used in years-long account takeover attacks

Hackers working on behalf of the Chinese government are using a botnet of thousands of routers, cameras, and other Internet-connected devices to perform highly evasive password spray attacks against users of Microsoft’s Azure cloud service, the company warned Thursday.

The malicious network, made up almost entirely of TP-Link routers, was first documented in October 2023 by a researcher who named it Botnet-7777. The botnet, a geographically dispersed collection of more than 16,000 compromised devices at its peak, got its name because it exposes its malware on port 7777.

Account compromise at scale

In July and again in August of this year, security researchers from Sekoia and Team Cymru reported that the botnet was still operational. All three reports said that Botnet-7777 was being used to skillfully perform password spraying, a form of attack that sends large numbers of login attempts from many different IP addresses. Because each individual device makes only a small number of login attempts, the carefully coordinated account-takeover campaign is hard for the targeted service to detect.

On Thursday, Microsoft reported that CovertNetwork-1658—the name Microsoft uses to track the botnet—is being used by multiple Chinese threat actors in an attempt to compromise targeted Azure accounts. The company said the attacks are “highly evasive” because the botnet—now estimated at about 8,000 strong on average—takes pains to conceal the malicious activity.

“Any threat actor using the CovertNetwork-1658 infrastructure could conduct password spraying campaigns at a larger scale and greatly increase the likelihood of successful credential compromise and initial access to multiple organizations in a short amount of time,” Microsoft officials wrote. “This scale, combined with quick operational turnover of compromised credentials between CovertNetwork-1658 and Chinese threat actors, allows for the potential of account compromises across multiple sectors and geographic regions.”

Some of the characteristics that make detection difficult are:

  • The use of compromised SOHO IP addresses
  • The use of a rotating set of IP addresses at any given time. The threat actors had thousands of available IP addresses at their disposal. The average uptime for a CovertNetwork-1658 node is approximately 90 days.
  • The low-volume password spray process; for example, monitoring for multiple failed sign-in attempts from one IP address or to one account will not detect this activity.
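The detection gap these bullet points describe can be sketched in a few lines (hypothetical traffic and thresholds, not Microsoft's actual detection logic): a per-IP failure counter never fires because each botnet node attempts a login only once, while an aggregate count of distinct source IPs per targeted account exposes the spray.

```python
from collections import Counter

# Hypothetical campaign: 1,000 botnet IPs each try one password
# against one target account, then rotate away.
attempts = [(f"10.0.{i // 256}.{i % 256}", "alice@example.com") for i in range(1000)]

PER_IP_THRESHOLD = 5  # classic brute-force alarm: many failures from one IP

failures_per_ip = Counter(ip for ip, _ in attempts)
per_ip_alerts = [ip for ip, n in failures_per_ip.items() if n > PER_IP_THRESHOLD]

# Aggregate view: count distinct source IPs per targeted account instead.
ips_per_account = Counter(account for _, account in attempts)
spray_alerts = [acct for acct, n in ips_per_account.items() if n > PER_IP_THRESHOLD]

print(per_ip_alerts)   # empty: no single IP ever exceeds the threshold
print(spray_alerts)    # the spray is only visible in aggregate
```

In this toy model the per-IP alert list stays empty while the per-account view flags the targeted mailbox, which is the asymmetry the botnet exploits.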


Colorado scrambles to change voting-system passwords after accidental leak



“The goal is to complete the password updates by this evening,” government says.

Colorado Secretary of State Jena Griswold holds press conference with Matt Crane, Executive Director of the Colorado County Clerks Association, at her office in Denver on Thursday, October 24, 2024. Credit: Getty Images | Hyoung Chang

The Colorado Department of State said it accidentally posted a spreadsheet containing “partial passwords” for voting systems. The department said there is no “immediate security threat” because two passwords are needed for each component, but it is trying to complete password changes by the end of today. There were reportedly hundreds of BIOS passwords accessible on the website for over two months before being removed last week.

A government statement issued Tuesday said the agency “is aware that a spreadsheet located on the Department’s website improperly included a hidden tab including partial passwords to certain components of Colorado voting systems. This does not pose an immediate security threat to Colorado’s elections, nor will it impact how ballots are counted.”

Secretary of State Jena Griswold told Colorado Public Radio that “we do not think there is an immediate security threat to Colorado elections, in part because partial passwords don’t get you anywhere. Two unique passwords are needed for every election equipment component. Physical access is needed. And under Colorado law, voting equipment is stored in secure rooms that require secure ID badges. There’s 24/7 video cameras. There’s restricted access to the secure ballot areas, strict chain of custody, and it’s a felony to access voting equipment without authorization.”

Griswold said her office learned about the spreadsheet upload at the end of last week and “immediately contacted federal partners and then we began our investigation.”

The department’s statement said the two passwords for each component “are kept in separate places and held by different parties” and that the “passwords can only be used with physical in-person access to a voting system.” Additionally, “clerks are required to maintain restricted access to secure ballot areas, and may only share access information with background-checked individuals. No person may be present in a secure area unless they are authorized to do so or are supervised by an authorized and background-checked employee.”

The department also cited “strict chain of custody requirements that track when a voting systems component has been accessed and by whom,” and it said that each “Colorado voter votes on a paper ballot, which is then audited during the Risk Limiting Audit to verify that ballots were counted according to voter intent.”

Goal is to change all passwords by this evening

Griswold described the upload as an accident and said the mistake was made by a civil servant who no longer works for the department. “Out of an abundance of caution, we have people in the field working to reset passwords and review access logs for affected counties,” she said.

Gov. Jared Polis and Griswold, who are both Democrats, issued a joint update about the password changes today. The Polis administration is providing support “to complete changes to all the impacted passwords and review logs to ensure that no tampering occurred.”

“The Secretary of State will deputize certain state employees, who have cybersecurity and technology expertise and have undergone appropriate background checks and training,” the statement said. “In addition to the Department of State Employees and in coordination with county clerks, these employees will only enter badged areas in pairs to update the passwords for election equipment in counties and will be directly observed by local elections officials from the county clerk’s office. The goal is to complete the password updates by this evening and verify the security of the voting components, which are secured behind locked doors by county clerks.”

Griswold said she is “thankful to the Governor for his support to quickly resolve this unfortunate mistake.” Griswold told Colorado Public Radio that her department has no reason to believe the passwords were posted with malicious intent, but said that “a personnel investigation will be conducted by an outside party to look into the particulars of how this occurred.”

GOP slams Griswold

The Colorado Republican Party criticized Griswold this week after receiving an affidavit from someone who said they accessed the BIOS passwords on the publicly available spreadsheet three times between August 8 and October 23. The file “contained over 600 BIOS passwords for voting system components in 63 of the state’s 64 counties” before being removed on October 24, the state GOP said.

The affidavit described how to reveal the passwords in the VotingSystemInventory.xlsx file. It said that right-clicking a worksheet tab and selecting “unhide” would reveal “a dialog box where the application user can select from one, several, or all four listed hidden worksheets contained in the file.” Three of these worksheets “appear to list Basic Input Output System (BIOS) passwords” for hundreds of individual voting system components, the affidavit said.
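Under the hood, an .xlsx file is a ZIP archive, and the hidden-sheet flag the affidavit describes is stored as a `state` attribute on each `<sheet>` entry in `xl/workbook.xml`. The sketch below builds a synthetic, minimal workbook.xml (with hypothetical sheet names, not the actual Colorado file) to show how hidden sheets can be enumerated without ever opening Excel:

```python
import io
import re
import zipfile

# Synthetic stand-in for an .xlsx archive; real files contain more parts.
workbook_xml = (
    '<workbook><sheets>'
    '<sheet name="Inventory" sheetId="1"/>'
    '<sheet name="Passwords1" sheetId="2" state="hidden"/>'
    '<sheet name="Passwords2" sheetId="3" state="hidden"/>'
    '</sheets></workbook>'
)

buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("xl/workbook.xml", workbook_xml)

# Excel's "unhide" just flips this attribute; the data sits in the file either way.
with zipfile.ZipFile(buf) as zf:
    xml = zf.read("xl/workbook.xml").decode()

hidden = re.findall(r'<sheet name="([^"]+)"[^>]*state="hidden"', xml)
print(hidden)  # names of the hidden worksheets
```

The point is that "hidden" is a display hint, not a security control: the worksheet contents remain in the file for anyone who downloads it.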

The state GOP accused Griswold of downplaying the security risk, saying that only one password is needed for BIOS access. “BIOS passwords are highly confidential, allowing broad access for knowledgeable users to fundamentally manipulate systems and data and to remove any trace of doing so,” the GOP said. The “passwords were not encrypted or otherwise protected,” the GOP said.

State GOP Chairman Dave Williams said the incident “represents significant incompetence and negligence, and it raises huge questions about password management and other basic security protocols at the highest levels within Griswold’s office.” He also claimed the breach could put “the entire Colorado election results for the vast majority of races, including the tabulation for the Presidential race in Colorado, in jeopardy unless all of the machines can meet the standards of a ‘Trusted Build’ before next Tuesday.”

US Rep. Lauren Boebert (R-Colo.) and other Republicans called on Griswold to resign. Griswold said she would stay on the job.

Griswold: “I’m going to keep doing my job”

Republicans in the state House “and Congresswoman Lauren Boebert are the same folks who have spread conspiracies and lies about our election systems over and over and over again,” Griswold told Colorado Public Radio. “Ultimately, a civil servant made a serious mistake and we’re actively working to address it.” Griswold added, “I have faced conspiracy theories from elected Republicans in this state, and I have not been stopped by any of their efforts and I’m going to keep on doing my job.”

Colorado previously had a voting-system breach orchestrated by former county clerk Tina Peters of Mesa County, who was sentenced to nine years in prison in early October. Peters, who promoted former President Donald Trump’s election conspiracy theories, oversaw a leak of voting-system BIOS passwords. Griswold said after the Peters conviction that “Tina Peters willfully compromised her own election equipment trying to prove Trump’s big lie.”

Testimony from the Peters case was cited in the GOP’s criticism of Griswold this week. “In the Tina Peters trial, a senior State official even testified that release of these passwords in a single county represented a grave threat. Here, they have been released for the whole state,” the state GOP said.

The Trump campaign called on Griswold to halt the processing of mail ballots and re-scan all mailed ballots that were already scanned.


Jon is a Senior IT Reporter for Ars Technica. He covers the telecom industry, Federal Communications Commission rulemakings, broadband consumer affairs, court cases, and government regulation of the tech industry.


Dropbox lays off 20% of staff, says it overinvested and underperformed

Dropbox is laying off 528 employees in a move that will reduce its global workforce by 20 percent, CEO Drew Houston announced today.

Houston wrote that Dropbox’s core file sync and sharing “business has matured, and we’ve been working to build our next phase of growth with products like Dash,” an “AI-powered universal search” product targeted to business customers. The company’s “current structure and investment levels” are “no longer sustainable,” according to Houston.

“We continue to see softening demand and macro headwinds in our core business,” Houston wrote. “But external factors are only part of the story. We’ve heard from many of you that our organizational structure has become overly complex, with excess layers of management slowing us down.”

Dropbox previously cut 500 employees in an April 2023 round of layoffs. At the time, Houston said that Dropbox’s business was profitable but growth was slowing.

Today, Houston said that Dropbox is “still not delivering at the level our customers deserve or performing in line with industry peers. So we’re making more significant cuts in areas where we’re over-invested or underperforming while designing a flatter, more efficient team structure overall.”

In a Securities and Exchange Commission filing, Dropbox said it expects to “make total cash expenditures of approximately $63 million to $68 million in connection with the reduction in force, primarily consisting of severance payments, employee benefits and related costs.” Laid-off employees are eligible for 16 weeks of pay, plus one additional week of pay for each year of tenure, Houston wrote. He also said the laid-off workers “will receive their Q4 equity vest” and will be eligible for a pro-rated payment equivalent to their 2024 bonus target.
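The severance terms Houston described reduce to simple arithmetic; here is a minimal sketch, assuming the weeks stack linearly as stated (the tenure values are hypothetical):

```python
def severance_weeks(years_of_tenure: int) -> int:
    """16 weeks base plus one extra week per year of tenure, per the announcement."""
    return 16 + years_of_tenure

# Hypothetical employees with 1, 5, and 10 years at the company.
print([severance_weeks(y) for y in (1, 5, 10)])  # [17, 21, 26]
```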


Android Trojan that intercepts voice calls to banks just got more stealthy

Much of the new obfuscation comes from hiding malicious code in a .dex file that the apps dynamically decrypt and load. As a result, Zimperium initially believed the malicious apps it was analyzing belonged to a previously unknown malware family. Then the researchers dumped the .dex file from an infected device’s memory and performed static analysis on it.

“As we delved deeper, a pattern emerged,” Ortega wrote. “The services, receivers, and activities closely resembled those from an older malware variant with the package name com.secure.assistant.” That package allowed the researchers to link it to the FakeCall Trojan.

Many of the new features don’t appear to be fully implemented yet. Besides the obfuscation, other new capabilities include:

Bluetooth Receiver

This receiver functions primarily as a listener, monitoring Bluetooth status and changes. Notably, there is no immediate evidence of malicious behavior in the source code, raising questions about whether it serves as a placeholder for future functionality.

Screen Receiver

Similar to the Bluetooth receiver, this component only monitors the screen’s state (on/off) without revealing any malicious activity in the source code.

Accessibility Service

The malware incorporates a new service inherited from the Android Accessibility Service, granting it significant control over the user interface and the ability to capture information displayed on the screen. The decompiled code shows methods such as onAccessibilityEvent() and onCreate() implemented in native code, obscuring their specific malicious intent.

While the provided code snippet focuses on the service’s lifecycle methods implemented in native code, earlier versions of the malware give us clues about possible functionality:

  • Monitoring Dialer Activity: The service appears to monitor events from the com.skt.prod.dialer package (the stock dialer app), potentially allowing it to detect when the user is attempting to make calls using apps other than the malware itself.
  • Automatic Permission Granting: The service seems capable of detecting permission prompts from the com.google.android.permissioncontroller (system permission manager) and com.android.systemui (system UI). Upon detecting specific events (e.g., TYPE_WINDOW_STATE_CHANGED), it can automatically grant permissions for the malware, bypassing user consent.
  • Remote Control: The malware enables remote attackers to take full control of the victim’s device UI, allowing them to simulate user interactions, such as clicks, gestures, and navigation across apps. This capability enables the attacker to manipulate the device with precision.

Phone Listener Service

This service acts as a conduit between the malware and its Command and Control (C2) server, allowing the attacker to issue commands and execute actions on the infected device. Like its predecessor, the new variant provides attackers with a comprehensive set of capabilities (see the table below). Some functionalities have been moved to native code, while others are new additions, further enhancing the malware’s ability to compromise devices.

The Kaspersky post from 2022 said that the only language supported by FakeCall was Korean and that the Trojan appeared to target several specific banks in South Korea. Last year, researchers from security firm ThreatFabric said the Trojan had begun supporting English, Japanese, and Chinese, although there were no indications people speaking those languages were actually targeted.


Downey Jr. plans to fight AI re-creations from beyond the grave

Robert Downey Jr. has declared that he will sue any future Hollywood executives who try to re-create his likeness using AI digital replicas, as reported by Variety. His comments came during an appearance on the “On With Kara Swisher” podcast, where he discussed AI’s growing role in entertainment.

“I intend to sue all future executives just on spec,” Downey told Swisher when discussing the possibility of studios using AI or deepfakes to re-create his performances after his death. When Swisher pointed out he would be deceased at the time, Downey responded that his law firm “will still be very active.”

The Oscar winner expressed confidence that Marvel Studios would not use AI to re-create his Tony Stark character, citing his trust in decision-makers there. “I am not worried about them hijacking my character’s soul because there’s like three or four guys and gals who make all the decisions there anyway and they would never do that to me,” he said.

Downey currently performs on Broadway in McNeal, a play that examines corporate leaders in AI technology. During the interview, he freely critiqued tech executives; Variety highlighted one quote in which he criticized tech leaders who do potentially harmful things while seeking positive attention.


Hospitals adopt error-prone AI transcription tools despite warnings

In one case from the study cited by AP, when a speaker described “two other girls and one lady,” Whisper added fictional text specifying that they “were Black.” In another, the audio said, “He, the boy, was going to, I’m not sure exactly, take the umbrella.” Whisper transcribed it to, “He took a big piece of a cross, a teeny, small piece … I’m sure he didn’t have a terror knife so he killed a number of people.”

An OpenAI spokesperson told the AP that the company appreciates the researchers’ findings and that it actively studies how to reduce fabrications and incorporates feedback in updates to the model.

Why Whisper confabulates

The key to Whisper’s unsuitability in high-risk domains comes from its propensity to sometimes confabulate, or plausibly make up, inaccurate outputs. The AP report says, “Researchers aren’t certain why Whisper and similar tools hallucinate,” but that isn’t true. We know exactly why Transformer-based AI models like Whisper behave this way.

Whisper is based on technology that is designed to predict the next most likely token (chunk of data) that should appear after a sequence of tokens provided by a user. In the case of ChatGPT, the input tokens come in the form of a text prompt. In the case of Whisper, the input is tokenized audio data.

The transcription output from Whisper is a prediction of what is most likely, not what is most accurate. Accuracy in Transformer-based outputs is typically proportional to the presence of relevant accurate data in the training dataset, but it is never guaranteed. If there is ever a case where there isn’t enough contextual information in its neural network for Whisper to make an accurate prediction about how to transcribe a particular segment of audio, the model will fall back on what it “knows” about the relationships between sounds and words it has learned from its training data.
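That fallback behavior can be illustrated with a toy next-token predictor (an invented bigram table, not Whisper's actual architecture): given a context seen in training, the prediction tracks the data, but given an unfamiliar context, the model still emits the globally most likely token rather than signaling uncertainty.

```python
from collections import Counter

# Toy "training data": counts of which word follows which.
training = [
    ("take", "the"), ("take", "the"), ("take", "an"),
    ("the", "umbrella"), ("the", "umbrella"), ("the", "knife"),
]

bigrams = Counter(training)
unigrams = Counter(nxt for _, nxt in training)

def predict_next(context: str) -> str:
    """Return the most likely next token; fall back to the global prior."""
    candidates = {nxt: n for (ctx, nxt), n in bigrams.items() if ctx == context}
    if candidates:
        return max(candidates, key=candidates.get)
    # Unseen context: the model still produces *something* plausible-sounding.
    return max(unigrams, key=unigrams.get)

print(predict_next("the"))  # grounded in the training data
print(predict_next("zzz"))  # confabulated from the global prior
```

The second call never says "I don't know"; it simply outputs its best guess, which is the essential shape of a confabulation.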


Removal of Russian coders spurs debate about Linux kernel’s politics

“Remove some entries due to various compliance requirements. They can come back in the future if sufficient documentation is provided.”

That two-line comment, submitted by major Linux kernel maintainer Greg Kroah-Hartman, accompanied a patch that removed about a dozen names from the kernel’s MAINTAINERS file. “Some entries” notably had either Russian names or .ru email addresses. “Various compliance requirements” meant, in this case, sanctions against Russia and Russian companies stemming from that country’s invasion of Ukraine.

This merge did not go unnoticed. Replies on the kernel mailing list asked about this “very vague” patch. Kernel developer James Bottomley wrote that “we” (seemingly speaking for Linux maintainers) had “actual advice” from Linux Foundation counsel. Employees of companies on the Treasury Department’s Office of Foreign Assets Control list of Specially Designated Nationals and Blocked Persons (OFAC SDN), or connected to them, will have their collaborations “subject to restrictions,” and “cannot be in the MAINTAINERS file.” “Sufficient documentation” would mean evidence that someone does not work for an OFAC SDN entity, Bottomley wrote.

There followed a number of messages questioning the commit’s legitimacy, its suddenness, its lack of review, and whether it had been forced by the US, along with broader questions about the separation of open source code from international politics. Linux creator Linus Torvalds entered the thread with, “Ok, lots of Russian trolls out and about.” He wrote: “It’s entirely clear why the change was done” and noted that “Russian troll factories” will not get it reverted and that “the ‘various compliance requirements’ are not just a US thing.”


Phone tracking tool lets government agencies follow your every move

Both operating systems will display a list of apps and whether they are permitted access always, never, only while the app is in use, or to prompt for permission each time. Both also allow users to choose whether the app sees precise locations down to a few feet or only a coarse-grained location.

For most users, it is useful to let a photo, transit, or maps app access their precise location. For other classes of apps—say, those for Internet jukeboxes at bars and restaurants—an approximate location can be helpful, but precise, fine-grained access is likely overkill. And for other apps, there’s no reason for them ever to know the device’s location. With a few exceptions, there’s little reason for apps to always have location access.

Not surprisingly, Android users who want to block intrusive location gathering have more settings to change than iOS users. The first thing to do is access Settings > Security & Privacy > Ads and choose “Delete advertising ID.” Then, promptly ignore the long, scary warning Google provides and hit the button confirming the decision at the bottom. If you don’t see that setting, good for you. It means you already deleted it. Google provides documentation here.

iOS, by default, doesn’t give apps access to “Identifier for Advertisers,” Apple’s version of the unique tracking number assigned to iPhones, iPads, and AppleTVs. Apps, however, can display a window asking that the setting be turned on, so it’s useful to check. iPhone users can do this by accessing Settings > Privacy & Security > Tracking. Any apps with permission to access the unique ID will appear. While there, users should also turn off the “Allow Apps to Request to Track” button. While in iOS Privacy & Security, users should navigate to Apple Advertising and ensure Personalized Ads is turned off.

Additional coverage of Location X from Haaretz and NOTUS is here and here. The New York Times, the other publication given access to the data, hadn’t posted an article at the time this Ars post went live.


At TED AI 2024, experts grapple with AI’s growing pains


A year later, a compelling group of TED speakers moves from “what’s this?” to “what now?”

The opening moments of TED AI 2024 in San Francisco on October 22, 2024. Credit: Benj Edwards

SAN FRANCISCO—On Tuesday, TED AI 2024 kicked off its first day at San Francisco’s Herbst Theater with a lineup of speakers that tackled AI’s impact on science, art, and society. The two-day event brought a mix of researchers, entrepreneurs, lawyers, and other experts who painted a complex picture of AI with fairly minimal hype.

The second annual conference, organized by Walter and Sam De Brouwer, marked a notable shift from last year’s broad existential debates and proclamations of AI as being “the new electricity.” Rather than sweeping predictions about, say, looming artificial general intelligence (although there was still some of that, too), speakers mostly focused on immediate challenges: battles over training data rights, proposals for hardware-based regulation, debates about human-AI relationships, and the complex dynamics of workplace adoption.

The day’s sessions covered a wide breadth: physicist Carlo Rovelli explored consciousness and time, Project CETI researcher Patricia Sharma demonstrated attempts to use AI to decode whale communication, Recording Academy CEO Harvey Mason Jr. outlined music industry adaptation strategies, and even a few robots made appearances.

The shift from last year’s theoretical discussions to practical concerns was particularly evident during a presentation from Ethan Mollick of the Wharton School, who tackled what he called “the productivity paradox”—the disconnect between AI’s measured impact and its perceived benefits in the workplace. Already, organizations are moving beyond the gee-whiz period after ChatGPT’s introduction and into the implications of widespread use.

Sam De Brouwer and Walter De Brouwer organized TED AI and selected the speakers. Benj Edwards

Drawing from research claiming AI users complete tasks faster and more efficiently, Mollick highlighted a peculiar phenomenon: While one-third of Americans reported using AI in August of this year, managers often claim “no one’s using AI” in their organizations. Through a live demonstration using multiple AI models simultaneously, Mollick illustrated how traditional work patterns must evolve to accommodate AI’s capabilities. He also pointed to the emergence of what he calls “secret cyborgs”—employees quietly using AI tools without management’s knowledge. Regarding the future of jobs in the age of AI, he urged organizations to view AI as an opportunity for expansion rather than merely a cost-cutting measure.

Some giants in the AI field made an appearance. Jakob Uszkoreit, one of the eight co-authors of the now-famous “Attention is All You Need” paper that introduced Transformer architecture, reflected on the field’s rapid evolution. He distanced himself from the term “artificial general intelligence,” suggesting people aren’t particularly general in their capabilities. Uszkoreit described how the development of Transformers sidestepped traditional scientific theory, comparing the field to alchemy. “We still do not know how human language works. We do not have a comprehensive theory of English,” he noted.

Stanford professor Surya Ganguli presenting at TED AI 2024. Benj Edwards

And refreshingly, the talks went beyond AI language models. For example, Isomorphic Labs Chief AI Officer Max Jaderberg, who previously worked on Google DeepMind’s AlphaFold 3, gave a well-received presentation on AI-assisted drug discovery. He detailed how AlphaFold has already saved “1 billion years of research time” by discovering the shapes of proteins and showed how AI agents are now capable of running thousands of parallel drug design simulations that could enable personalized medicine.

Danger and controversy

While hype was less prominent this year, some speakers still spoke of AI-related dangers. Paul Scharre, executive vice president at the Center for a New American Security, warned about the risks of advanced AI models falling into malicious hands, specifically citing concerns about terrorist attacks with AI-engineered biological weapons. Drawing parallels to nuclear proliferation in the 1960s, Scharre argued that while regulating software is nearly impossible, controlling physical components like specialized chips and fabrication facilities could provide a practical framework for AI governance.

ReplikaAI founder Eugenia Kuyda cautioned that AI companions could become “the most dangerous technology if not done right,” suggesting that the existential threat from AI might come not from science fiction scenarios but from technology that isolates us from human connections. She advocated for designing AI systems that optimize for human happiness rather than engagement, proposing a “human flourishing metric” to measure its success.

Ben Zhao, a University of Chicago professor associated with the Glaze and Nightshade projects, painted a dire picture of AI’s impact on art, claiming that art schools were seeing unprecedented enrollment drops and galleries were closing at an accelerated rate due to AI image generators, though we have yet to dig through the supporting news headlines he momentarily flashed up on the screen.

Some of the speakers represented polar opposites of each other, policy-wise. For example, copyright attorney Angela Dunning offered a defense of AI training as fair use, drawing from historical parallels in technological advancement. A litigation partner at Cleary Gottlieb, which has previously represented the AI image generation service Midjourney in a lawsuit, Dunning quoted Mark Twain as saying “there is no such thing as a new idea” and argued that copyright law allows for building upon others’ ideas while protecting specific expressions. She compared current AI debates to past technological disruptions, noting how photography, once feared as a threat to traditional artists, instead sparked new artistic movements like abstract art and pointillism. “Art and science can only remain free if we are free to build on the ideas of those that came before,” Dunning said, challenging more restrictive views of AI training.

Copyright lawyer Angela Dunning quoted Mark Twain in her talk about fair use and AI. Benj Edwards

Dunning’s presentation stood in direct opposition to Ed Newton-Rex, who had earlier advocated for mandatory licensing of training data through his nonprofit Fairly Trained. In fact, the same day, Newton-Rex’s organization unveiled a “Statement on AI training” signed by many artists that says, “The unlicensed use of creative works for training generative AI is a major, unjust threat to the livelihoods of the people behind those works, and must not be permitted.” The issue has not yet been legally settled in US courts, but clearly, the battle lines have been drawn, and no matter which side you take, TED AI did a good job of giving both perspectives to the audience.

Looking forward

Some speakers explored potential new architectures for AI. Stanford professor Surya Ganguli highlighted the contrast between AI and human learning, noting that while AI models require trillions of tokens to train, humans learn language from just millions of exposures. He proposed “quantum neuromorphic computing” as a potential bridge between biological and artificial systems, suggesting a future where computers could potentially match the energy efficiency of the human brain.

Also, Guillaume Verdon, founder of Extropic and architect of the Effective Accelerationism (often called “E/Acc”) movement, presented what he called “physics-based intelligence” and claimed his company is “building a steam engine for AI,” potentially offering energy efficiency improvements up to 100 million times better than traditional systems—though he acknowledged this figure ignores cooling requirements for superconducting components. The company had completed its first room-temperature chip tape-out just the previous week.

The Day One sessions closed out with predictions about the future of AI from OpenAI’s Noam Brown, who emphasized the importance of scale in expanding future AI capabilities, and from University of Washington professor Pedro Domingos, who spoke about “co-intelligence,” saying, “People are smart, organizations are stupid” and proposing that AI could be used to bridge that gap by drawing on the collective intelligence of an organization.

When I attended TED AI last year, some obvious questions emerged: Is this current wave of AI a fad? Will there be a TED AI next year? I think the second TED AI answered these questions well—AI isn’t going away, and there are still endless angles to explore as the field expands rapidly.


Benj Edwards is Ars Technica’s Senior AI Reporter and founder of the site’s dedicated AI beat in 2022. He’s also a widely-cited tech historian. In his free time, he writes and records music, collects vintage computers, and enjoys nature. He lives in Raleigh, NC.

At TED AI 2024, experts grapple with AI’s growing pains Read More »

fortigate-admins-report-active-exploitation-0-day-vendor-isn’t-talking.

FortiGate admins report active exploitation 0-day. Vendor isn’t talking.

Citing the Reddit comment, Beaumont took to Mastodon to explain: “People are quite openly posting what is happening on Reddit now, threat actors are registering rogue FortiGates into FortiManager with hostnames like ‘localhost’ and using them to get RCE.”

Beaumont wasn’t immediately available to elaborate. In the same thread, another user said that based on the brief description, it appears attackers are somehow stealing digital certificates authenticating a device to a customer network, loading it onto a FortiGate device they own, and then registering the device into the customer network.

The person continued:

From there, they can configure their way into your network or possibly take other admin actions (eg. possibly sync configs from trustworthy managed devices to their own?) It’s not super clear from these threads. The mitigation to prevent unknown serial numbers suggests that a speedbump to fast onboarding prevents even a cert-bearing(?) device from being included into the fortimanager.
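The “prevent unknown serial numbers” mitigation the commenter alludes to appears to correspond to a FortiManager setting that refuses registration attempts from devices whose serial numbers aren’t already known to the manager. As a sketch based on Fortinet’s CLI conventions (option names and availability may vary by FortiManager version, so consult Fortinet’s own advisories before relying on it), enabling it would look something like:

```
config system global
    set fgfm-deny-unknown enable
end
```

With this set, a rogue FortiGate presenting a valid certificate but an unrecognized serial number should be blocked from onboarding—the “speedbump” the commenter describes.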

Beaumont went on to say that based on evidence he’s seen, China-state hackers have “been hopping into internal networks using this one since earlier in the year, looks like.”

60,000 devices exposed

After this post went live on Ars, Beaumont published a post that said the vulnerability likely resides in the FortiGate to FortiManager protocol. FGFM is the protocol that allows FortiGate firewall devices to communicate with FortiManager over port 541. As Beaumont pointed out, the Shodan search engine shows more than 60,000 such connections exposed to the Internet.

Beaumont wrote:

There’s one requirement for an attacker: you need a valid certificate to connect. However, you can just take a certificate from a FortiGate box and reuse it. So, effectively, there’s no barrier to registering.

Once registered, there’s a vulnerability which allows remote code execution on the FortiManager itself via the rogue FortiGate connection.

From the FortiManager, you can then manage the legit downstream FortiGate firewalls, view config files, take credentials and alter configurations. Because MSPs — Managed Service Providers — often use FortiManager, you can use this to enter internal networks downstream.

Because of the way FGFM is designed — NAT traversal situations — it also means if you gain access to a managed FortiGate firewall you then can traverse up to the managing FortiManager device… and then back down to other firewalls and networks.

To make matters harder for FortiGate customers and defenders, the company’s support portal was returning connection errors at the time this post went live on Ars that prevented people from accessing the site.

FortiGate admins report active exploitation 0-day. Vendor isn’t talking. Read More »

basecamp-maker-37signals-says-its-“cloud-exit”-will-save-it-$10m-over-5-years

Basecamp-maker 37Signals says its “cloud exit” will save it $10M over 5 years

Lots of pointing at clouds

In March, AWS made data transfer out of its cloud free for customers moving off its servers, a change spurred in part by European regulations. Trade publications are full of trend stories about rising cloud costs and explainers on why companies are repatriating. Stories of major players’ cloud reversals, like that of Dropbox, have become talking points for the cloud-averse.

Not everyone believes the sky is falling. Lydia Leong, a cloud computing analyst at Gartner, wrote on her own blog about how “the myth of cloud repatriation refuses to die.” A large part of this, Leong writes, is in how surveys and anecdotal news stories confuse various versions of “repatriation” from managed service providers to self-hosted infrastructure.

“None of these things are in any way equivalent to the notion that there’s a broad or even common movement of workloads from the cloud back on-premises, though, especially for those customers who have migrated entire data centers or the vast majority of their IT estate to the cloud,” writes Leong.

Both Leong and Rich Hoyer, director of the FinOps group at SADA, suggest that framing the issue as simply “cloud versus on-premises” is too simplistic. A poorly architected split between cloud and on-prem, vague goals and measurements of cloud “cost” and “success,” and fuzzy return-on-investment math, Hoyer writes, are feeding alarmist takes on cloud costs.

For its part, AWS has itself testified that it faces competition from the on-premises IT movement, although it did so as part of a “Cloud Services Market Investigation” by UK market competition authorities. Red Hat and Citrix have suggested that, at a minimum, hybrid approaches have regained ground after a period of cloud primacy.

Those kinds of measured approaches don’t have the same broad reach as declaring an “exit” and putting a very round number on it, but it’s another interesting data point.

Ars has reached out to AWS and will update this post with comment.

Basecamp-maker 37Signals says its “cloud exit” will save it $10M over 5 years Read More »