San Francisco


At TED AI 2024, experts grapple with AI’s growing pains


A year later, a compelling group of TED speakers moved from “what’s this?” to “what now?”


The opening moments of TED AI 2024 in San Francisco on October 22, 2024. Credit: Benj Edwards

SAN FRANCISCO—On Tuesday, TED AI 2024 kicked off its first day at San Francisco’s Herbst Theater with a lineup of speakers that tackled AI’s impact on science, art, and society. The two-day event brought a mix of researchers, entrepreneurs, lawyers, and other experts who painted a complex picture of AI with fairly minimal hype.

The second annual conference, organized by Walter and Sam De Brouwer, marked a notable shift from last year’s broad existential debates and proclamations of AI as being “the new electricity.” Rather than sweeping predictions about, say, looming artificial general intelligence (although there was still some of that, too), speakers mostly focused on immediate challenges: battles over training data rights, proposals for hardware-based regulation, debates about human-AI relationships, and the complex dynamics of workplace adoption.

The day’s sessions covered a wide breadth: physicist Carlo Rovelli explored consciousness and time, Project CETI researcher Patricia Sharma demonstrated attempts to use AI to decode whale communication, Recording Academy CEO Harvey Mason Jr. outlined music industry adaptation strategies, and even a few robots made appearances.

The shift from last year’s theoretical discussions to practical concerns was particularly evident during a presentation from Ethan Mollick of the Wharton School, who tackled what he called “the productivity paradox”—the disconnect between AI’s measured impact and its perceived benefits in the workplace. Already, organizations are moving beyond the gee-whiz period after ChatGPT’s introduction and into the implications of widespread use.

Sam De Brouwer and Walter De Brouwer organized TED AI and selected the speakers. Benj Edwards

Drawing from research claiming AI users complete tasks faster and more efficiently, Mollick highlighted a peculiar phenomenon: While one-third of Americans reported using AI in August of this year, managers often claim “no one’s using AI” in their organizations. Through a live demonstration using multiple AI models simultaneously, Mollick illustrated how traditional work patterns must evolve to accommodate AI’s capabilities. He also pointed to the emergence of what he calls “secret cyborgs”—employees quietly using AI tools without management’s knowledge. Regarding the future of jobs in the age of AI, he urged organizations to view AI as an opportunity for expansion rather than merely a cost-cutting measure.

Some giants in the AI field made an appearance. Jakob Uszkoreit, one of the eight co-authors of the now-famous “Attention is All You Need” paper that introduced Transformer architecture, reflected on the field’s rapid evolution. He distanced himself from the term “artificial general intelligence,” suggesting people aren’t particularly general in their capabilities. Uszkoreit described how the development of Transformers sidestepped traditional scientific theory, comparing the field to alchemy. “We still do not know how human language works. We do not have a comprehensive theory of English,” he noted.

Stanford professor Surya Ganguli presenting at TED AI 2024. Benj Edwards

And refreshingly, the talks went beyond AI language models. For example, Isomorphic Labs Chief AI Officer Max Jaderberg, who previously worked on Google DeepMind’s AlphaFold 3, gave a well-received presentation on AI-assisted drug discovery. He detailed how AlphaFold has already saved “1 billion years of research time” by discovering the shapes of proteins and showed how AI agents are now capable of running thousands of parallel drug design simulations that could enable personalized medicine.

Danger and controversy

While hype was less prominent this year, some speakers still spoke of AI-related dangers. Paul Scharre, executive vice president at the Center for a New American Security, warned about the risks of advanced AI models falling into malicious hands, specifically citing concerns about terrorist attacks with AI-engineered biological weapons. Drawing parallels to nuclear proliferation in the 1960s, Scharre argued that while regulating software is nearly impossible, controlling physical components like specialized chips and fabrication facilities could provide a practical framework for AI governance.

ReplikaAI founder Eugenia Kuyda cautioned that AI companions could become “the most dangerous technology if not done right,” suggesting that the existential threat from AI might come not from science fiction scenarios but from technology that isolates us from human connections. She advocated for designing AI systems that optimize for human happiness rather than engagement, proposing a “human flourishing metric” to measure its success.

Ben Zhao, a University of Chicago professor associated with the Glaze and Nightshade projects, painted a dire picture of AI’s impact on art, claiming that art schools were seeing unprecedented enrollment drops and galleries were closing at an accelerated rate due to AI image generators, though we have yet to dig through the supporting news headlines he momentarily flashed up on the screen.

Some of the speakers represented polar opposites of each other, policy-wise. For example, copyright attorney Angela Dunning offered a defense of AI training as fair use, drawing from historical parallels in technological advancement. A litigation partner at Cleary Gottlieb, which has previously represented the AI image generation service Midjourney in a lawsuit, Dunning quoted Mark Twain, saying “there is no such thing as a new idea,” and argued that copyright law allows for building upon others’ ideas while protecting specific expressions. She compared current AI debates to past technological disruptions, noting how photography, once feared as a threat to traditional artists, instead sparked new artistic movements like abstract art and pointillism. “Art and science can only remain free if we are free to build on the ideas of those that came before,” Dunning said, challenging more restrictive views of AI training.

Copyright lawyer Angela Dunning quoted Mark Twain in her talk about fair use and AI. Benj Edwards

Dunning’s presentation stood in direct opposition to Ed Newton-Rex, who had earlier advocated for mandatory licensing of training data through his nonprofit Fairly Trained. In fact, the same day, Newton-Rex’s organization unveiled a “Statement on AI training” signed by many artists that says, “The unlicensed use of creative works for training generative AI is a major, unjust threat to the livelihoods of the people behind those works, and must not be permitted.” The issue has not yet been legally settled in US courts, but clearly, the battle lines have been drawn, and no matter which side you take, TED AI did a good job of giving both perspectives to the audience.

Looking forward

Some speakers explored potential new architectures for AI. Stanford professor Surya Ganguli highlighted the contrast between AI and human learning, noting that while AI models require trillions of tokens to train, humans learn language from just millions of exposures. He proposed “quantum neuromorphic computing” as a potential bridge between biological and artificial systems, suggesting a future where computers could potentially match the energy efficiency of the human brain.

Also, Guillaume Verdon, founder of Extropic and architect of the Effective Accelerationism (often called “E/Acc”) movement, presented what he called “physics-based intelligence” and claimed his company is “building a steam engine for AI,” potentially offering energy efficiency improvements up to 100 million times better than traditional systems—though he acknowledged this figure ignores cooling requirements for superconducting components. The company had completed its first room-temperature chip tape-out just the previous week.

The Day One sessions closed out with predictions about the future of AI from OpenAI’s Noam Brown, who emphasized the importance of scale in expanding future AI capabilities, and from University of Washington professor Pedro Domingos, who spoke about “co-intelligence,” saying, “People are smart, organizations are stupid,” and proposing that AI could be used to bridge that gap by drawing on the collective intelligence of an organization.

When I attended TED AI last year, some obvious questions emerged: Is this current wave of AI a fad? Will there be a TED AI next year? I think the second TED AI answered these questions well—AI isn’t going away, and there are still endless angles to explore as the field expands rapidly.


Benj Edwards is Ars Technica’s Senior AI Reporter and founder of the site’s dedicated AI beat in 2022. He’s also a widely-cited tech historian. In his free time, he writes and records music, collects vintage computers, and enjoys nature. He lives in Raleigh, NC.



San Francisco to pay $212 million to end reliance on 5.25-inch floppy disks

The San Francisco Municipal Transportation Agency (SFMTA) board has agreed to spend $212 million to get its Muni Metro light rail off floppy disks.

The Muni Metro’s Automatic Train Control System (ATCS) has required 5¼-inch floppy disks since 1998, when it was installed at San Francisco’s Market Street subway station. The system uses three floppy disks for loading DOS software that controls the system’s central servers. Michael Roccaforte, an SFMTA spokesperson, gave further details on how the light rail operates to Ars Technica in April, saying: “When a train enters the subway, its onboard computer connects to the train control system to run the train in automatic mode, where the trains drive themselves while the operators supervise. When they exit the subway, they disconnect from the ATCS and return to manual operation on the street.” After starting initial planning in 2018, the SFMTA originally expected to move to a floppy-disk-free train control system by 2028. But with COVID-19 preventing work for 18 months, the estimated completion date was delayed.

On October 15, the SFMTA moved closer to ditching floppies when its board approved a contract with Hitachi Rail for implementing a new train control system that doesn’t use floppy disks, the San Francisco Chronicle reported. Hitachi Rail tech is said to power train systems, including Japan’s bullet train, in more than 50 countries. The $212 million contract includes support services from Hitachi for “20 to 25 years,” the Chronicle said.

The new control system is supposed to be five generations ahead of what Muni is using now, Muni director Julie Kirschbaum said, per the Chronicle. Further illustrating the light rail’s dated tech, the current ATCS was designed to last 20 to 25 years, meaning its expected expiration date was in 2023. The system still works fine, but the risk of floppy disk data degradation and challenges in maintaining expertise in 1990s programming languages have further encouraged the SFMTA to seek upgrades.



Self-driving Waymo cars keep SF residents awake all night by honking at each other

The ghost in the machine —

Haunted by glitching algorithms, self-driving cars disturb the peace in San Francisco.

A Waymo self-driving car in front of Google’s San Francisco headquarters, San Francisco, California, June 7, 2024.

Silicon Valley’s latest disruption? Your sleep schedule. On Saturday, NBC Bay Area reported that San Francisco’s South of Market residents are being awakened throughout the night by Waymo self-driving cars honking at each other in a parking lot. No one is inside the cars, and they appear to be automatically reacting to each other’s presence.

Videos provided by residents to NBC show Waymo cars filing into the parking lot and attempting to back into spots, which seems to trigger honking from other Waymo vehicles. The automatic nature of these interactions—which seem to peak around 4 am every night—has left neighbors bewildered and sleep-deprived.

NBC Bay Area’s report: “Waymo cars keep SF neighborhood awake.”

According to NBC, the disturbances began several weeks ago when Waymo vehicles started using a parking lot off 2nd Street near Harrison Street. Residents in nearby high-rise buildings have observed the autonomous vehicles entering the lot to pause between rides, but the cars’ behavior has become a source of frustration for the neighborhood.

Christopher Cherry, who lives in an adjacent building, told NBC Bay Area that he initially welcomed Waymo’s presence, expecting it to enhance local security and tranquility. However, his optimism waned as the frequency of honking incidents increased. “We started out with a couple of honks here and there, and then as more and more cars started to arrive, the situation got worse,” he told NBC.

The lack of human operators in the vehicles has complicated efforts to address the issue directly since there is no one they can ask to stop honking. That lack of accountability forced residents to report their concerns to Waymo’s corporate headquarters, which had not responded to the incidents until NBC inquired as part of its report. A Waymo spokesperson told NBC, “We are aware that in some scenarios our vehicles may briefly honk while navigating our parking lots. We have identified the cause and are in the process of implementing a fix.”

The absurdity of the situation prompted tech author and journalist James Vincent to write on X, “current tech trends are resistant to satire precisely because they satirize themselves. a car park of empty cars, honking at one another, nudging back and forth to drop off nobody, is a perfect image of tech serving its own prerogatives rather than humanity’s.”



Waymo is suing people who allegedly smashed and slashed its robotaxis

Waymo car is vandalized in San Francisco

The people of San Francisco haven’t always been kind to Waymo’s growing fleet of driverless taxis. The autonomous vehicles, which provide tens of thousands of rides each week, have been torched, stomped on, and verbally berated in recent months. Now Waymo is striking back—in the courts.

This month, the Silicon Valley company filed a pair of lawsuits, neither of which has been previously reported, that demand hundreds of thousands of dollars in damages from two alleged vandals. Waymo attorneys said in court papers that the alleged vandalism, which ruined dozens of tires and a tail end, poses a significant threat to the company’s reputation. Riding in a vehicle in which the steering wheel swivels on its own can be scary enough. Having to worry about attackers allegedly targeting the rides could undermine Waymo’s ride-hailing business before it even gets past its earliest stage.

Waymo, which falls under the umbrella of Google parent Alphabet, operates a ride-hailing service in San Francisco, Phoenix, and Los Angeles that is comparable to Uber and Lyft except with sensors and software controlling the driving. While its cars haven’t contributed to any known deadly crashes, US regulators continue to probe their sometimes erratic driving. Waymo spokesperson Sandy Karp says the company always prioritizes safety and that the lawsuits reflect that strategy. She declined further comment for this story.

In a filing last week in the California Superior Court of San Francisco County, Waymo sued a Tesla Model 3 driver whom it alleges intentionally rear-ended one of its autonomous Jaguar crossovers. According to the suit, the driver, Konstantine Nikka-Sher Piterman, claimed in a post on X that “Waymo just rekt me” before going on to ask Tesla CEO Elon Musk for a job. The other lawsuit from this month, filed in the same court, targets Ronaile Burton, who allegedly slashed the tires of at least 19 Waymo vehicles. San Francisco prosecutors have filed criminal charges against her to which she has pleaded not guilty. A hearing is scheduled for Tuesday.

Burton’s public defender, Adam Birka-White, says in a statement that Burton “is someone in need of help and not jail” and that prosecutors continue “to prioritize punishing poor people at the behest of corporations, in this case involving a tech company that is under federal investigation for creating dangerous conditions on our streets.”

An attorney for Burton in the civil case hasn’t been named in court records, and Burton is currently in jail and couldn’t be reached for comment. Piterman didn’t respond to a voicemail, a LinkedIn message, and emails seeking comment. He hasn’t responded in court to the accusations.

Based on available records from courts in San Francisco and Phoenix, it appears that Waymo hasn’t previously filed similar lawsuits.

In the Tesla case, Piterman “unlawfully, maliciously, and intentionally” sped his car past a stop sign and into a Waymo car in San Francisco on March 19, according to the company’s suit. When the Waymo tried to pull over, Piterman allegedly drove the Tesla into the Waymo car again. He then allegedly entered the Waymo and later threatened a Waymo representative who responded to the scene in person. San Francisco police cited Piterman, according to the lawsuit. The police didn’t respond to WIRED’s request for comment.
