Author name: Kris Guyer

Intel Core Ultra 200V promises Arm-beating battery life without compatibility issues

Intel has formally announced its first batch of next-generation Core Ultra processors, codenamed “Lunar Lake.” The CPUs will be available in PCs beginning on September 24.

Formally dubbed “Intel Core Ultra (Series 2),” these CPUs follow up the Meteor Lake Core Ultra CPUs that Intel has been shipping all year. They promise modest CPU performance increases alongside big power efficiency and battery life improvements, much faster graphics performance, and a new neural processing engine (NPU) that will meet Microsoft’s requirements for Copilot+ PCs that use local rather than cloud processing for generative AI and machine-learning features.

Intel Core Ultra 200V

The high-level enhancements coming to the Lunar Lake Core Ultra chips.

Intel

The most significant numbers in today’s update are actually about battery life: Intel compared a Lunar Lake system and a Snapdragon X Elite system from the “same OEM” using the “same chassis” and the same-sized 55 WHr battery. In the Procyon Office Productivity test, the Intel system lasted longer, though the Qualcomm system lasted longer on a Microsoft Teams call.

If Intel’s Lunar Lake laptops can match or even get close to Qualcomm’s battery life, it will be a big deal for Intel; as the company repeatedly stresses in its slide deck, x86 PCs don’t have the lingering app, game, and driver compatibility problems that Arm-powered Windows systems still do. If Intel can close the battery-life gap more quickly than Qualcomm, Microsoft, and app developers can fix Arm software compatibility, some of the current best arguments in favor of buying an Arm PC will go away.

  • Intel is trying to fight back against Qualcomm’s battery life advantage in Windows PCs.

    Intel

  • Many of Lunar Lake’s changes were done in service of reducing power use.

  • Here, Intel claims a larger advantage in battery life against both Qualcomm and AMD, though there are lots of variables that determine battery life, and we’ll need to see more real-world testing to back these numbers up.

Intel detailed many other Lunar Lake changes earlier this summer when it announced high-level performance numbers for the CPU, GPU, and NPU.

Like Meteor Lake, the Lunar Lake processors are a collection of silicon chiplets (also called “tiles”) fused into one large chip using Intel’s Foveros packaging technology. The big difference is that there are fewer functional tiles—two, instead of four, not counting the blank “filler tile” or the base tile that ties them all together—and that both of those tiles are now being manufactured at Intel competitor TSMC, rather than using a mix of TSMC and Intel manufacturing processes as Meteor Lake did.

Intel also said it would be shipping Core Ultra CPUs with the system RAM integrated into the CPU package, which Apple also does for its M-series Mac processors; Intel says this will save quite a bit of power relative to external RAM soldered to the laptop’s motherboard.

Keep that change in mind when looking at the list of initial Core Ultra 200V-series processors Intel is announcing today. There are technically nine separate CPU models here, but because memory is integrated into the CPU package, Intel is counting the 16GB and 32GB versions of the same processor as two separate model numbers. The exception is the Core Ultra 9 288V, which is only available with 32GB of memory.

Jaguar I-Pace fire risk leads to recall, instructions to park outdoors

fold tab to recall —

The problem is similar to one that affected the Chevrolet Bolt in 2021.

Jaguar sourced the I-Pace’s battery cells from LG Energy Solutions. But now there’s a problem with some of them.

Jonathan Gitlin

The Jaguar I-Pace deserves more credit. When it debuted in 2018, it was one of only two electric vehicles on sale that could offer Tesla-rivaling range. The other was the much more plebeian Chevrolet Bolt, which was cheaper but nowhere near as luxurious, nor as enjoyable to drive. Now, some I-Pace and Bolt owners have something else in common, as Jaguar issues a recall for some model-year 2019 I-Paces due to a fire risk, probably caused by badly folded battery anode tabs.

The problem doesn’t affect all I-Paces, just those built between January 9, 2018, and March 14, 2019—2,760 cars in total in the US. To date, three fires have been reported in cars that had received an earlier software update, which Jaguar’s recall report says does not provide “an appropriate level of protection for the 2019MY vehicles in the US.”

Although Jaguar’s investigation is still ongoing, it says that its battery supplier (LG Energy Solutions) is inspecting some battery modules that were identified by diagnostic software as “having characteristics of a folded anode tab.” In 2021, problems with LG batteries—in this case, folded separators and torn anode tabs—resulted in Chevrolet recalling every Bolt on the road and replacing their batteries under warranty at a cost of more than $1.8 billion.

For now, the Jaguar recall is less drastic. A software update will cap the affected cars’ maximum charge at 80 percent, preventing the packs from ever reaching a full charge. Jaguar also says that, in line with other OEMs that have conducted recalls for similar problems, the patched I-Paces should be parked away from structures for 30 days post-recall and should be charged outdoors where possible.

Asus ROG Ally X review: Better performance and feel in a pricey package

Faster, grippier, pricier, and just as Windows-ed —

A great hardware refresh, but it stands out for its not-quite-handheld cost.

It’s hard to fit the performance-minded but pricey ROG Ally X into a simple product category. It’s also tricky to fit it into a photo, at the right angle, while it’s in your hands.

Kevin Purdy

The first ROG Ally from Asus, a $700 Windows-based handheld gaming PC, performed better than the Steam Deck, but it did so through notable compromises on battery life. The hardware also had a first-gen feel and software jank from both Asus’ own wraparound gaming app and Windows itself. The Ally asked an awkward question: “Do you want to pay nearly 50 percent more than you’d pay for a Steam Deck for a slightly faster but far more awkward handheld?”

The ROG Ally X makes that question more interesting and less obvious to answer. Yes, it’s still a handheld that’s trying to hide Windows annoyances, and it’s still missing trackpads, without which some PC games just feel bad. And (review spoiler) it still eats a charge faster than the Steam Deck OLED on less demanding games.

But the improvements Asus made to this X sequel are notable, and its new performance stats make it more viable for those who want to play more demanding games on a rather crisp screen. At $800, or $100 more than the original ROG Ally with no extras thrown in, you have to really, really want the best possible handheld gaming experience while still tolerating Windows’ awkward fit.

Asus

What’s new in the Ally X

Specs at a glance: Asus ROG Ally X
Display 7-inch IPS panel: 1920×1080, 120 Hz, 7 ms, 500 nits, 100% sRGB, FreeSync, Gorilla Glass Victus
OS Windows 11 (Home)
CPU AMD Ryzen Z1 Extreme (Zen 4, 8 cores, 24MB cache, up to 5.10 GHz), 9–30 W (as reviewed)
RAM 24GB LPDDR5X 6400 MHz
GPU AMD Radeon RDNA3, 2.7 GHz, 8.6 teraflops
Storage M.2 NVMe 2280 Gen4x4, 1TB (as reviewed)
Networking Wi-Fi 6E, Bluetooth 5.2
Battery 80 Wh (65 W max charge)
Ports USB-C (3.2 Gen2, DP 1.4, PD 3.0), USB-C (DP, PD 3.0), 3.5 mm audio, microSD
Size 11×4.3×0.97 in. (280×111×25 mm)
Weight 1.49 lbs (678 g)
Price as reviewed $800

The ROG Ally X is essentially the ROG Ally with a bigger battery packed into a shell that is impressively not much bigger or heavier, more storage and RAM, and two USB-C ports instead of one USB-C and one weird mobile port that nobody could use. Asus reshaped the device and changed the face-button feel, and it all feels noticeably better, especially now that gaming sessions can last longer. The company also moved the microSD card slot so that your cards don’t melt, which is nice.

There’s a bit more to each of those changes that we’ll get into, but that’s the short version. Small spec bumps wouldn’t have changed much about the ROG Ally experience, but the changes Asus made for the X version do move the needle. Having more RAM available has a sizable impact on the frame performance of demanding games, and you can see that in our benchmarks.

We kept the LCD Steam Deck in our benchmarks because its chip has roughly the same performance as its OLED upgrade. But it’s really the Ally-to-Ally-X comparisons that are interesting; the Steam Deck has been fading back from AAA viability. If you want the Ally X to run modern, GPU-intensive games as fast as is feasible for a battery-powered device, it can now do that a lot better—for longer—and feel a bit better while you do.

The ROG Ally X better answers the question “why not just buy a gaming laptop?” than its predecessor did. At $800 and up, you might still ask how much portability is worth to you. But the Ally X is not as much of a niche (Windows-based handheld) inside a niche (moderately higher-end handhelds).

I normally would not use this kind of handout image with descriptive text embedded, but Asus is right: the ROG Ally X is indeed way more comfortable (just maybe not all-caps).

Asus

How it feels using the ROG Ally X

My testing of the ROG Ally X consisted of benchmarks, battery testing, and playing some games on the couch. Specifically: Deep Rock Galactic: Survivor and Tactical Breach Wizards on the device’s lowest-power setting (“Silent”), Deathloop on its medium-power setting (“Performance”), and Shadow of the Erdtree on its all-out “Turbo” mode.

All four of those games worked mostly fine, but DRG: Survivor pushed the boundaries of Silent mode a bit when its levels got crowded with enemies and projectiles. Most games could automatically figure out a decent settings scheme for the Ally X. If a game offers AMD’s FSR (FidelityFX Super Resolution) upscaling, you should at least try it; it’s usually a big boon to a game running on this handheld.

Overall, the ROG Ally X was a device I didn’t notice when I was using it, which is the best recommendation I can make. Perhaps I noticed that the 1080p screen was brighter, closer to the glass, and sharper than the LCD (original) Steam Deck. At handheld distance, the difference between 800p and 1080p isn’t huge to me, but the difference between LCD and OLED is more noticeable. (Of course, an OLED version of the Steam Deck was released late last year.)

Sunrise alarm clock didn’t make waking up easier—but made sleeping more peaceful

  • The Hatch Restore 2 with one of its lighting options on.

    Scharon Harding

  • The time is visible here, but you can disable that.

  • Here’s the clock with a light on in the dark.

  • A closer look.

  • The clock’s backside.

To say “I’m not a morning person” would be an understatement. Not only is it hard for me to be useful in the first hour (or so) of being awake, but it’s hard for me to wake up. I mean, really hard.

I’ve tried various recommendations and tricks: I’ve set multiple alarms and had coffee ready and waiting, and I’ve put my alarm clock far from my bed and kept my blinds open so the sun could wake me. But I’m still prone to sleeping through my alarm or hitting snooze until the last minute.

The Hatch Restore 2, a smart alarm clock with lighting that mimics sunrises and sunsets, seemed like a technologically savvy approach to realizing my dreams of becoming a morning person.

After about three weeks, though, I’m still no early bird. But the smart alarm clock is still earning a spot on my nightstand.

How it works

Hatch refers to the Restore 2 as a “smart sleep clock.” That’s marketing speak, but to be fair, the Restore 2 does help me sleep. A product page describes the clock as targeting users’ “natural circadian rhythm, so you can get your best sleep.” There’s some reasoning here. Circadian rhythms are “the physical, mental, and behavioral changes an organism experiences over a 24-hour cycle,” per the National Institute of General Medical Sciences (NIGMS). Circadian rhythms affect our sleep patterns (as well as other biological aspects, like appetite), NIGMS says.

The Restore 2’s pitch is a clock programmed to emit soothing lighting, which you can make change gradually as it approaches bedtime (like get darker), partnered with an alarm clock that simulates a sunrise with brightening lighting that can help you wake up more naturally. You can set the clock to play various soothing sounds while you’re winding down, sleeping, and/or as your alarm sound.
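The sunrise effect comes down to a gradual brightness ramp. As a rough illustration only (Hatch doesn’t publish its actual curve, and the function name and 30-minute default here are invented), a smoothstep ramp brightens slowly at first and tapers near the end, rather than jumping linearly:

```python
# Hypothetical sketch of a sunrise-alarm brightness ramp; not Hatch's
# actual algorithm or API.

def sunrise_brightness(minutes_elapsed: float, ramp_minutes: float = 30.0) -> float:
    """Return a 0.0-1.0 brightness level that eases from dark to full
    brightness over the ramp, mimicking a gradually brightening sunrise."""
    t = min(max(minutes_elapsed / ramp_minutes, 0.0), 1.0)
    return t * t * (3 - 2 * t)  # smoothstep: gentle start and finish

print(sunrise_brightness(0))   # 0.0 (still dark)
print(sunrise_brightness(15))  # 0.5 (halfway through the ramp)
print(sunrise_brightness(30))  # 1.0 (full brightness at alarm time)
```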

The clock needs a Wi-Fi connection and its app to set up the device. The free app has plenty of options, including sounds, colors, and tips for restful sleep (there’s a subscription for extra features and sounds for $5 per month, but thankfully, it’s optional).

Out like a light

This is, by far, the most customizable alarm clock I’ve ever used. The app was a little overwhelming at first, but once I got used to it, it was comforting to be able to set Routines or different lighting/sounds for different days. For example, I set mine to play two hours of “Calming Singing Bowls” with a slowly dimming sunset effect when I press the “Rest” button. Once I press the button again, the clock plays ocean sounds until my alarm goes off.

  • Routines in the Restore 2 app.

    Scharon Harding/Hatch

  • Setting a sunrise alarm part one.

  • Setting a sunrise alarm part two. (Part three would show a volume slider.)

I didn’t think I needed a sleeping aid—I’m really good at sleeping. But I was surprised at how the Restore 2 helped me fall asleep more easily by blocking unpleasant noises. In my room, the biggest culprit is an aging air conditioner that’s loud while on, and it gets even more uproarious when automatically turning itself on and off (a feature that has become a bug I can’t disable).

As I’ve slept these past weeks, the clock has served as a handy, adjustable colored light to have on in the evening or as a cozy nightlight. The ocean noises have been blending in with the AC’s sounds, clearing my mind. I’d sleepily ponder if certain sounds I heard were coming from the clock or my AC. That’s the dull, fruitless thinking that quickly gets me snoozing.

Playing sounds to fall asleep is obviously not new (some of my earliest memories are of falling asleep to a Lady and the Tramp cassette). Today, many would prefer using an app or playing a long video over getting a $170 alarm clock for the experience. Still, the convenience of setting repeating Routines on a device dedicated to being a clock turned out to be an asset. It’s also nice to be able to start a Routine by pressing an on-device button rather than having to use my phone to play sleeping sounds.

But the idea of the clock’s lighting and sounds helping me wind down in the hours before bed would only succeed if I was by the clock when winding down. I’m usually spending my last waking moments in my living room. So unless I’m willing to change my habits, or get a Restore 2 for the living room, this feature is lost on me.

Harmful “nudify” websites used Google, Apple, and Discord sign-on systems

Major technology companies, including Google, Apple, and Discord, have been enabling people to quickly sign up to harmful “undress” websites, which use AI to remove clothes from real photos to make victims appear to be “nude” without their consent. More than a dozen of these deepfake websites have been using login buttons from the tech companies for months.

A WIRED analysis found 16 of the biggest so-called undress and “nudify” websites using the sign-in infrastructure from Google, Apple, Discord, Twitter, Patreon, and Line. This approach allows people to easily create accounts on the deepfake websites—offering them a veneer of credibility—before they pay for credits and generate images.

While bots and websites that create nonconsensual intimate images of women and girls have existed for years, the number has increased with the introduction of generative AI. This kind of “undress” abuse is alarmingly widespread, with teenage boys allegedly creating images of their classmates. Tech companies have been slow to deal with the scale of the issues, critics say, with the websites appearing highly in search results, paid advertisements promoting them on social media, and apps showing up in app stores.

“This is a continuation of a trend that normalizes sexual violence against women and girls by Big Tech,” says Adam Dodge, a lawyer and founder of EndTAB (Ending Technology-Enabled Abuse). “Sign-in APIs are tools of convenience. We should never be making sexual violence an act of convenience,” he says. “We should be putting up walls around the access to these apps, and instead we’re giving people a drawbridge.”

The sign-in tools analyzed by WIRED, which are deployed through APIs and common authentication methods, allow people to use existing accounts to join the deepfake websites. Google’s login system appeared on 16 websites, Discord’s appeared on 13, and Apple’s on six. X’s button was on three websites, with Patreon and messaging service Line’s both appearing on the same two websites.

WIRED is not naming the websites, since they enable abuse. Several are part of wider networks and owned by the same individuals or companies. The login systems have been used despite the tech companies broadly having rules stating that developers cannot use their services in ways that enable harm or harassment or that invade people’s privacy.

After being contacted by WIRED, spokespeople for Discord and Apple said they have removed the developer accounts connected to their websites. Google said it will take action against developers when it finds its terms have been violated. Patreon said it prohibits accounts that allow explicit imagery to be created, and Line confirmed it is investigating but said it could not comment on specific websites. X did not reply to a request for comment about the way its systems are being used.

In the hours after Jud Hoffman, Discord vice president of trust and safety, told WIRED that the company had terminated the websites’ access to its APIs for violating its developer policy, one of the undress websites posted in a Telegram channel that authorization via Discord was “temporarily unavailable” and claimed it was trying to restore access. That undress service did not respond to WIRED’s request for comment about its operations.

Nonprofit scrubs illegal content from controversial AI training dataset

After Stanford Internet Observatory researcher David Thiel found links to child sexual abuse materials (CSAM) in an AI training dataset tainting image generators, the controversial dataset was immediately taken down in 2023.

Now, the LAION (Large-scale Artificial Intelligence Open Network) team has released a scrubbed version of the LAION-5B dataset called Re-LAION-5B and claimed that it “is the first web-scale, text-link to images pair dataset to be thoroughly cleaned of known links to suspected CSAM.”

To scrub the dataset, LAION partnered with the Internet Watch Foundation (IWF) and the Canadian Center for Child Protection (C3P) to remove 2,236 links that matched with hashed images in the online safety organizations’ databases. Removals include all the links flagged by Thiel, as well as content flagged by LAION’s partners and other watchdogs, like Human Rights Watch, which warned of privacy issues after finding photos of real kids included in the dataset without their consent.

In his study, Thiel warned that “the inclusion of child abuse material in AI model training data teaches tools to associate children in illicit sexual activity and uses known child abuse images to generate new, potentially realistic child abuse content.”

Thiel urged LAION and other researchers scraping the Internet for AI training data to adopt a new safety standard that would better filter out not just CSAM but any explicit imagery that could be combined with photos of children to generate CSAM. (Recently, the US Department of Justice pointedly said that “CSAM generated by AI is still CSAM.”)

While LAION’s new dataset won’t alter models that were trained on the prior dataset, LAION claimed that Re-LAION-5B sets “a new safety standard for cleaning web-scale image-link datasets.” Where before illegal content “slipped through” LAION’s filters, the researchers have now developed an improved new system “for identifying and removing illegal content,” LAION’s blog said.

Thiel told Ars that he would agree that LAION has set a new safety standard with its latest release, but “there are absolutely ways to improve it.” However, “those methods would require possession of all original images or a brand new crawl,” and LAION’s post made clear that it only utilized image hashes and did not conduct a new crawl that could have risked pulling in more illegal or sensitive content. (On Threads, Thiel shared more in-depth impressions of LAION’s effort to clean the dataset.)

LAION warned that “current state-of-the-art filters alone are not reliable enough to guarantee protection from CSAM in web scale data composition scenarios.”

“To ensure better filtering, lists of hashes of suspected links or images created by expert organizations (in our case, IWF and C3P) are suitable choices,” LAION’s blog said. “We recommend research labs and any other organizations composing datasets from the public web to partner with organizations like IWF and C3P to obtain such hash lists and use those for filtering. In the longer term, a larger common initiative can be created that makes such hash lists available for the research community working on dataset composition from web.”
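The hash-list approach LAION describes can be sketched in a few lines. This is a hypothetical illustration, not LAION’s actual pipeline and not the format IWF or C3P use for their lists: a curated set of digests for known-bad images, checked against the digest of each candidate image before it enters a dataset. The function names and toy data below are invented.

```python
import hashlib

# Hypothetical sketch of hash-list dataset filtering; real expert-maintained
# block lists (e.g., from IWF or C3P) use their own hashing schemes and formats.

def sha256_hex(data: bytes) -> str:
    """Digest raw image bytes to a hex string for block-list lookup."""
    return hashlib.sha256(data).hexdigest()

def filter_dataset(entries, blocked_hashes):
    """Split (url, image_bytes) pairs into kept entries and removed URLs,
    dropping any image whose digest appears on the block list."""
    kept, removed = [], []
    for url, image_bytes in entries:
        if sha256_hex(image_bytes) in blocked_hashes:
            removed.append(url)
        else:
            kept.append((url, image_bytes))
    return kept, removed
```

Note the inherent limit of this technique: a hash list can only catch images that have already been identified and hashed by someone.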

According to LAION, the bigger concern is that some links to known CSAM scraped into a 2022 dataset are still active more than a year later.

“It is a clear hint that law enforcement bodies have to intensify the efforts to take down domains that host such image content on public web following information and recommendations by organizations like IWF and C3P, making it a safer place, also for various kinds of research related activities,” LAION’s blog said.

HRW researcher Hye Jung Han praised LAION for removing sensitive data that she flagged, while also urging more interventions.

“LAION’s responsive removal of some children’s personal photos from their dataset is very welcome, and will help to protect these children from their likenesses being misused by AI systems,” Han told Ars. “It’s now up to governments to pass child data protection laws that would protect all children’s privacy online.”

Although LAION’s blog said that the content removals represented an “upper bound” of CSAM that existed in the initial dataset, AI specialist and Creative.AI co-founder Alex Champandard told Ars that he’s skeptical that all CSAM was removed.

“They only filter out previously identified CSAM, which is only a partial solution,” Champandard told Ars. “Statistically speaking, most instances of CSAM have likely never been reported nor investigated by C3P or IWF. A more reasonable estimate of the problem is about 25,000 instances of things you’d never want to train generative models on—maybe even 50,000.”

Champandard agreed with Han that more regulations are needed to protect people from AI harms when training data is scraped from the web.

“There’s room for improvement on all fronts: privacy, copyright, illegal content, etc.,” Champandard said. Because “there are too many data rights being broken with such web-scraped datasets,” Champandard suggested that datasets like LAION’s won’t “stand the test of time.”

“LAION is simply operating in the regulatory gap and lag in the judiciary system until policymakers realize the magnitude of the problem,” Champandard said.

Feds to get early access to OpenAI, Anthropic AI to test for doomsday scenarios

“Advancing the science of AI safety” —

AI companies agreed that ensuring AI safety was key to innovation.

OpenAI and Anthropic have each signed unprecedented deals granting the US government early access to conduct safety testing on the companies’ flashiest new AI models before they’re released to the public.

According to a press release from the National Institute of Standards and Technology (NIST), the deal creates a “formal collaboration on AI safety research, testing, and evaluation with both Anthropic and OpenAI” and the US Artificial Intelligence Safety Institute.

Through the deal, the US AI Safety Institute will “receive access to major new models from each company prior to and following their public release.” NIST said this will ensure that public safety won’t depend exclusively on how the companies themselves “evaluate capabilities and safety risks, as well as methods to mitigate those risks,” but also on collaborative research with the US government.

The US AI Safety Institute will also be collaborating with the UK AI Safety Institute when examining models to flag potential safety risks. Both groups will provide feedback to OpenAI and Anthropic “on potential safety improvements to their models.”

NIST said that the agreements also build on voluntary AI safety commitments that AI companies made to the Biden administration to evaluate models to detect risks.

Elizabeth Kelly, director of the US AI Safety Institute, called the agreements “an important milestone” to “help responsibly steward the future of AI.”

Anthropic co-founder: AI safety “crucial” to innovation

The announcement comes as California is poised to pass one of the country’s first AI safety bills, which would regulate how AI is developed and deployed in the state.

Among the most controversial aspects of the bill is a requirement that AI companies build in a “kill switch” to stop models from introducing “novel threats to public safety and security,” especially if the model is acting “with limited human oversight, intervention, or supervision.”

Critics say the bill overlooks existing safety risks from AI—like deepfakes and election misinformation—to prioritize prevention of doomsday scenarios, and that it could stifle AI innovation while providing little security today. They’ve urged California’s governor, Gavin Newsom, to veto the bill if it arrives at his desk, but it’s still unclear whether Newsom intends to sign it.

Anthropic was one of the AI companies that cautiously supported California’s controversial AI bill, Reuters reported, claiming that the potential benefits of the regulations likely outweigh the costs after a late round of amendments.

The company’s CEO, Dario Amodei, told Newsom why Anthropic supports the bill now in a letter last week, Reuters reported. He wrote that although Anthropic isn’t certain about aspects of the bill that “seem concerning or ambiguous,” Anthropic’s “initial concerns about the bill potentially hindering innovation due to the rapidly evolving nature of the field have been greatly reduced” by recent changes to the bill.

OpenAI has notably joined critics opposing California’s AI safety bill and has been called out by whistleblowers for lobbying against it.

In a letter to the bill’s co-sponsor, California Senator Scott Wiener, OpenAI’s chief strategy officer, Jason Kwon, suggested that “the federal government should lead in regulating frontier AI models to account for implications to national security and competitiveness.”

The ChatGPT maker striking a deal with the US AI Safety Institute seems in line with that thinking. As Kwon told Reuters, “We believe the institute has a critical role to play in defining US leadership in responsibly developing artificial intelligence and hope that our work together offers a framework that the rest of the world can build on.”

While some critics worry California’s AI safety bill will hamper innovation, Anthropic’s co-founder, Jack Clark, told Reuters today that “safe, trustworthy AI is crucial for the technology’s positive impact.” He confirmed that Anthropic’s “collaboration with the US AI Safety Institute” will leverage the government’s “wide expertise to rigorously test” Anthropic’s models “before widespread deployment.”

In NIST’s press release, Kelly agreed that “safety is essential to fueling breakthrough technological innovation.”

By directly collaborating with OpenAI and Anthropic, the US AI Safety Institute also plans to conduct its own research to help “advance the science of AI safety,” Kelly said.

We can now watch Grace Hopper’s famed 1982 lecture on YouTube

Amazing Grace —

The lecture featured Hopper discussing future challenges of protecting information.

Rear Admiral Grace Hopper on Future Possibilities: Data, Hardware, Software, and People (Part One, 1982).

The late Rear Admiral Grace Hopper was a gifted mathematician and undisputed pioneer in computer programming, honored posthumously in 2016 with the Presidential Medal of Freedom. She was also very much in demand as a speaker in her later career. Hopper’s famous 1982 lecture on “Future Possibilities: Data, Hardware, Software, and People” has long been publicly unavailable because of the obsolete media on which it was recorded. The National Archives and Records Administration (NARA) finally managed to retrieve the footage for the National Security Agency (NSA), which posted the lecture in two parts on YouTube (Part One embedded above, Part Two embedded below).

Hopper earned undergraduate degrees in math and physics from Vassar College and a PhD in math from Yale in 1934. She returned to Vassar as a professor, but when World War II broke out, she sought to enlist in the US Naval Reserve. She was initially denied on the basis of her age (34) and low weight-to-height ratio, and also because her expertise elsewhere made her particularly valuable to the war effort. Hopper got an exemption, and after graduating first in her class, she joined the Bureau of Ships Computation Project at Harvard University, where she served on the Mark I computer programming staff under Howard H. Aiken.

She stayed with the lab until 1949 and was next hired as a senior mathematician by Eckert-Mauchly Computer Corporation to develop the Universal Automatic Computer, or UNIVAC, the first commercially produced electronic computer in the United States. Hopper championed the development of a new programming language based on English words. “It’s much easier for most people to write an English statement than it is to use symbols,” she reasoned. “So I decided data processors ought to be able to write their programs in English and the computers would translate them into machine code.”

Her superiors were skeptical, but Hopper persisted, publishing papers on what became known as compilers. After Remington Rand took over the company, she created her first compiler, A-0. This early achievement would eventually lead to the development of COBOL for data processors, a language still in use today.
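Hopper’s core idea, translating restricted English-like statements into machine operations, is the essence of what a compiler does. As a purely illustrative sketch (the toy syntax and pseudo-opcodes here are invented for this example; they are not A-0 or COBOL, and A-0 itself worked by stitching together prerecorded subroutines rather than emitting instructions like this):

```python
def compile_statement(stmt: str) -> list:
    """Translate a toy English-like statement into pseudo machine ops.

    Supports only one statement form, "ADD <a> TO <b>", meaning b = b + a.
    """
    words = stmt.upper().split()
    if len(words) == 4 and words[0] == "ADD" and words[2] == "TO":
        a, b = words[1], words[3]
        # Load the addend, add the target variable, store the result back.
        return [("LOAD", a), ("ADD", b), ("STORE", b)]
    raise ValueError(f"unrecognized statement: {stmt!r}")
```

So `compile_statement("ADD PRICE TO TOTAL")` yields a three-operation sequence a machine could execute, which is the translation step Hopper was arguing data processors should never have to write by hand.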

“Grandma COBOL”

In November 1952, the UNIVAC was introduced to America by CBS news anchor Walter Cronkite as the presidential election results rolled in. Hopper and the rest of her team had worked tirelessly to input voting statistics from earlier elections and write the code that would allow the machine to extrapolate the election results based on previous races. National pollsters predicted Adlai Stevenson II would win, while the UNIVAC group predicted a landslide for Dwight D. Eisenhower. UNIVAC’s prediction proved correct: Eisenhower won over 55 percent of the popular vote with an electoral margin of 442 to 89.

Hopper retired at age 60 from the Naval Reserve in 1966 with the rank of commander but was subsequently recalled to active duty for many more years, thanks to congressional special approval allowing her to remain beyond the mandatory retirement age. She was promoted to commodore in 1983, a rank that was renamed “rear admiral” two years later, and Rear Admiral Grace Hopper finally retired permanently in 1986. But she didn’t stop working: She became a senior consultant to Digital Equipment Corporation and “goodwill ambassador,” giving public lectures at various computer-related events.

One of Hopper’s best-known lectures was delivered to NSA employees in August 1982. According to a National Security Agency press release, the footage had been preserved in a defunct media format—specifically, two 1-inch AMPEX tapes. The agency asked NARA to retrieve that footage and digitize it for public release, and NARA did so. The NSA described it as “one of the more unique public proactive transparency record releases… to date.”

Hopper was a very popular speaker not just because of her pioneering contributions to computing, but because she was a natural raconteur, telling entertaining and often irreverent war stories from her early days. And she spoke plainly, as evidenced in the 1982 lecture when she drew an analogy between using pairs of oxen to move large logs in the days before large tractors, and pairing computers to get more computer power rather than just getting a bigger computer—”which of course is what common sense would have told us to begin with.” For those who love the history of computers and computation, the full lecture is very much worth the time.

Grace Hopper on Future Possibilities: Data, Hardware, Software, and People (Part Two, 1982).

Listing image by Lynn Gilbert/CC BY-SA 4.0



Massive nationwide meat-linked outbreak kills 5 more, now largest since 2011

Hardy germs —

CDC implores consumers to check their fridges for the recalled meats.

Listeria monocytogenes.

Five more people have died in a nationwide outbreak of Listeria infections linked to contaminated Boar’s Head brand meats, the Centers for Disease Control and Prevention reported Wednesday.

To date, 57 people across 18 states have been sickened, all of whom required hospitalization. A total of eight have died. The latest tally makes this the largest listeriosis outbreak in the US since 2011, when cantaloupe processed in an unsanitary facility led to 147 Listeria infections in 28 states, causing 33 deaths, the CDC notes.

The new cases and deaths come after a massive recall of more than 7 million pounds of Boar’s Head meat products, encompassing 71 of the company’s products. That recall, announced on July 30, was itself an expansion of a July 26 recall of 207,528 pounds of Boar’s Head products. By August 8, when the CDC last provided an update on the outbreak, the number of cases had hit 43, with 43 hospitalizations and three deaths.

In a media statement Wednesday, the CDC said the updated toll of cases and deaths is a “reminder to avoid recalled products.” The agency noted that the outbreak bacteria, Listeria monocytogenes, is a “hardy germ that can remain on surfaces, like meat slicers, and foods, even at refrigerated temperatures. It can also take up to 10 weeks for some people to have symptoms of listeriosis.” The agency recommends that people look through their fridges for any recalled Boar’s Head products, which have sell-by dates into October.

If you find any recalled meats, do not eat them, the agency warns. Throw them away or return them to the store where they were purchased for a refund. The CDC and the US Department of Agriculture also recommend that you disinfect your fridge, given the germs’ ability to linger.

L. monocytogenes is most dangerous to people who are pregnant, people age 65 years or older, and people who have weakened immune systems. In these groups, the bacteria are more likely to move beyond the gastrointestinal system to cause an invasive listeriosis infection. In older and immunocompromised people, listeriosis usually causes fever, muscle aches, and tiredness but may also cause headache, stiff neck, confusion, loss of balance, or seizures. These cases almost always require hospitalization, and 1 in 6 die. In pregnant people, listeriosis also causes fever, muscle aches, and tiredness but can also lead to miscarriage, stillbirth, premature delivery, or a life-threatening infection in their newborns.



ESPN’s Where to Watch tries to solve sports’ most frustrating problem

A licensing cluster —

Find your game in a convoluted landscape of streaming services and TV channels.

The ESPN app on an iPhone 11 Pro.

ESPN

Too often, new tech product or service launches seem like solutions in search of a problem, but not this one: ESPN is launching software that lets you figure out just where you can watch the specific game you want to see amid an overcomplicated web of streaming services, cable channels, and arcane licensing agreements. Every sports fan is all too familiar with today’s convoluted streaming schedules.

Launching today on ESPN.com and the various ESPN mobile and streaming device apps, the new guide offers various views, including one that lists all the sporting events in a single day and a search function, among other things. You can also flag favorite sports or teams to customize those views.

“At the core of Where to Watch is an event database created and managed by the ESPN Stats and Information Group (SIG), which aggregates ESPN and partner data feeds along with originally sourced information and programming details from more than 250 media sources, including television networks and streaming platforms,” ESPN’s press release says.
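As a rough illustration of what an aggregation layer like this has to do (all names and structures below are hypothetical, not ESPN’s actual schema): each source contributes partial knowledge of the same event, and the guide unions the viewing options under a shared event identity.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class EventKey:
    """Identity of a game, shared across sources."""
    league: str
    home: str
    away: str
    start_utc: str  # ISO-8601 start time

def merge_listings(feeds: list) -> dict:
    """Union per-source 'where to watch' outlets into one guide.

    Each feed maps an EventKey to the outlets that source knows about;
    the merged guide answers 'where can I watch this game?' in one lookup.
    """
    guide: dict = {}
    for feed in feeds:
        for event, outlets in feed.items():
            guide.setdefault(event, set()).update(outlets)
    return guide
```

A game known to one feed as a regional-cable broadcast and to another as an out-of-market stream would surface with both options. The hard parts in practice are entity resolution (recognizing the same game across 250+ sources) and keeping rights data current, not the merge itself.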

ESPN previously offered browsable lists of games like this, but those lists didn’t identify where you could actually watch the games.

There’s no guarantee that you’ll have access to the services needed to watch the games in the list, though. Those of us who cut the cable cord long ago know that some games—especially those local to your city—are unavailable without cable.

For example, I live within walking distance of Wrigley Field, but because I don’t have cable, I can’t watch most Cubs games on any screens in my home. As a former Angeleno, I follow the Dodgers instead because there are no market blackouts for me watching them all the way from Chicago. The reverse would be true if I were in LA.

Even if you do have cable, figuring out where to watch many sports is incredibly convoluted. ESPN’s Where to Watch could be useful for the new college football season, for example.

Expansion effort

ESPN isn’t the first company to envision this, though. The company that made the most progress up until now was Apple. Apple’s TV device and app were initially meant as a one-stop shop for virtually all streaming video, like a comprehensive 21st-century TV Guide. But with cable companies being difficult to work with and Netflix not participating, Apple never quite made that dream a reality.

It kept trying for sports, though, tying into third-party offerings like the MLB app alongside its own programming to try to make the TV app a place to launch all your games. Apple got pretty close, depending on which sport you’re trying to follow.

ESPN’s app seems a little more promising, as it covers a more comprehensive range of games and goes beyond the TV app’s “what’s happening right now” focus with better search and listings.

ESPN execs have said they hope to start offering more games streaming directly in the app, and if that app becomes the go-to spot thanks to this new guide, it might give the company more leverage with leagues to make that happen.

That could certainly be more convenient for viewers, though there are, of course, downsides to one company having too much influence and leverage in a sport.



A long, weird FOSS circle ends as Microsoft donates Mono to Wine project

Thank you for your service (calls) —

Mono had many homes over 23 years, but Wine’s repos might be its final stop.

Does Mono fit between the Chilean cab sav and Argentinian malbec, or is it more of an orange, maybe?

Getty Images

Microsoft has donated the Mono Project, an open-source framework that brought its .NET platform to non-Windows systems, to the Wine community. WineHQ will be the steward of the Mono Project upstream code, while Microsoft will encourage Mono-based apps to migrate to its open-source, cross-platform .NET platform.

As Microsoft notes on the Mono Project homepage, the last major release of Mono was in July 2019. Mono was “a trailblazer for the .NET platform across many operating systems” and was the first implementation of .NET on Android, iOS, Linux, and other operating systems.

Ximian, Novell, SUSE, Xamarin, Microsoft—now Wine

Mono began as a project of Miguel de Icaza, co-creator of the GNOME desktop. De Icaza led Ximian (originally Helix Code), aiming to bring Microsoft’s then-new .NET platform to Unix-like platforms. Ximian was acquired by Novell in 2003.

Mono was key to de Icaza’s efforts to get Microsoft’s Silverlight, a browser plug-in for “interactive rich media applications” (i.e., a Flash competitor), onto Linux systems. Novell pushed Mono as a way to develop iOS apps with C# and other .NET languages. Microsoft applied its “Community Promise” to its .NET standards in 2009, confirming its willingness to let Mono flourish outside its specific control.

By 2011, however, Novell, on its way to being acquired into obsolescence, was not doing much with Mono, and de Icaza started Xamarin to push Mono for Android. Novell (through its SUSE subsidiary) and Xamarin reached an agreement in which Xamarin took over Mono’s intellectual property and the customers using Mono inside Novell/SUSE.

Microsoft open-sourced most of .NET in 2014, then took it further, acquiring Xamarin entirely in 2016, putting Mono under an MIT license, and bundling Xamarin offerings into various open-source projects. Mono now exists as a repository that may someday be archived, though Microsoft promises to keep binaries around for at least four years. Those who want to keep using Mono are directed to Microsoft’s “modern fork” of the project inside .NET.

What does this mean for Mono and Wine? Not much at first. Wine, a compatibility layer for Windows apps on POSIX-compliant systems, has already made use of Mono code in fixes and has its own Mono engine. By donating Mono to Wine, Microsoft has, at a minimum, erased the last bit of concern anyone might have had about the company’s control of the project. It’s a very different, open-source-conversant Microsoft making this move, of course, but regardless, it’s a good gesture.



SB 1047: Final Takes and Also AB 3211

This is the endgame. Very soon the session will end, and various bills either will or won’t head to Newsom’s desk. Some will then get signed and become law.

Time is rapidly running out to have your voice impact that decision.

Since my last weekly, we got a variety of people coming in to stand for or against the final version of SB 1047. There could still be more, but probably all the major players have spoken at this point.

So here, today, I’m going to round up all that rhetoric, all those positions, in one place. After this, I plan to be much more stingy about talking about the whole thing, and only cover important new arguments or major news.

I’m not going to get into the weeds arguing about the merits of SB 1047 – I stand by my analysis in the Guide to SB 1047, and the reasons I believe it is a good bill, sir.

I do however look at the revised AB 3211. I was planning on letting that one go, but it turns out it has a key backer, and thus seems far more worthy of our attention.

I saw two major media positions taken, one pro and one anti.

Neither worried itself about the details of the bill contents.

The Los Angeles Times Editorial Board endorses SB 1047, reasoning that the Federal Government is not going to step up, and relying on an outside view and big-picture analysis. I doubt they thought much about the bill’s implementation details.

The Economist is opposed, in a quite bad editorial that calls belief in the possibility of catastrophic harm ‘quasi-religious’ without argument, uses that to dismiss the bill, and instead calls for regulations that address mundane harms. That’s actually it.

The first half of the story is that OpenAI came out publicly against SB 1047.

They took four pages to state their only criticism, in what could have and should have been a Tweet: that it is a state bill and they would prefer this be handled at the Federal level. To which I say, okay, I agree that would have been first best, and that is one of the best real criticisms. I strongly believe we should pass the bill anyway because I am a realist about Congress, do not expect them to act in similar fashion any time soon even if Harris wins and certainly if Trump wins, and if they pass a similar bill that supersedes this one I will be happily wrong.

Except the letter is four pages long, so they can echo various industry talking points, and echo their echoes. In it, they say: Look at all the things we are doing to promote safety, and the bills before Congress, OpenAI says, as if to imply the situation is being handled. Once again, we see the argument ‘this might prevent CBRN risks, but it is a state bill, so doing so would not only not be first best, it would be bad, actually.’

They say the bill would ‘threaten competitiveness’ but provide no evidence or argument for this. They echo, once again without offering any mechanism, reason or evidence, Rep. Lofgren’s unsubstantiated claims that this risks companies leaving California. The same with ‘stifle innovation.’

In four pages, there is no mention of any specific provision that OpenAI thinks would have negative consequences. There is no suggestion of what the bill should have done differently, other than to leave the matter to the Feds. A duck, running after a person, asking for a mechanism.

My challenge to OpenAI would be to ask: If SB 1047 were a Federal law that left all responsibilities in the bill to the US AISI and NIST and the Department of Justice, funded a national rather than state Compute fund, and was otherwise identical, would OpenAI then support it? Would they say their position is Support if Federal?

Or, would they admit that the only concrete objection is not their True Objection?

I would also confront them with AB 3211, but hold that thought.

My challenge to certain others: Now that OpenAI has come out in opposition to the bill, would you like to take back your claims that SB 1047 would enshrine OpenAI and others in Big Tech with a permanent monopoly, or other such Obvious Nonsense?

Max Tegmark: Jason [Kwon], it will be great if you can clarify *how* you want AI to be regulated rather than just explaining *how not*. Please list specific rules and standards that you want @OpenAI to be legally bound by as long as your competitors are too.

I think this is generous. OpenAI did not explain how not to regulate AI, other than saying it should not be done by California. And I couldn’t find a single thing in the bill that OpenAI was willing to name as something they would not want the Federal Government to do.

Anthony Aguirre: Happy to be proven wrong, but I think the way to interpret this is straightforward.

Dylan Matthews: You’re telling me that Silicon Valley companies oppose an attempt to regulate their products?

Wow. I didn’t know that. You’re telling me now for the first time.

Obv the fact that OpenAI, Anthropic, etc are pushing against the bill is not proof it’s a good idea — some regulations are bad!

But it’s like … the most classic story in all of politics, and it’s weird how much coverage has treated it as a kind of oddity.

Two former OpenAI employees point out some obvious things about OpenAI deciding to oppose SB 1047 after speaking of the need for regulation. To be fair, Rohit is very right that any given regulation can be bad, but again they only list one specific criticism, and do not say they would support if that criticism were fixed.

For SB 1047, OpenAI took four pages to say essentially this one sentence:

OpenAI: However, the broad and significant implications of Al for U.S. competitiveness and national security require that regulation of frontier models be shaped and implemented at the federal level.

So presumably that would mean they oppose all state-level regulations. They then go on to note they support three federal bills. I see those bills as a mixed bag, not unreasonable things to be supporting, but nothing in them substitutes for SB 1047.

Again, I agree that would be the first best solution to do this Federally. Sure.

For AB 3211, they… support it? Wait, what?

Anna Tong (Reuters): ChatGPT developer OpenAI is supporting a California bill that would require tech companies to label AI-generated content, which can range from harmless memes to deepfakes aimed at spreading misinformation about political candidates.

The bill, called AB 3211, has so far been overshadowed by attention on another California state artificial intelligence (AI) bill, SB 1047, which mandates that AI developers conduct safety testing on some of their own models.

San Francisco-based OpenAI believes that for AI-generated content, transparency and requirements around provenance such as watermarking are important, especially in an election year, according to a letter sent to California State Assembly member Buffy Wicks, who authored the bill.

You’re supposed to be able to request such things. I have been trying for several days to get a copy of the support letter, getting bounced around by several officials. So far, I got them to say they got my request, but no luck on the actual letter, so we don’t get to see their reasoning, as the article does not say. Nor does it clarify if they offered this support before or after recent changes. The old version was very clearly a no good, very bad bill with a humongous blast radius, although many claim it has since been improved to be less awful.

OpenAI justifies this position as saying ‘there is a role for states to play’ in such issues, despite AB 3211 very clearly being similar to SB 1047 in the degree to which it is a Federal law in California guise. It would absolutely apply outside state lines and impose its rules on everyone. So I don’t see this line of reasoning as valid. Is this saying that preventing CBRN harms at the state level is bad (which they actually used as an argument), but deepfakes don’t harm national security so preventing them at the state level is good? I guess? I mean, I suppose that is a thing one can say.

The bill has changed dramatically from when I looked at it. I am still opposed to it, but much less worried about what might happen if it passed, and supporting it on the merits is no longer utterly insane if you have a different world model. But that world model would have to include the idea that California should be regulating frontier generative AI, at least for audio, video and images.

There are three obvious reasons why OpenAI might support this bill.

The first is that it might be trying to head off other bills. If Newsom is under pressure to sign something, and different bills are playing off against each other, perhaps they think AB 3211 passing could stop SB 1047 or one of many other bills – I’ve only covered the two, RTFB is unpleasant and slow, but there are lots more. Probably most of them are not good.

The second reason is if they believe that AB 3211 would assist them in regulatory capture, or at least be easier for them to comply with than for others and thus give them an advantage.

Which the old version certainly would have done. The central thing the bill intends to do is to require effective watermarking for all AIs capable of fooling humans into thinking they are producing ‘real’ content, and labeling of all content everywhere.

OpenAI is known to have been sitting on a 99.9% effective (by their own measure) watermarking system for a year. They chose not to deploy it, because it would hurt their business – people want to turn in essays and write emails, and would rather the other person not know that ChatGPT wrote them.

As far as we know, no other company has similar technology. It makes sense that they would want to mandate watermarking everywhere.
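OpenAI has not published how its system works, but published research gives a sense of how statistical text watermarks of this general kind operate: bias generation toward a pseudorandom “green” subset of the vocabulary at each step, then detect by checking whether green tokens are statistically overrepresented. A toy sketch of that generic scheme (an illustration of the published research idea, emphatically not OpenAI’s actual method):

```python
import hashlib
import random

VOCAB = [f"tok{i}" for i in range(1000)]  # toy vocabulary
GREEN_FRACTION = 0.5  # fraction of the vocab marked "green" at each step

def green_list(prev_token: str) -> set:
    """Derive a pseudorandom 'green' vocab subset, seeded by the previous
    token, so a detector with the key can recompute it later."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16) % (2**32)
    rng = random.Random(seed)
    return set(rng.sample(VOCAB, int(len(VOCAB) * GREEN_FRACTION)))

def generate_watermarked(length: int, start: str = "tok0") -> list:
    """Toy 'model' that always emits a green token (maximal watermark)."""
    out = [start]
    for _ in range(length):
        out.append(sorted(green_list(out[-1]))[0])  # deterministic pick
    return out

def detect(tokens: list) -> float:
    """Z-score of the green-token count; a high score suggests watermarking."""
    hits = sum(1 for prev, cur in zip(tokens, tokens[1:])
               if cur in green_list(prev))
    n = len(tokens) - 1
    expected = n * GREEN_FRACTION
    var = n * GREEN_FRACTION * (1 - GREEN_FRACTION)
    return (hits - expected) / var**0.5
```

Note the tradeoff such schemes imply: detection needs enough tokens for the z-score to separate from noise, and paraphrasing can wash the signal out, which is part of why any “99.9% effective” figure depends heavily on the measurement setup.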

The third reason is they might actually think this is a good idea, in which case they think it is good for California to be regulating in this way, and they are willing to accept the blast radius, rather than actively welcoming that blast radius or trying to head off other bills. I am… skeptical that this dominates, but it is possible.

What we do now know, even if we are maximally generous, is that OpenAI has no particular issue with regulating AI at the state level.

Anthropic sends a letter to Governor Newsom regarding SB 1047, saying its benefits likely exceed its costs. Jack Clark explains.

Jack Clark: Here’s a letter we sent to Governor Newsom about SB 1047. This isn’t an endorsement but rather a view of the costs and benefits of the bill.

You can read the letter for the main details, but I’d say on a personal level SB 1047 has struck me as representative of many of the problems society encounters when thinking about safety at the frontier of a rapidly evolving industry…

How should we balance precaution with an experimental and empirically driven mindset? How does safety get ‘baked in’ to companies at the frontier without stifling them? What is the appropriate role for third-parties ranging from government bodies to auditors?

These are all questions that SB 1047 tries to deal with – which is partly why the bill has been so divisive; these are complicated questions for which few obvious answers exist.

Nonetheless, we felt it important to give our view on the bill following its amendments. We hope this helps with the broader debate about AI legislation.

Jack Clark’s description seems accurate. While the letter says that benefits likely exceed costs, it expresses uncertainty on that. It is net positive on the bill, in a way that would normally imply it was a support letter, but makes clear Anthropic and Dario Amodei technically do not support or endorse SB 1047.

So first off, thank you to Dario Amodei and Anthropic for this letter. It is a helpful thing to do, and if this is Dario’s actual point of view then I support him saying so. More people should do that. And the letter’s details are far more lopsided than its introduction suggests; they would be fully compatible with a full endorsement.

Shirin Ghaffary: Anthropic is voicing support for CA AI safety bill SB 1047, saying the benefits outweigh the costs but still stopping short of calling it a full endorsement.

Tess Hegarty: Wow! That’s great from @AnthropicAI. Sure makes @OpenAI and @Meta look kinda behind on the degree of caution warranted here 👀

Dan Hendrycks: Anthropic has carefully explained the importance, urgency, and feasibility of SB 1047 in its letter to @GavinNewsom.

“We want to be clear, as we were in our original support if amended letter, that SB 1047 addresses real and serious concerns with catastrophic risk in AI systems. AI systems are advancing in capabilities extremely quickly, which offers both great promise for California’s economy and substantial risk. Our work with biodefense experts, cyber experts, and others shows a trend towards the potential for serious misuses in the coming years – perhaps in as little as 1-3 years.”

Garrison Lovely: Anthropic’s letter may be a critical factor in whether CA AI safety bill SB 1047 lives or dies.

The existence of an AI company at the frontier saying that the bill actually won’t be a disaster really undermines the ‘sky is falling’ attitude taken by many opponents.

Every other top AI company has opposed the bill, making the usual anti-regulatory arguments.

Up front, this statement is huge: “In our assessment the new SB 1047 is substantially improved, to the point where we believe its benefits likely outweigh its costs.” … [thread continues]

Simeon: Credit must be given where credit is due. This move from Anthropic is a big deal and must be applauded as such.

Cicero (reminder for full accuracy: Anthropic said ‘benefits likely exceed costs’ but made clear they did not fully support or endorse):

The letter is a bit too long to quote in full but consider reading the whole thing. Here’s the topline and the section headings, basically.

Dario Amodei (CEO Anthropic) to Governor Newsom: Dear Governor Newsom: As you may be aware, several weeks ago Anthropic submitted a Support if Amended letter regarding SB 1047, in which we suggested a series of amendments to the bill. Last week the bill emerged from the Assembly Appropriations Committee and appears to us to be halfway between our suggested version and the original bill: many of our amendments were adopted while many others were not.

In our assessment the new SB 1047 is substantially improved, to the point where we believe its benefits likely outweigh its costs. However, we are not certain of this, and there are still some aspects of the bill which seem concerning or ambiguous to us.

In the hopes of helping to inform your decision, we lay out the pros and cons of SB 1047 as we see them, and more broadly we discuss what we see as some key principles for crafting effective and efficient regulation for frontier AI systems based on our experience developing these systems over the past decade.

They say the main advantages are:

  1. Developing SSPs and being honest with the public about them.

  2. Deterrence of downstream harms through clarifying the standard of care.

  3. Pushing forward the science of AI risk reduction.

And these are their remaining concerns:

  1. Some concerning aspects of pre-harm enforcement are preserved in auditing and GovOps.

  2. The bill’s treatment of injunctive relief.

  3. Miscellaneous other issues, basically the KYC provisions, which they oppose.

They also offer principles on regulating frontier systems:

  1. The key dilemma of AI regulation is driven by speed of progress.

  2. One resolution to this dilemma is very adaptable regulation.

  3. Catastrophic risks are important to address.

They see three elements as essential:

  1. Transparent safety and security practices.

  2. Incentives to make safety and security plans effective in preventing catastrophes.

  3. Minimize collateral damage.

As you might expect, I have thoughts.

I would challenge Dario’s assessment that this is only ‘halfway.’ I analyzed the bill last week to compare it to Anthropic’s requests, using the public letter. On major changes, I found they got three, mostly got another two, and were refused on one, the KYC issue. On minor issues, they fully got five, partially got three, and were refused on expanding the reporting time for incidents. Overall, I would say this is at least 75% of Anthropic’s requests, weighted by how important they seem to me.

I would also note that they themselves call for ‘very adaptable’ regulation, and that this request is not inherently compatible with this level of paranoia about how things will adapt. SB 1047 is about as flexible as I can imagine a law being here, while simultaneously being this hard to implement in damaging fashion. I’ve discussed those details previously, my earlier analysis stands.

I continue to be baffled by the idea that in a world where AGI is near and existential risks are important, Anthropic is terrified of absolutely any form of pre-harm enforcement. They want to say that no matter how obviously irresponsible you are being, until something goes horribly wrong, we should count purely on deterrence. And indeed, they even got most of what they wanted. But they should understand why that is not a viable strategy on its own.

And I would take issue with their statement that SB 1047 drew so much opposition because it was ‘insufficiently clean,’ as opposed to the bill being the target of a systematic well-funded disinformation campaign from a16z and others, most of whom would have opposed any bill, and who so profoundly misunderstood the bill they successfully killed a key previous provision that purely narrowed the bill, the Limited Duty Exception, without (I have to presume?) realizing what they were doing.

To me, if you take Anthropic’s report at face value, they clear up that many talking points opposing the bill are false, and are clearly saying to Newsom that if you are going to sign an AI regulation bill with any teeth whatsoever, that SB 1047 is a good choice for that bill. Even if they’d, if given the choice, prefer it with even less teeth.

Another way of putting this is that I think it is excellent that Anthropic sent this letter, that it accurately represents the bill (modulo the minor ‘halfway’ line) and I presume also how Anthropic leadership is thinking about it, and I thank them for it.

I wish we had a version of Anthropic where this letter was instead disappointing.

I am grateful we do have at least this version of Anthropic.

You know who else is conflicted but ultimately decided SB 1047 should probably pass?

Elon Musk (August 26, 6:59 pm Eastern): This is a tough call and will make some people upset, but, all things considered, I think California should probably pass the SB 1047 AI safety bill.

For over 20 years, I have been an advocate for AI regulation, just as we regulate any product/technology that is a potential risk to the public.

Notice Elon Musk noticing that this will cost him social capital, and piss people off, and doing it anyway, while also stating his nuanced opinion – a sharp contrast with his usual political statements. A good principle is that when someone says they are conflicted (which can happen in both directions, e.g. Danielle Fong here saying she opposes the bill about at the level Anthropic is in favor of it) it is a good bet they are sincere even if you disagree.

OK, I’ve got my popcorn ready, everyone it’s time to tell us who you are, let’s go.

As in, who understands that Elon Musk has for a long time cared deeply about AI existential risk, and who assumes that any such concern must purely be a mask for some nefarious commercial plot? Who does that thing where they turn on anyone who dares disagree with them, and who sees an honest disagreement?

Bindu Reddy: I am pretty sure Grok-2 wouldn’t have caught up to SOTA models without open-source models and techniques

SB-1047 will materially hurt xAI, so why support it?

People can support bills for reasons other than their own narrow self-interest?

Perhaps he might care about existential risk, as evidenced by him talking a ton over the years about existential risk? And that being the reason he helped found OpenAI? From the beginning I thought that move was a mistake, but that was indeed his reasoning. Similarly, his ideas of things like ‘a truth seeking AI would keep us around’ seem to me like Elon grasping at straws and thinking poorly, but he’s trying.

Adam Thierer: You gotta really appreciate the chutzpah of a guy who has spent the last decade effectively evading NHTSA bureaucrats on AV regs declaring that he’s a long-time advocate of AI safety. 😂.

Musk has also repeatedly called AI an existential threat to humanity while simultaneously going on a massive hiring spree for AI engineers at X. You gotta appreciate that level of moral hypocrisy!

Meanwhile, Musk is also making it easier for MAGA conservatives to come out in favor of extreme AI regulation with all this nonsense. Regardless of how this plays out in California with this particular bill, this is horrible over the long haul.

Here we have some fun not-entirely-unfair meta-chutzpah given Elon’s views on government and California otherwise, suddenly calling out Musk for doing xAI despite thinking AI is an existential risk (which is actually a pretty great point), and a rather bizarre theory of future debates about regulatory paths.

Martin Casado:

Step 1: Move out of California

Step 2: Support legislation that’ll hurt California.

Well played Mr. Musk. Well played.

That is such a great encapsulation of the a16z mindset. Everything is a con, everyone has an angle, Musk must be out there trying to hurt his enemies. That must be it. Beff Jezos went with the same angle.

xAI is, of course, still in California.

Jeremy White (Senior California politics reporter, Politico): .@elonmusk and @Scott_Wiener have clashed often, but here Musk — an early OpenAI co-founder – backs Wiener’s AI safety bill contra @OpenAI and much of the tech industry.

Sam D’Amico (CEO Impulse Labs): Honestly good that this issue is one that appears to have no clear partisan valence, yet.

Dean Ball: As I said earlier, I’m not surprised by this, but I do think it’s interesting that AI policy continues to be… weird. Certainly nonpartisan. We’ve got Nancy Pelosi and e/acc on one side, and Elon Musk and Scott Wiener on the other.

I like this about AI policy.

This is an excellent point. Whichever side you are on, you should be very happy the issue remains non-partisan. Let’s all work to keep it that way.

Andrew Critch: Seems like Musk actually read the bill! Congrats to all who wrote and critiqued it until its present form 😀 And to everyone who’s casually opposing it based on vibes or old drafts: check again. This is the regulation you want, not crazy backlash laws if this one fails.

Another excellent point and a consistent pattern. Watch who has clearly RTFB (read the bill) especially in its final form, and who has not.

We also have at least one prominent reaction (>600k views) from a bill opponent calling for a boycott of Anthropic, highlighting the statement about benefits likely exceeding costs and making Obvious Nonsense accusations that the bill is some Anthropic plot (I can directly assure you this is not true, or you could, ya know, read the letter, or the bill), confirming how this is being interpreted. To his credit, even Brian Chau noticed this kind of hostile reaction made him uncomfortable, and he warns about the dangers of purity spirals.

Meanwhile Garry Tan (among others, but he’s the one Chau quoted) is doing exactly what Chau warns about, saying things like ‘your API customers will notice how decelerationist you are’ and that is absolutely a threat and an attempt to silence dissent against the consensus. The message, over and over, loud and clear, is: We tolerate no talk that there might be any risk in the room whatsoever, or any move to take safety precautions or encourage them in others. If you dare not go with the vibe they will work to ensure you lose business.

(And of course, everyone who doesn’t think you should go forward with reckless disregard, and ‘move fast and break things,’ is automatically a ‘decel,’ which should absolutely be read in-context the way you would a jingoistic slur.)

Do not underestimate the extent to which, in the VC-SV core, dissent is being suppressed, with people and companies voicing the wrong support or the wrong vibes risking being cut off from their social networks and funding sources. When there are prominent calls for even the lightest of all support for acting responsibly – such as a non-binding letter saying maybe we should pay attention to safety risks that was so harmless SoftBank signed it – there are calls to boycott everyone in question, on principle.

The thinness of skin is remarkable. They fight hard for the vibes.

Aaron Levie: California should be leading the way on accelerating AI (safely), not creating the template to slow things down. If SB 1047 were written 2 years ago, we would have prevented all the AI progress we’ve seen thus far. We’re simply too early in the state of AI to taper progress.

I like the refreshing clarity of Aaron’s first sentence. He says we should not ‘create the template to slow things down,’ on principle. As in, not only should we not slow things down in exchange for other benefits, we should intentionally not have the ability to take actions in the future that might do that. The second sentence then makes a concrete counterfactual claim, also a good thing to do, although I strongly claim that the second sentence is false; such a bill would have done very little.

If you’re wondering why so many in VC/YC/SV worlds think ‘everyone is against SB 1047,’ this kind of purity spiral and echo chamber is a lot of why. Well played, a16z?

Yoshua Bengio is interviewed by Shirin Ghaffary of Bloomberg about the need for regulation, and SB 1047 in particular, warning that we are running out of time. Bloomberg took no position I can see, and Bengio’s position is not new.

Dan Hendrycks offers a final op-ed in Time Magazine, pointing out that it is important for the AI industry that it prevent catastrophic harms. Otherwise, it could provoke a large negative reaction. Another externality problem.

Here is a list of industry opposition to SB 1047.

Nathan Labenz (Cognitive Revolution): Don’t believe the SB 1047 hype folks!

Few models thus far created would be covered (only those that cost $100M+), and their developers are voluntarily doing extensive safety testing anyway

I think it’s a prudent step, but I don’t expect a huge impact either way.

Nathan Labenz had a full podcast, featuring both the pro (Nathan Calvin) and the con (Dean Ball) sides.

In the Atlantic, bill author Scott Wiener is interviewed about all the industry opposition, insisting this is ‘not a doomer bill’ or focused on ‘science fiction risks.’ He is respectful towards most bill opponents, but does not pretend that a16z isn’t running a profoundly dishonest campaign.

I appreciated this insightful take on VCs who oppose SB 1047.

Liron Shapira: > The anti-SB 1047 VCs aren’t being clear and constructive in their rejection.

Have you ever tried to fundraise from a VC?

Indeed I have. At least here they tell you they’re saying no. Now you want them to tell you why and how you can change their minds? Good luck with that.

Lawrence Chan does an RTFB, concludes it is remarkably light touch and a good bill. He makes many of the usual common sense points – this covers zero existing models, will never cover anything academics do, and (he calls it a ‘spicy take’) if you cannot take reasonable care doing something then have you considered not doing it?

Mike Knoop, previously having opposed SB 1047 because he does not think AGI is progressing and believes anything slowing down AGI progress would be bad, updates to believing it is a ‘no op’ that doesn’t do anything, but that it could reassure the worried and head off worse actions elsewhere. But if the bill actually did anything, he would oppose it. This is a remarkably common position: that there is no cost-benefit analysis to be done when building things smarter than humans. They think this is a situation where no amount of safety is worth any amount of potentially slowing down if there were a safety issue, so they refuse to talk price. The implications are obvious.

Aidan McLau of Topology AI says:

Aidan McLau: As a capabilities researcher, accelerationist, libertarian, and ai founder… I’m coming out of the closet. I support sb 1047.

growing up, you realize we mandate seatbelts and licenses to de-risk outlawing cars. Light and early regulation is the path of optimal acceleration.

the bill realllllllllllllly isn’t that bad

if you have $100m to train models (no startup does), you can probably afford some auditing. Llama will be fine.

But if CA proposes some hall monitor shit, I’ll be the first to oppose them. Stay vigilant.

I think there’s general stigma about supporting regulation as an ai founder, but i’ve talked to many anti-sb 1047 people who are smart, patient, and engage in fair discourse.

Daniel Eth: Props to Aidan for publicly supporting SB1047 while working in the industry. I know a bunch of you AI researchers out there quietly support the bill (lots of you guys at the big labs like my pro-SB1047 tweets) – being public about that support is commendable & brave.

Justin Harvey (co-founder AIVideo.com): I generally support SB 1047

I hate regulation. I want AI to go fast. I don’t trust the competency of the government.

But If you truly believe this will be the most powerful technology ever created, idk. This seems like a reasonable first step tbh.

Notice how much the online debate has always been between libertarians and more extreme libertarians. Everyone involved hates regulation. The public, alas, does not.

Witold Wnuk makes the case that the bill is sufficiently weak that it will serve as de facto moral license for the AI companies to go ahead and deal with the consequences later, that the blame when models go haywire will thus also fall on those who passed this bill, and that this does nothing to solve the problem. As I explained in my guide, I very much disagree and think this is a good bill. And I don’t think this bill gives anyone ‘moral license’ at all. But I understand the reasoning.

Stephen Casper notices that the main mechanism of SB 1047 is basic transparency, and that it does not bode well that industry is so vehemently against this and it is so controversial. I think he goes too far in describing how difficult it would be to sue under the bill; he is making generous (to the companies) assumptions. But the central point here seems right.

One thing California does well is show you how a bill has changed since last time. So rather than having to work from scratch, we can look at the diff.

We’ll start with a brief review of the old version (abridged a bit for length). Note that some of this was worded badly in ways that might backfire quite a lot.

  1. Authentic content is created by humans.

  2. Inauthentic content is created by AIs and could be mistaken for authentic.

  3. The bill applies to every individual, no size thresholds at all.

  4. Providers of any size must ‘to the extent possible’ place ‘imperceptible and maximally indelible’ watermarks on all content, along with watermark decoders.

  5. Grandfathering in old systems requires a 99% accurate detector.

    1. We now know that OpenAI thinks it knows how to do that.

    2. No one else, to our knowledge, is close. Models would be banned.

  6. Internet hosting platforms are responsible for ensuring indelible watermarks.

  7. All failures must be reported within 24 hours.

  8. All AI that could produce inauthentic content requires notification for each conversation, including audio notification for every audio interaction.

  9. New cameras have to provide watermarks.

  10. Large online platforms (1 million California users, not only social media but also e.g. texting systems) shall use labels on every piece of content to mark it as human or AI, or some specific mix of the two. For audio that means a notice at the beginning and another one at the end, every time, for all messages AI or human. Also you check a box on every upload.

  11. Fines are up to $1 million or 5% of global annual revenue, EU style.

  12. All existing open models are toast. New open models might or might not be toast. It doesn’t seem possible to comply with an open model, on the law’s face.

All right, let’s see what got changed and hopefully fixed, excluding stuff that seems to be for clarity or to improve grammar without changing the meaning.

There is a huge obvious change up front: Synthetic content now only includes images, videos and audio. The bill no longer cares about LLMs or text at all.

A bunch of definitions changed in ways that don’t alter my baseline understanding.

Large online platform no longer includes internet websites, web applications or digital applications. It now has to be a social media platform, messaging platform, advertising network or standalone search engine that displays content to viewers who are not the creator or collaborator, and the threshold rises to 2 million monthly unique California users.

Generative AI providers have to make available to the public a provenance detection tool or permit users to use one provided by a third party, based on industry standards, that detects generative AI content and how that content was created. There is no minimum size threshold for the provider before they must do this.

Summaries of testing procedures must be made available upon request to academics, except when that would compromise the method.

A bunch of potentially crazy disclosure requirements got removed.

The thing about audio disclosures happening twice is gone.

Users of platforms need not label every piece of data now, the platform scans the data and reports any provenance data contained therein, or says it is unknown if none is found.

There are new disclosure rules around the artist, track and copyright information on sound recordings and music videos, requiring the information be displayed in text.

I think those are the major changes, and they are indeed major. I am no longer worried AB 3211 is going to do anything too dramatic: at worst it applies only to audio, video and images, the annoyance levels involved are down a lot, the standards for compliance are lower, and compliance in these other formats seems easier than in text.

My new take on the new AB 3211 is that this is a vast improvement. If nothing else, the blast radius is vastly diminished.

Is it now a good bill?

I wouldn’t go that far. It’s still not a great implementation. I don’t think deepfakes are a big enough issue to motivate this level of annoyance, or the tail risk that this is effectively a much broader burden than it appears. But the core thing it is attempting to do is no longer a crazy thing to attempt, and the worst dangers are gone. I think the costs exceed the benefits, but you could make a case, if you felt deepfake audio and video were a big short term deal, that this bill has more benefits than costs.

What you cannot reasonably do is support this bill, then turn around and say that California should not be regulating AI and should let the Federal government do it. That does not make any sense. I have confidence the Federal government will, if necessary, deal with deepfakes, and that we could safely react after the problem gets worse; being modestly ‘too late’ to it would not be a big deal.

SB 1047: Final Takes and Also AB 3211