Author name: Tim Belzer

Although it’s ‘insane’ to try and land New Glenn, Bezos said it’s important to try

“We would certainly like to achieve orbit, and get the Blue Ring Pathfinder into orbit,” Bezos said. “Landing the booster would be gravy on top of that. It’s kind of insane to try and land the booster. A more sane approach would probably be to try to land it in the ocean. But we’re gonna go for it.”

Blue Origin has built a considerable amount of infrastructure on a drone ship, Jacklyn, that will be waiting offshore for the rocket to land upon. Was Bezos not concerned about putting that hardware at risk?

A view inside the New Glenn rocket factory in Florida. Credit: Blue Origin

“I’m worried about everything,” he admitted. However, the rocket has been programmed to divert from the ship if the avionics on board the vehicle sense that anything is off-nominal.

And there is, of course, a pretty good chance of that happening.

“We’ve done a lot of work, we’ve done a lot of testing, but there are some things that can only be tested in flight,” Bezos said. “And you can’t be overconfident in these things. You have to be real. The reality is, there are a lot of things that go wrong, and you have to accept that, if something goes wrong, we’ll pick ourselves up and get busy for the second flight.”

As for that flight, the company has a second booster deep in development. It could be seen on the factory floor below on Sunday, and should be ready later this spring, Limp said. There are about seven upper stages in the flow as the company works to optimize the factory for production.

A pivotal moment for spaceflight

Bezos founded Blue Origin a little more than 24 years ago, and the company has moved slowly compared to some of its competitors, most notably SpaceX. However, when Blue Origin has built products, they’ve been of high quality. Bezos himself flew on the first human mission of the New Shepard spacecraft in 2021, a day he described as the ‘best’ in his life. Of all the people who have ever flown into space, he noted that 7 percent have now done so on a Blue Origin vehicle. And the company’s BE-4 rocket engine has performed exceptionally well in flight. But an orbital mission, such a touchstone for launch companies, has eluded Bezos until now.

The 8 most interesting PC monitors from CES 2025


Monitors worth monitoring

Here are upcoming computer screens with features that weren’t around last year.

Yes, that’s two monitors in a suitcase.

Plenty of computer monitors made debuts at the Consumer Electronics Show (CES) in Las Vegas this year, but many of the updates were pretty minor and could easily have been part of 2024’s show.

But some brought new and interesting features to the table for 2025—in this article, we’ll tell you all about them.

LG’s 6K monitor

Pixel addicts are always right at home at CES, and the most interesting high-resolution computer monitor to come out of this year’s show is the LG UltraFine 6K Monitor (model 32U990A).

People seeking more than 3840×2160 resolution have limited options, and they’re all rather expensive (looking at you, Apple Pro Display XDR). LG’s 6K monitor means there’s another option for professionals needing extra pixels for development, engineering, and creative work. And LG’s 6144×3456, 32-inch display has extra oomph thanks to something no other 6K monitor has: Thunderbolt 5.

This is the only image LG provided for the monitor. Credit: LG

LG hasn’t confirmed the refresh rate of its 6K monitor, so we don’t know how much bandwidth it needs. But it’s possible that pairing the UltraFine with a Thunderbolt 5 PC could trigger Bandwidth Boost, a Thunderbolt 5 feature that automatically increases bandwidth from 80Gbps to 120Gbps. For comparison, Thunderbolt 4 maxes out at 40Gbps. Thunderbolt 5 also requires 140 W power delivery and maxes out at 240 W. That’s a notable bump from Thunderbolt 4’s 100 W.
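A rough sanity check of the bandwidth math is straightforward. The sketch below assumes uncompressed 10-bit-per-channel RGB (30 bits per pixel) and ignores blanking intervals and protocol overhead, so real link requirements run higher, while Display Stream Compression would lower them:

```python
# Raw (uncompressed) video bandwidth for LG's 6144x3456 panel.
# Assumes 30 bits per pixel and ignores blanking/protocol overhead,
# so actual Thunderbolt bandwidth requirements would be higher.
def raw_gbps(width: int, height: int, bits_per_pixel: int, refresh_hz: int) -> float:
    return width * height * bits_per_pixel * refresh_hz / 1e9

for hz in (60, 120):
    print(f"{hz} Hz: {raw_gbps(6144, 3456, 30, hz):.1f} Gbps")
```

Roughly 38 Gbps at 60 Hz already crowds Thunderbolt 4’s 40 Gbps ceiling, and about 76 Gbps at 120 Hz would leave little headroom within Thunderbolt 5’s base 80 Gbps, which is where Bandwidth Boost’s 120 Gbps would matter.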

Considering that Apple’s only 6K monitor has Thunderbolt 3, Thunderbolt 5 is a differentiator. It ironically leaves the LG UltraFine better equipped than Apple’s own monitors for use with the new MacBook Pros and Mac Mini (which all have Thunderbolt 5). LG may be aware of this, as the 32U990A’s aesthetic could be considered very Apple-like.

Inside the 32U990A’s silver chassis is a Nano IPS panel. In recent years, LG has advertised its Nano IPS panels as having “nanometer-sized particles” applied to their LED backlight to absorb “excess, unnecessary light wavelengths” for “richer color expression.” LG’s 6K monitor claims to cover 98 percent of DCI-P3 and 99.5 percent of Adobe RGB. IPS Black monitors, meanwhile, have higher contrast ratios (up to 3,000:1) than standard IPS panels. However, LG has released Nano IPS monitors with 2,000:1 contrast, the same contrast ratio as Dell’s 6K, IPS Black monitor.

LG hasn’t shared other details, like price or a release date. But the monitor may cost more than Dell’s Thunderbolt 4-equipped monitor, which is currently $2,480.

Brelyon’s multi-depth monitor

Someone from CNET using the Ultra Reality Extend. Credit: CNET/YouTube

Brelyon is headquartered in San Mateo, California, and was founded by scientists and executives from MIT, IMAX, UCF, and DARPA. It’s been selling display technology for commercial and defense applications since 2022. At CES, the company unveiled the Ultra Reality Extend, describing it as an “immersive display line that renders virtual images in multiple depths.”

“As the first commercial multi-focal monitor, the Extend model offers multi-depth programmability for information overlay, allowing users to see images from 0.7 m to as far as 2.5 m of depth virtually rendered behind the monitor; organizing various data streams at different depth layers, or triggering focal cues to induce an ultra immersive experience akin to looking out through a window,” Brelyon’s announcement said.

Brelyon says the monitor runs 4K at 60 Hz with 1 bit of monocular depth for an 8K effect. The monitor includes “OLED-based curved 2D virtual images, with the largest stretching to 122 inches and extending 2.5 meters deep, viewable through a 30-inch frame,” according to the firm’s announcement. The closer you sit, the greater the field of view you get.

The Extend leverages “new GPU capabilities to process light and video signals inside our display platforms,” Brelyon CEO Barmak Heshmat said in a statement this week. He added: “We are thinking beyond headsets and glasses, where we can leverage GPU capabilities to do real-time driving of higher-bandwidth display interfaces.”

Brelyon says this was captured from the Extend, with its camera lens focus changing from 70 cm to 250 cm. Credit: Brelyon

Advancements in AI-based video processing, as well as other software advancements and hardware improvements, purportedly enable the Extend to upscale lower-dimension streams to multiple, higher-dimension ones. Brelyon describes its product as a “generative display system” that uses AI computation and optics to assign different depth values to content in real time for rendering images and information overlays.

The idea of a virtual monitor that surpasses the field of view of typical desktop monitors while allowing users to see the real world isn’t new. Tech firms (including many at CES) usually try to accomplish this through AR glasses. But head-mounted displays still struggle with problems like heat, weight, computing resources, battery, and aesthetics.

Brelyon’s monitor seemingly demoed well at CES. Sam Rutherford, a senior writer at Engadget, watched a clip from the Marvel’s Spider-Man video game on the Extend and said that “trees and light poles whipping past in my face felt so real I started to flinch subconsciously.” He added that the monitor separated “different layers of the content to make snow in the foreground look blurry as it whipped across the screen, while characters in the distance” still looked sharp.

The monitor costs $5,000 to $8,000 depending on how you’ll use it and whether you have other business with Brelyon, per Engadget, and CES is one of the few places where people could actually see the display in action.

Samsung’s 3D monitor

Samsung’s depiction of the 3D effect of its 3D PC monitor. Credit: Samsung

It’s 2025, and tech companies are still trying to convince people to bring a 3D display into their homes. This week, Samsung took its first swing since 2009 at 3D screens with the Odyssey 3D monitor.

In lieu of 3D glasses, the Odyssey 3D achieves its 3D effect with a lenticular lens “attached to the front of the panel and its front stereo camera,” Samsung says, as well as eye tracking and view mapping. Differing from other recent 3D monitors, the Odyssey 3D claims to be able to make 2D content look three-dimensional even if that content doesn’t officially support 3D.

You can find more information in our initial coverage of Samsung’s Odyssey 3D, but don’t bet on finding 3D monitors in many people’s homes soon. The technology for quality 3D displays that work without glasses has been around for years but still has never taken off.

Dell’s OLED productivity monitor

With improvements in burn-in, availability, and brightness, finding OLED monitors today is much easier than it was two years ago. But a lot of the OLED monitors released recently target gamers with features like high refresh rates, ultrawide panels, and RGB. These features are unneeded or unwanted by non-gamers but contribute to OLED monitors’ already high pricing. Numerous smaller OLED monitors were announced at CES, with 27-inch, 4K models being a popular addition. Most of them are still high-refresh gaming monitors, though.

The Dell 32-inch QD-OLED, on the other hand, targets “play, school, and work,” Dell’s announcement says. And its naming (based on a new naming convention Dell announced this week that kills XPS and other longstanding branding) signals that this is a mid-tier monitor from Dell’s entry-level lineup.

OLED for normies. Credit: Dell

The monitor’s specs, which include a 120 Hz refresh rate, AMD FreeSync Premium, and USB-C power delivery at up to 90 W, make it a good fit for pairing with many mainstream laptops.

Dell also says this is the first QD-OLED with spatial audio, which uses head tracking to alter audio coming from the monitor’s five 5 W speakers. This is a feature we’ve seen before, but not on an OLED monitor.

For professionals and/or Mac users who prefer the sleek looks, reputation, higher power delivery, and I/O hubs associated with Dell’s popular UltraSharp line, Dell made two more notable announcements at CES: an UltraSharp 32 4K Thunderbolt Hub Monitor (U3225QE) coming out on February 25 for $950 and an UltraSharp 27 4K Thunderbolt Hub Monitor (U2725QE) coming out that same day for $700.

The suitcase monitors

Before we get into the Base Case, please note that this product has no release date because its creators plan to go to market via crowdfunding. Base Case says it will launch its Indiegogo campaign next month, but even then, we don’t know if the project will be funded, if any final product will work as advertised, or if customers will receive orders in a timely fashion. Still, this is one of the most unusual monitors at CES, and it’s worth discussing.

The Base Case is shaped like a 24x14x16.5-inch rolling suitcase, but when you open it up, you’ll find two 24-inch monitors for connecting to a laptop. Each screen reportedly has a 1920×1080 resolution, a 75 Hz refresh rate, and a max brightness claim of 350 nits. Base Case is also advertising PC and Mac support (through DisplayLink), as well as HDMI, USB-C, USB-A, Thunderbolt, and Ethernet ports. Telescoping legs allow the case to rise 10 inches so the display can sit closer to eye level.

Ultimately, the Base Case would see owners lug around a 20-pound product for the ability to quickly create a dual-monitor setup equipped with a healthy amount of I/O. Tom’s Guide demoed a prototype at CES and reported that the monitors took “seconds to set up.”

In case you’re worried that the Base Case prioritizes displays over storage, note that its makers plan on adding a front pocket to the suitcase that can fit a laptop. The pocket wasn’t on the prototype Tom’s Guide saw, though.

Again, this is far from a finalized product, but Base Case has alluded to a $2,400 starting price. For comparison to other briefcase-locked displays—and yes, doing this is possible—LG’s StanbyME Go (27LX5QKNA) tablet in a briefcase currently has a $1,200 MSRP.

Corsair’s PC-mountable touchscreen

A promotional image of the touchscreen.

If the Base Case is on the heftier side of portable monitors, Corsair’s Xeneon Edge is certainly on the minute side. The 14.5-inch LCD touchscreen isn’t meant to be a primary display, though. Corsair built it as a secondary screen for providing quick information, like the song your computer is playing, the weather, the time, and calendar events. You could also use the 2560×720 pixels to display system information, like component usage and temperatures.

Corsair says its iCue software will be able to provide system information on the Xeneon, but because the Xeneon Edge works like a regular monitor, you could (and likely would prefer to) use your own methods. Still, the Xeneon Edge stands out from other small, touchscreen PC monitors with its clean UI that can succinctly communicate a lot of information on the tiny display at once.

Specs-wise, this is a 60 Hz IPS panel with 5-point capacitive touch. Corsair says the monitor can hit 350 nits of brightness.

You can connect the Xeneon Edge to a computer via USB-C (DisplayPort Alt mode) or HDMI. There are also screw holes, so PC builders could install it via a 360 mm radiator mounting point inside their PC case.

Alternatively, Corsair recommends attaching the touchscreen to the outside of a PC case through the monitor’s 14 integrated magnets. Corsair said in a blog post that the “magnets are underneath the plastic casing so the metal surface you stick it to won’t get scratched.” Or, in traditional portable monitor style, the Xeneon Edge could also just sit on a desk with its included stand.

Corsair demos different ways the screen could attach to a case. Credit: TechPowerUp/YouTube

Corsair plans to release the Xeneon Edge in Q2. Expected pricing is “around $249,” Tom’s Hardware reported.

MSI’s side panel display

Why attach a monitor to your PC case when you can turn your PC case into a monitor instead?

MSI says that the touchscreen embedded into this year’s MEG Vision X AI 2nd gaming desktop’s side panel can work like a regular computer monitor. Similar to Corsair’s monitor, MSI’s display has a corresponding app that can show system information and other customizations, which you can toggle with controls on the front of the case, PCMag reported.

MSI used an IPS panel with 1920×1080 resolution for the display, which also has an integrated mic and speaker. MSI says “electric vehicle control centers” inspired the design. We’ve seen similar PC cases, like iBuyPower’s more translucent side panel display and the touchscreen on Hyte’s pentagonal PC case, before. But MSI is bringing the design to a more mainstream form factor by including it in a prebuilt desktop, potentially opening the door for future touchscreen-equipped desktops.

Considering the various locations people place their desktops and the different angles at which they may try to look at this screen, I’m curious about the monitor’s viewing angles and brightness. IPS seems like a good choice since it tends to have strong image quality when viewed from different angles. A video PCMag shot from the show floor shows images on the monitor appearing visible and lively:

Hands on with MSI’s MEG Vision X AI Desktop: Now, your PC tower’s a monitor, too.

World’s fastest monitor

There’s a competitive air at CES that lends itself to tech brands trying to one-up each other on spec sheets. Some of the most heated competition concerns monitor refresh rates; for years, we’ve been meeting the new world’s fastest monitor at CES. This year is no different.

The brand behind the monitor is Koorui, a three-year-old Chinese firm whose website currently lists monitors and keyboards. Koorui hasn’t confirmed when it will make its 750 Hz display available, where it will sell it, or what it will cost. That should bring some skepticism about this product actually arriving for purchase in the US. However, Koorui did bring the display to the CES show floor.

The speedy display had a refresh rate test running at CES, and according to several videos we’ve seen from attendees, the monitor appeared to consistently hit the 750 Hz mark.

World’s first 750Hz monitor???

For those keeping track, high-end gaming monitors—namely ones targeting professional gamers—hit 360 Hz in 2020. Koorui’s announcement means max monitor speeds have increased 108.3 percent in four years.
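That percentage is easy to verify, and converting refresh rates into per-frame time budgets makes the comparison more tangible (plain arithmetic, no product-specific assumptions):

```python
# Relative increase in maximum refresh rate, plus the time each
# frame stays on screen at a given refresh rate.
def pct_increase(old_hz: float, new_hz: float) -> float:
    return (new_hz - old_hz) / old_hz * 100

def frame_time_ms(hz: float) -> float:
    return 1000 / hz

print(f"{pct_increase(360, 750):.1f}% faster")  # 360 Hz -> 750 Hz
print(f"{frame_time_ms(360):.2f} ms -> {frame_time_ms(750):.2f} ms per frame")
```

At 750 Hz each frame is on screen for about 1.33 ms, versus roughly 2.78 ms at 360 Hz.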

One CES attendee noticed, however, that the monitor wasn’t showing any gameplay. This could be due to the graphical and computing prowess needed to demonstrate the benefits of a 750 Hz monitor. A system capable of 750 frames per second would give people a chance to see if they could detect improved motion resolution but would also be very expensive. It’s also possible that the monitor Koorui had on display wasn’t ready for that level of scrutiny yet.

Like many eSports monitors, the Koorui is 24.5 inches, with a resolution of 1920×1080. Perhaps more interesting than Koorui taking the lead in the perennial race for higher refresh rates is the TN monitor’s claimed color capabilities. TN monitors aren’t as popular as they were years ago, but OEMs still employ them sometimes for speed.

They tend to be less colorful than IPS and VA monitors, though. Most offer sRGB color gamuts instead of covering the larger DCI-P3 color space. Asus’ 540 Hz ROG Swift Pro PG248QP, for example, is a TN monitor claiming 125 percent sRGB coverage. Koorui’s monitor claims to cover 95 percent of DCI-P3, due to the use of a quantum dot film. Again, there’s a lot that prospective shoppers should confirm about this monitor if it becomes available.

For those seeking the fastest monitors with more concrete release plans, several companies announced 600 Hz monitors coming out this year. Acer, for example, has a 600 Hz Nitro XV240 F6 (also a TN monitor) that it plans to release in North America this quarter at a starting price of $600.

Photo of Scharon Harding

Scharon is a Senior Technology Reporter at Ars Technica writing news, reviews, and analysis on consumer gadgets and services. She’s been reporting on technology for over 10 years, with bylines at Tom’s Hardware, Channelnomics, and CRN UK.

New Glenn rocket is at the launch pad, waiting for calm seas to land

COCOA BEACH, Fla.—As it so often does in the final days before the debut of a new rocket, it all comes down to weather. Accordingly, Blue Origin is only awaiting clear skies and fair seas for its massive New Glenn vehicle to lift off from Florida.

After the company completed integration of the rocket this week and rolled the super heavy-lift vehicle to its launch site at Cape Canaveral, the focus turned toward the weather. Conditions at Cape Canaveral Space Force Station have been favorable during the early morning launch windows available to the rocket, but there have been complications offshore.

That’s because Blue Origin aims to recover the first stage of the New Glenn rocket, and sea states in the Atlantic Ocean have been unsuitable for an initial attempt to catch the first stage booster on a drone ship. The company has already waved off one launch attempt set for 1 am ET (06:00 UTC) on Friday, January 10.

Conditions have improved a bit since then, but on Saturday evening the company’s launch officials canceled a second attempt planned for 1 am ET on Sunday. The new launch time is now 1 am ET on Monday, January 13, when better sea states are expected. There is a three-hour launch window. The company will provide a webcast of proceedings at this link beginning one hour before liftoff.

Seeking a nominal flight

According to a mission timeline shared by Blue Origin on Saturday, it will take several hours to fuel the New Glenn rocket. Second stage hydrogen loading will begin 4.5 hours before liftoff, followed by the booster stage and second stage liquid oxygen at 4 hours, and methane for the booster stage at 3.5 hours to go. Fueling should be complete about an hour before liftoff.

Did Hilma af Klint draw inspiration from 19th century physics?


Diagrams from Thomas Young’s 1807 Lectures bear striking resemblance to abstract figures in af Klint’s work.

Hilma af Klint’s Group IX/SUW, The Swan, No. 17, 1915. Credit: Hilma af Klint Foundation

In 2019, astronomer Britt Lundgren of the University of North Carolina Asheville visited the Guggenheim Museum in New York City to take in an exhibit of the works of Swedish painter Hilma af Klint. Lundgren noted a striking similarity between the abstract geometric shapes in af Klint’s work and scientific diagrams in 19th century physicist Thomas Young‘s Lectures (1807). So began a four-year journey starting at the intersection of science and art that has culminated in a forthcoming paper in the journal Leonardo, making the case for the connection.

Af Klint was formally trained at the Royal Academy of Fine Arts and initially focused on drawing, portraits, botanical drawings, and landscapes from her Stockholm studio after graduating with honors. This provided her with income, but her true life’s work drew on af Klint’s interest in spiritualism and mysticism. She was one of “The Five,” a group of Swedish women artists who shared those interests. They regularly organized seances and were admirers of theosophical teachings of the time.

It was through her work with The Five that af Klint began experimenting with automatic drawing, driving her to invent her own geometric visual language to conceptualize the invisible forces she believed influenced our world. She painted her first abstract series in 1906 at age 44. Yet she rarely exhibited this work because she believed the art world at the time wasn’t ready to appreciate it. Her will requested that the paintings stay hidden for at least 20 years after her death.

Even after the boxes containing her 1,200-plus abstract paintings were opened, their significance was not fully appreciated at first. Moderna Museet in Stockholm actually declined to accept them as a gift, although it now maintains a dedicated space for her work. It wasn’t until art historian Åke Fant presented af Klint’s work at a Helsinki conference that the art world finally took notice. The Guggenheim’s exhibit was af Klint’s American debut. “The exhibit seemed to realize af Klint’s documented dream of introducing her paintings to the world from inside a towering spiral temple and it was met roundly with acclaim, breaking all attendance records for the museum,” Lundgren wrote in her paper.

A pandemic project

Lundgren is the first person in her family to become a scientist; her mother studied art history, and her father is a photographer and a carpenter. But she always enjoyed art because of that home environment, and her Swedish heritage made af Klint an obvious artist of interest. It wasn’t until the year after she visited the Guggenheim exhibit, as she was updating her lectures for an astrophysics course, that Lundgren decided to investigate the striking similarities between Young’s diagrams and af Klint’s geometric paintings—in particular those series completed between 1914 and 1916. It proved to be the perfect research project during the COVID-19 lockdowns.

Lundgren acknowledges the inherent skepticism such an approach by an outsider might engender among the art community and is sympathetic, given that physics and astronomy both have their share of cranks. “As a professional scientist, I have in the past received handwritten letters about why Einstein is wrong,” she told Ars. “I didn’t want to be that person.”

That’s why her very first research step was to contact art professors at her institution to get their expert opinions on her insight. They were encouraging, so she dug in a little deeper, reading every book about af Klint she could get her hands on. She found no evidence that any art historians had made this connection before, which gave her the confidence to turn her work into a publishable paper.

The paper didn’t find a home right away, however; the usual art history journals rejected it, partly because Lundgren was an outsider with little expertise in that field. She needed someone more established to vouch for her. Enter Linda Dalrymple Henderson of the University of Texas at Austin, who has written extensively about scientific influences on abstract art, including that of af Klint. Henderson helped Lundgren refine the paper, encouraged her to submit it to Leonardo, and “it came back with the best review I’ve ever received, even inside astronomy,” said Lundgren.

Making the case

Young and af Klint were not contemporaries; Young died in 1829, and af Klint was born in 1862. Nor are there any specific references to Young or his work in the academic literature examining the sources known to have influenced the Swedish painter’s work. Yet af Klint had a well-documented interest in science, spanning everything from evolution and botany to color theory and physics. While those influences tended to be scientists who were her contemporaries, Lundgren points out that the artist’s personal library included a copy of an 1823 astronomy book.

Excerpt from Plate XXIX of Young’s Lectures. Credit: Niels Bohr Library and Archives/AIP

Af Klint was also commissioned in 1910 to paint a portrait of Swedish physicist Knut Ångström at Uppsala University, whose library includes a copy of Young’s Lectures. So it’s entirely possible that af Klint had access to the astronomy and physics of the previous century and would likely have been particularly intrigued by discoveries involving “invisible light” (electromagnetism, x-rays, radioactivity, etc.).

Young’s Lectures contain a speculative passage about the existence of a universal ether (since disproven), a concept that fascinated both scientists and those (like af Klint) with certain occult interests in the late 19th and early 20th centuries. In fact, Young’s passage was included in a popular 1875 spiritualist text, The Unseen Universe by P.G. Tait and Balfour Stewart, that was heavily cited by Theosophical Society founder Helena Petrovna Blavatsky. Blavatsky in turn is known to have influenced af Klint around the time the artist created The Swan, The Dove, and Altarpieces series.

Lundgren found that “in several instances, the captions accompanying Young’s color figures [in the Lectures] even seem to decode elements of af Klint’s paintings or bring attention to details that might otherwise be overlooked.” For instance, the caption for Young’s Plate XXIX describes the “oblique stripes of color” that appear when candlelight is viewed through a prism that “almost interchangeably describes features in af Klint’s Group X., No. 1, Altarpiece,” she wrote.

(a) Excerpt from Young’s Plate XXX. (b) af Klint, Parsifal Series No. 68. (c and d) af Klint, Group IX/UW, The Dove, No. 12 and No. 13. Credit: Niels Bohr Library/Hilma af Klint Foundation

Art historians had previously speculated about af Klint’s interest in color theory, as reflected in the annotated watercolor squares featured in her Parsifal Series (1916). Lundgren argues that those squares resemble Fig. 439 in the color plates of Young’s Lectures, demonstrating the inversion of color in human vision. Those diagrams also “appear almost like crude sketches of af Klint’s The Dove, Nos. 12 and 13,” Lundgren wrote. “Paired side by side, these paintings can produce the same visual effects described by Young, with even the same color palette.”

The geometric imagery of af Klint’s The Swan series is similar to Young’s illustrations of the production and perception of colors, while “black and white diagrams depicting the propagation of light through combinations of lenses and refractive surfaces, included in Young’s Lectures On the Theory of Optics, bear a particularly strong geometric resemblance to The Swan paintings No. 12 and No.13,” Lundgren wrote. Other pieces in The Swan series may have been inspired by engravings in Young’s Lectures.

This is admittedly circumstantial evidence, and Lundgren acknowledges as much. “Not being able to prove it is intriguing and frustrating at the same time,” she said. She continues to receive additional leads, most recently from an af Klint relative on the board of Moderna Museet. Once again, the evidence wasn’t direct, but it seems af Klint would have attended certain local lecture circuits about science, while several members of the Theosophical Society were familiar with modern physics and Young’s earlier work. “But none of these are nails in the coffin that really proved she had access to Young’s book,” said Lundgren.

Photo of Jennifer Ouellette

Jennifer is a senior writer at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Baltimore with her spouse, physicist Sean M. Carroll, and their two cats, Ariel and Caliban.

Only 5 percent of US car buyers want an EV, according to survey

Only 5 percent of US consumers want their next vehicle to be a battery electric vehicle, according to a new survey by Deloitte. The consulting company gathered data from more than 31,000 people across 30 countries as part of its 2025 Global Automotive Consumer Study, and some of the results are rather interesting, as they pertain to technologies like new powertrains, connectivity, and artificial intelligence.

Among US consumers, internal combustion engines (ICE) remain number one, with 62 percent indicating that their next car will not be electrified. Another 1 in 5 would like a hybrid for their next vehicle, with a further 6 percent desiring a plug-in hybrid. (The remaining survey respondents either did not know or wanted some other powertrain option.)

By contrast, only 38 percent of Chinese consumers want to stick with ICE; meanwhile, 27 percent of them want a BEV next. That’s a far higher percentage than in other large nations—in Germany, only 14 percent want a BEV; in the UK and Canada, only 8 percent are BEV-bound; and in Japan, the number is a mere 3 percent.

Meanwhile, hybrids are far more attractive to consumers in most countries. While only 16 percent of Chinese and 12 percent of German consumers indicated this preference, 23 percent of Canadians, 24 percent of UK consumers, and 35 percent of Japanese consumers replied that they were looking for a hybrid for their next car.

Deloitte suspects that some of this reticence toward BEVs “could be due, in part, to lingering affordability concerns.” The hoped-for parity in the cost of a BEV powertrain and an ICE powertrain has still not arrived, and fully 45 percent of US consumers said they did not want to pay more than $34,999 for their next car (11 percent said less than $15,000, 9 percent said $15,000–$19,999, and the remaining 25 percent said $20,000–$34,999.)
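The parenthetical breakdown tallies to the headline figure (shares as quoted from the Deloitte survey):

```python
# US respondents' stated maximum budget for their next vehicle,
# in percent, per the Deloitte survey figures quoted above.
budget_shares = {
    "under $15,000": 11,
    "$15,000-$19,999": 9,
    "$20,000-$34,999": 25,
}
print(sum(budget_shares.values()), "percent won't pay $35,000 or more")
```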

Why the reticence?

Despite popular sentiment, there are actually quite a few electric vehicles available for much less than the average new vehicle price of $47,000. But other than the Nissan Leaf, all of them have prices starting with a “3.” (Meanwhile, 75 percent of car buyers in the US buy used cars, and the transition to electrification will not change that underlying reality.)



Elon Musk wants courts to force OpenAI to auction off a large ownership stake

Musk, who founded his own AI startup xAI in 2023, has recently stepped up efforts to derail OpenAI’s conversion.

In November, he sought to block the process with a request for a preliminary injunction filed in California. Meta has also thrown its weight behind the suit.

In legal filings from November, Musk’s team wrote: “OpenAI and Microsoft together exploiting Musk’s donations so they can build a for-profit monopoly, one now specifically targeting xAI, is just too much.”

Kathleen Jennings, attorney-general in Delaware—where OpenAI is incorporated—has since said her office was responsible for ensuring that OpenAI’s conversion was in the public interest and determining whether the transaction was at a fair price.

Members of Musk’s camp—wary of Delaware authorities after a state judge rejected a proposed $56 billion pay package for the Tesla boss last month—read that as a rebuke of his efforts to block the conversion, and worry it will be rushed through. They have also argued OpenAI’s PBC conversion should happen in California, where the company has its headquarters.

In a legal filing last week, Musk’s attorneys said Delaware’s handling of the matter “does not inspire confidence.”

OpenAI committed to become a public benefit corporation within two years as part of a $6.6 billion funding round in October, which gave it a valuation of $157 billion. If it fails to do so, investors would be able to claw back their money.

There are a number of issues OpenAI has yet to resolve, including negotiating the value of Microsoft’s investment in the PBC. A conversion was not imminent and would be likely to take months, according to the person with knowledge of the company’s thinking.

A spokesperson for OpenAI said: “Elon is engaging in lawfare. We remain focused on our mission and work.” The California and Delaware attorneys-general did not immediately respond to a request for comment.

© 2025 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.



Microsoft sues service for creating illicit content with its AI platform

Microsoft and others forbid using their generative AI systems to create certain kinds of content. Off-limits material includes anything that features or promotes sexual exploitation or abuse, is erotic or pornographic, or attacks, denigrates, or excludes people based on race, ethnicity, national origin, gender, gender identity, sexual orientation, religion, age, disability status, or similar traits. Microsoft also doesn’t allow the creation of content containing threats, intimidation, promotion of physical harm, or other abusive behavior.

Besides expressly banning such usage of its platform, Microsoft has also developed guardrails that inspect both prompts inputted by users and the resulting output for signs the content requested violates any of these terms. These code-based restrictions have been repeatedly bypassed in recent years through hacks, some benign and performed by researchers and others by malicious threat actors.

Microsoft didn’t outline precisely how the defendants’ software was allegedly designed to bypass the guardrails the company had created.

Masada wrote:

Microsoft’s AI services deploy strong safety measures, including built-in safety mitigations at the AI model, platform, and application levels. As alleged in our court filings unsealed today, Microsoft has observed a foreign-based threat-actor group develop sophisticated software that exploited exposed customer credentials scraped from public websites. In doing so, they sought to identify and unlawfully access accounts with certain generative AI services and purposely alter the capabilities of those services. Cybercriminals then used these services and resold access to other malicious actors with detailed instructions on how to use these custom tools to generate harmful and illicit content. Upon discovery, Microsoft revoked cybercriminal access, put in place countermeasures, and enhanced its safeguards to further block such malicious activity in the future.

The lawsuit alleges the defendants’ service violated the Computer Fraud and Abuse Act, the Digital Millennium Copyright Act, the Lanham Act, and the Racketeer Influenced and Corrupt Organizations Act and constitutes wire fraud, access device fraud, common law trespass, and tortious interference. The complaint seeks an injunction enjoining the defendants from engaging in “any activity herein.”



Public health emergency declared amid LA’s devastating wildfires

The US health department on Friday declared a public health emergency for California in response to devastating wildfires in the Los Angeles area that have so far killed 10 people and destroyed more than 10,000 structures.

As of Friday morning, 153,000 residents are under evacuation orders, and an additional 166,800 are under evacuation warnings, according to local reports.

Wildfires pose numerous health risks, including exposure to extreme heat, burns, harmful air pollution, and emotional distress.

“We will do all we can to assist California officials with responding to the health impacts of the devastating wildfires going on in Los Angeles County,” US Department of Health and Human Services (HHS) Secretary Xavier Becerra said in a statement. “We are working closely with state and local health authorities, as well as our partners across the federal government, and stand ready to provide public health and medical support.”

The Administration for Strategic Preparedness and Response (ASPR), an agency within HHS, is monitoring hospitals and shelters in the LA area and is prepared to deploy responders, medical equipment, and supplies upon the state’s request.



Rocket Report: China launches refueling demo; DoD’s big appetite for hypersonics


We’re just a few days away from getting a double-dose of heavy-lift rocket action.

Stratolaunch’s Talon-A hypersonic rocket plane will be used for military tests involving hypersonic missile technology. Credit: Stratolaunch

Welcome to Edition 7.26 of the Rocket Report! Let’s pause and reflect on how far the rocket business has come in the last 10 years. On this date in 2015, SpaceX made the first attempt to land a Falcon 9 booster on a drone ship positioned in the Atlantic Ocean. Not surprisingly, the rocket crash-landed. In less than a year and a half, though, SpaceX successfully landed reusable Falcon 9 boosters onshore and offshore, and now has done it nearly 400 times. That was remarkable enough, but we’re in a new era now. Within a few days, we could see SpaceX catch its second Super Heavy booster and Blue Origin land its first New Glenn rocket on an offshore platform. Extraordinary.

As always, we welcome reader submissions. If you don’t want to miss an issue, please subscribe using the box below (the form will not appear on AMP-enabled versions of the site). Each report will include information on small-, medium-, and heavy-lift rockets as well as a quick look ahead at the next three launches on the calendar.

Our annual ranking of the top 10 US launch companies. You can easily guess who made the top of the list: the company that launched Falcon rockets 134 times in 2024 and launched the most powerful and largest rocket ever built on four test flights, each accomplishing more than the last. The combined 138 launches are more than the 135 missions NASA flew with the Space Shuttle over three decades. SpaceX will aim to launch even more often in 2025. These missions have far-reaching impacts, supporting Internet coverage for consumers worldwide, launching payloads for NASA and the US military, and testing technology that will take humans back to the Moon and, someday, Mars.

Are there really 10? … It might also be fairly easy to rattle off a few more launch companies that accomplished big things in 2024. There’s United Launch Alliance, which finally debuted its long-delayed Vulcan rocket and flew two Atlas V missions and the final Delta IV mission, and Rocket Lab, which launched 16 missions with its small Electron rocket last year. Blue Origin flew its suborbital New Shepard vehicle on three human missions and one cargo-only mission and nearly launched its first orbital-class New Glenn rocket in 2024. That leaves Firefly Aerospace as the only other US company to reach orbit last year.

DoD announces lucrative hypersonics deal. Defense technology firm Kratos has inked a deal worth up to $1.45 billion with the Pentagon to help develop a low-cost testbed for hypersonic technologies, Breaking Defense reports. The award is part of the military’s Multi-Service Advanced Capability Hypersonic Test Bed (MACH-TB) 2.0 program. The MACH-TB program, which began as a US Navy effort, includes multiple “Task Areas.” For its part, Kratos will be tasked with “systems engineering, integration, and testing, to include integrated subscale, full-scale, and air launch services to address the need to affordably increase hypersonic flight test cadence,” according to the company’s release.

Multiple players … The team led by Kratos, which specializes in developing airborne drones and military weapons systems, includes Leidos, Rocket Lab, Stratolaunch, and others. Kratos last year revealed that its Erinyes hypersonic test vehicle successfully flew for a Missile Defense Agency experiment. Rocket Lab has launched multiple suborbital hypersonic experiments for the military using a modified version of its Electron rocket, and Stratolaunch reportedly flew a high-speed test vehicle and recovered it last month, according to Aviation Week & Space Technology. The Pentagon is interested in developing hypersonic weapons that can evade conventional air and missile defenses. (submitted by EllPeaTea)


ESA will modify some of its geo-return policies. An upcoming European launch competition will be an early test of efforts by the European Space Agency to modify its approach to policies that link contracts to member state contributions, Space News reports. ESA has long used a policy known as geo-return, where member states are guaranteed contracts with companies based in their countries in proportion to the contribution those member states make to ESA programs.

The third rail of European space … Advocates of geo-return argue that it gives member states an incentive to fund ESA programs: countries that contribute are guaranteed business and jobs from the agency’s programs in return. However, critics of geo-return, primarily European companies, claim that it creates inefficiencies that make them less competitive. One approach to revising geo-return is known as “fair contribution,” where ESA first holds competitions for projects, and member states then make contributions based on how companies in their countries fared in the competition. ESA will try the fair contribution approach for the upcoming launch competition to award contracts to European rocket startups. (submitted by EllPeaTea)

RFA is building a new rocket. German launch services provider Rocket Factory Augsburg (RFA) is currently focused on building a new first stage for the inaugural flight of its RFA One rocket, European Spaceflight reports. The stage that was initially earmarked for the flight was destroyed during a static fire test last year on a launch pad in Scotland. In a statement given to European Spaceflight, RFA confirmed that it expects to attempt an inaugural flight of RFA One in 2025.

Waiting on a booster … RFA says it is “fully focused on building a new first stage and qualifying it.” The rocket’s second stage and Redshift OTV third stage are already qualified for flight and are being stored until a new first stage is ready. The RFA One rocket will stand 98 feet (30 meters) tall and will be capable of delivering payloads of up to 1.3 metric tons (nearly 2,900 pounds) into polar orbits. RFA is one of several European startups developing commercial small satellite launchers and was widely considered the frontrunner before last year’s setback. (submitted by EllPeaTea)

Pentagon provides a boost for defense startup. Defense technology contractor Anduril Industries has secured a $14.3 million Pentagon contract to expand solid-fueled rocket motor production, as the US Department of Defense moves to strengthen domestic manufacturing capabilities amid growing supply chain concerns, Space News reports. The contract, awarded under the Defense Production Act, will support facility modernization and manufacturing improvements at Anduril’s Mississippi plant, the Pentagon said Tuesday.

Doing a solid … The Pentagon is keen to incentivize new entrants into the solid rocket manufacturing industry, which provides propulsion for missiles, interceptors, and other weapons systems. Two traditional defense contractors, Northrop Grumman and L3Harris, control almost all US solid rocket production. Companies like Anduril, Ursa Major, and X-Bow are developing solid rocket motor production capability. The Navy awarded Anduril a $19 million contract last year to develop solid rocket motors for the Standard Missile 6 program. (submitted by EllPeaTea)

Relativity’s value seems to be plummeting. For several years, an innovative, California-based launch company named Relativity Space has been the darling of investors and media. But the honeymoon appears to be over, Ars reports. A little more than a year ago, Relativity reached a valuation of $4.5 billion following its latest Series F fundraising round. This was despite only launching one rocket and then abandoning that program and pivoting to the development of a significantly larger reusable launch vehicle. The decision meant Relativity would not realize any significant revenue for several years, and Ars reported in September on some of the challenges the company has encountered developing the much larger Terran R rocket.

Gravity always wins … Relativity is a privately held company, so its financial statements aren’t public. However, we can glean some clues from the published quarterly report from Fidelity Investments, which owns Relativity shares. As of March 2024, Fidelity valued its 1.67 million shares at an estimated $31.8 million. However, in a report ending November 29 of last year, which was only recently published, Fidelity’s valuation of Relativity plummeted. Its stake in Relativity was then thought to be worth just $866,735—a per-share value of 52 cents. Shares in the other fundraising rounds are also valued at less than $1 each.

SpaceX has already launched four times this year. The space company is off to a fast start in 2025, with four missions in the first nine days of the year. Two of these missions launched Starlink internet satellites, and the other two deployed an Emirati-owned geostationary communications satellite and a batch of Starshield surveillance satellites for the National Reconnaissance Office. In its new year projections, SpaceX estimates it will launch more than 170 Falcon rockets, between Falcon 9 and Falcon Heavy, Spaceflight Now reports. This is in addition to SpaceX’s plans for up to 25 flights of the Starship rocket from Texas.

What’s in store this year?… Highlights of SpaceX’s launch manifest this year will likely include an attempt to catch and recover Starship after it returns from orbit, a first in-orbit cryogenic propellant transfer demonstration with Starship, and perhaps the debut of a second launch pad at Starbase in South Texas. For the Falcon rocket fleet, notable missions this year will include launches of commercial robotic lunar landers for NASA’s CLPS program and several crew flights, including the first human spaceflight mission to fly in polar orbit. According to public schedules, a Falcon 9 rocket could launch a commercial mini-space station for Vast, a privately held startup, before the end of the year. That would be a significant accomplishment, but we won’t be surprised if this schedule moves to the right.

China is dipping its toes into satellite refueling. China kicked off its 2025 launch activities with the successful launch of the Shijian-25 satellite Monday, aiming to advance key technologies for on-orbit refueling and extending satellite lifespans, Space News reports. The satellite launched on a Long March 3B into a geostationary transfer orbit, suggesting the unspecified target spacecraft for the refueling demo test might be in geostationary orbit more than 22,000 miles (nearly 36,000 kilometers) over the equator.

Under a watchful eye … China has tested mission extension and satellite servicing capabilities in space before. In 2021, China launched a satellite named Shijian-21, which docked with a defunct Beidou navigation satellite and towed it to a graveyard orbit above the geostationary belt. Shijian-21 reportedly carried robotic arms to capture and manipulate other objects in space. These kinds of technologies are dual-use, meaning they have civilian and military applications. The US Space Force is also interested in satellite life extension and refueling tech, so US officials will closely monitor Shijian-25’s actions in orbit.

SpaceX set to debut upgraded Starship. An upsized version of SpaceX’s Starship mega-rocket rolled to the launch pad early Thursday in preparation for liftoff on a test flight next week, Ars reports. The rocket could lift off as soon as Monday from SpaceX’s Starbase test facility in South Texas. This flight is the seventh full-scale demonstration launch for Starship. The rocket will test numerous upgrades, including a new flap design, larger propellant tanks, redesigned propellant feed lines, a new avionics system, and an improved antenna for communications and navigation.

The new largest rocket … Put together, all of these changes to the ship raise the rocket’s total height by nearly 6 feet (1.8 meters), so it now towers 404 feet (123.1 meters) tall. With this change, SpaceX will break its own record for the largest rocket ever launched. SpaceX plans to catch the rocket’s Super Heavy booster back at the launch site in Texas and will target a controlled splashdown of the ship in the Indian Ocean.

Blue Origin targets weekend launch of New Glenn. Blue Origin is set to launch its New Glenn rocket in a long-delayed, uncrewed test mission that would help pave the way for the space venture founded by Jeff Bezos to compete against Elon Musk’s SpaceX, The Washington Post reports. Blue Origin has confirmed it plans to launch the 320-foot-tall rocket during a three-hour launch window opening at 1 am EST (06:00 UTC) Sunday in the company’s first attempt to reach orbit.

Finally … This is a much-anticipated milestone for Blue Origin and for the company’s likely customers, which include the Pentagon and NASA. Data from this test flight will help the Space Force certify New Glenn to loft national security satellites, providing a new competitor for SpaceX and United Launch Alliance in the heavy-lift segment of the market. Blue Origin isn’t quite shooting for the Moon on this inaugural launch, but the company will attempt to reach orbit and try to land the New Glenn’s first stage booster on a barge in the Atlantic Ocean. (submitted by EllPeaTea)

Next three launches

Jan. 10: Falcon 9 | Starlink 12-12 | Cape Canaveral Space Force Station, Florida | 18:11 UTC

Jan. 12: New Glenn | NG-1 Blue Ring Pathfinder | Cape Canaveral Space Force Station, Florida | 06:00 UTC

Jan. 13: Jielong 3 | Unknown Payload | Dongfang Spaceport, Yellow Sea | 03:00 UTC


Stephen Clark is a space reporter at Ars Technica, covering private space companies and the world’s space agencies. Stephen writes about the nexus of technology, science, policy, and business on and off the planet.



Everyone agrees: 2024 the hottest year since the thermometer was invented


An exceptionally hot outlier, 2024 extends the streak of hottest years to 11.

With very few and very small exceptions, 2024 was unusually hot across the globe. Credit: Copernicus

Over the last 24 hours or so, the major organizations that keep track of global temperatures have released figures for 2024, and all of them agree: 2024 was the warmest year yet recorded, joining 2023 as an unusual outlier in terms of how rapidly things heated up. At least two of the organizations, the European Union’s Copernicus and Berkeley Earth, place the year at about 1.6° C above pre-industrial temperatures, marking the first time that the Paris Agreement goal of limiting warming to 1.5° has been exceeded.

NASA and the National Oceanic and Atmospheric Administration both place the mark at slightly below 1.5° C over pre-industrial temperatures (as defined by the 1850–1900 average). However, that difference largely reflects the uncertainties in measuring temperatures during that period rather than disagreement over 2024.

It’s hot everywhere

2023 had set a temperature record largely due to a switch to El Niño conditions midway through the year, which made the second half of the year exceptionally hot. It takes some time for that heat to make its way from the ocean into the atmosphere, so the streak of warm months continued into 2024, even as the Pacific switched into its cooler La Niña mode.

While El Niños are regular events, this one had an outsized impact because it was accompanied by unusually warm temperatures outside the Pacific, including record high temperatures in the Atlantic and unusual warmth in the Indian Ocean. Land temperatures reflect this widespread warmth, with elevated temperatures on all continents. Berkeley Earth estimates that 104 countries registered 2024 as the warmest on record, meaning 3.3 billion people felt the hottest average temperatures they had ever experienced.

Different organizations use slightly different methods to calculate the global temperature and have different baselines. For example, Copernicus puts 2024 at 0.72° C above a baseline that will be familiar to many people since they were alive for it: 1991 to 2020. In contrast, NASA and NOAA use a baseline that covers the entirety of the last century, which is substantially cooler overall. Relative to that baseline, 2024 is 1.29° C warmer.
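The baseline bookkeeping can be made concrete with a little arithmetic. A minimal sketch, using only figures quoted in this article: the same 2024 Copernicus estimate is 0.72° C above its recent-decades baseline and 1.60° C above pre-industrial temperatures, so the two baselines differ by about 0.88° C, and any anomaly can be rebased by adding that offset.

```python
def rebase(anomaly: float, baseline_offset: float) -> float:
    """Re-express an anomaly relative to a baseline that is
    `baseline_offset` degrees cooler than the original one."""
    return anomaly + baseline_offset

# Figures quoted in the article for Copernicus's 2024 estimate:
vs_recent_baseline = 0.72  # C above the recent-decades reference period
vs_preindustrial = 1.60    # C above the 1850-1900 pre-industrial period

# Implied offset between the two baselines (~0.88 C):
offset = vs_preindustrial - vs_recent_baseline

print(round(rebase(vs_recent_baseline, offset), 2))  # 1.6
```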

Lining up the baselines shows that these different services largely agree with each other; most of the differences are due to uncertainties in the measurements, with the rest accounted for by slightly different methods of handling things like areas with sparse data.

Describing the details of 2024, however, doesn’t really capture just how exceptional the warmth of the last two years has been. Starting in around 1970, there’s been a roughly linear increase in temperature driven by greenhouse gas emissions, despite many individual years that were warmer or cooler than the trend. The last two years have been extreme outliers from this trend. The last time there was a single comparable year to 2024 was back in the 1940s. The last time there were two consecutive years like this was in 1878.


Relative to the five-year temperature average, 2024 is an exceptionally large excursion. Credit: Copernicus

“These were during the ‘Great Drought’ of 1875 to 1878, when it is estimated that around 50 million people died in India, China, and parts of Africa and South America,” the EU’s Copernicus service notes. Despite many climate-driven disasters, the world at least avoided a similar experience in 2023-24.

Berkeley Earth provides a slightly different way of looking at it, comparing each year since 1970 with the amount of warming we’d expect from the cumulative greenhouse gas emissions.


Relative to the expected warming from greenhouse gasses, 2024 represents a large departure. Credit: Berkeley Earth

These show that, given year-to-year variations in the climate system, warming has closely tracked expectations over five decades. 2023 and 2024 mark a dramatic departure from that track, although it comes at the end of a decade where most years were above the trend line. Berkeley Earth estimates that there’s just a 1 in 100 chance of that occurring due to the climate’s internal variability.

Is this a new trend?

The big question is whether 2024 is an exception, after which we should expect things to fall back to the trend that’s dominated since the 1970s, or whether it marks a departure from the climate’s recent behavior. And that’s something we don’t have a great answer to.

If you take away the influence of recent greenhouse gas emissions and El Niño, you can focus on other potential factors. These include a slight increase expected due to the solar cycle approaching its maximum activity. But, beyond that, most of the other factors are uncertain. The Hunga Tonga eruption put lots of water vapor into the stratosphere, but the estimated effects range from slight warming to cooling equivalent to a strong La Niña. Reductions in pollution from shipping are expected to contribute to warming, but the amount is debated.

There is evidence that a decrease in cloud cover has allowed more sunlight to be absorbed by the Earth, contributing to the planet’s warming. But clouds are typically a response to other factors that influence the climate, such as the amount of water vapor in the atmosphere and the aerosols present to seed water droplets.

It’s possible that a factor that we missed is driving the changes in cloud cover or that 2024 just saw the chaotic nature of the atmosphere result in less cloud cover. Alternatively, we may have crossed a warming tipping point, where the warmth of the atmosphere makes cloud formation less likely. Knowing that will be critical going forward, but we simply don’t have a good answer right now.

Climate goals

There’s an equally unsatisfying answer to what this means for our chance of hitting climate goals. The stretch goal of the Paris Agreement is to limit warming to 1.5° C, because it leads to significantly less severe impacts than the primary, 2.0° target. That’s relative to pre-industrial temperatures, which are defined using the 1850–1900 period, the earliest time where temperature records allow a reconstruction of the global temperature.

Unfortunately, all the organizations that handle global temperatures have some differences in the analysis methods and data used. Given recent data, these differences result in very small divergences in the estimated global temperatures. But with the far larger uncertainties in the 1850–1900 data, they tend to diverge more dramatically. As a result, each organization has a different baseline, and different anomalies relative to that.

Consequently, Berkeley Earth registers 2024 as being 1.62° C above preindustrial temperatures, and Copernicus 1.60° C. In contrast, NASA and NOAA place it just under 1.5° C (1.47° and 1.46°, respectively). NASA’s Gavin Schmidt said this is “almost entirely due to the [sea surface temperature] data set being used” in constructing the temperature record.

There is, however, consensus that this isn’t especially meaningful on its own. There’s a good chance that temperatures will drop below the 1.5° mark on all the data sets within the next few years. We’ll want to see temperatures consistently exceed that mark for over a decade before we consider that we’ve passed the milestone.

That said, given that carbon emissions have barely budged in recent years, there’s little doubt that we will eventually end up clearly passing that limit (Berkeley Earth is essentially treating it as exceeded already). But there’s widespread agreement that each increment between 1.5° and 2.0° will likely increase the consequences of climate change, and any continuing emissions will make it harder to bring things back under that target in the future through methods like carbon capture and storage.

So, while we may have committed ourselves to exceed one of our major climate targets, that shouldn’t be viewed as a reason to stop trying to limit greenhouse gas emissions.


John is Ars Technica’s science editor. He has a Bachelor of Arts in Biochemistry from Columbia University, and a Ph.D. in Molecular and Cell Biology from the University of California, Berkeley. When physically separated from his keyboard, he tends to seek out a bicycle, or a scenic location for communing with his hiking boots.



On Dwarkesh Patel’s 4th Podcast With Tyler Cowen

Dwarkesh Patel again interviewed Tyler Cowen, largely about AI, so here we go.

Note that I take it as a given that the entire discussion is taking place in some form of an ‘AI Fizzle’ and ‘economic normal’ world, where AI does not advance too much in capability from its current form, in meaningful senses, and we do not get superintelligence [because of reasons]. It’s still massive additional progress by the standards of any other technology, but painfully slow by the ‘AGI is coming soon’ crowd.

That’s the only way I can make the discussion make at least some sense, with Tyler Cowen predicting 0.5%/year additional RGDP growth from AI. That level of capabilities progress is a possible world, although the various elements stated here seem like they are sometimes from different possible worlds.

I note that this conversation was recorded prior to o3 and all the year end releases. So his baseline estimate of RGDP growth and AI impacts has likely increased modestly.

I go very extensively into the first section on economic growth and AI. After that, the podcast becomes classic Tyler Cowen and is interesting throughout, but I will be relatively sparing in my notes in other areas, and am skipping over many points.

This is a speed premium and ‘low effort’ post, in the sense that this is mostly me writing down my reactions and counterarguments in real time, similar to how one would do a podcast. It is high effort in that I spent several hours listening to, thinking about and responding to the first fifteen minutes of a podcast.

As a convention: When I’m in the numbered sections, I’m reporting what was said. When I’m in the secondary sections, I’m offering (extensive) commentary. Timestamps are from the Twitter version.

[EDIT: In Tyler’s link, he correctly points out a confusion in government spending vs. consumption, which I believe is fixed now. As for his comment about market evidence for the doomer position, I’ve given my answer before, and I would assert the market provides substantial evidence neither for nor against anything but the most extreme of doomer positions, as in extreme in a way I have literally never heard one person assert, once you control for its estimate of AI capabilities (where it does indeed offer us evidence, and I’m saying that it’s too pessimistic). We agree there is no substantial and meaningful ‘peer-reviewed’ literature on the subject, in the way that Tyler is pointing.]

They recorded this at the Progress Studies conference, and Tyler Cowen has a very strongly held view that AI won’t accelerate RGDP growth much that Dwarkesh clearly does not agree with, so Dwarkesh Patel’s main thrust is to try comparisons and arguments and intuition pumps to challenge Tyler. Tyler, as he always does, has a ready response to everything, whether or not it addresses the point of the question.

  1. (1: 00) Dwarkesh doesn’t waste any time and starts off asking why we won’t get explosive economic growth. Tyler’s first answer is cost disease, that as AI works in some parts of the economy costs in other areas go up.

    1. That’s true in relative terms for obvious reasons, but in absolute terms or real resource terms the opposite should be true, even if we accept the implied premise that AI won’t simply do everything anyway. This should drive down labor costs and free up valuable human capital. It should aid in availability of many other inputs. It makes almost any knowledge acquisition, strategic decision or analysis, data analysis or gathering, and many other universal tasks vastly better.

    2. Tyler then answers this directly when asked at (2: 10) by saying cost disease is not about employees per se, it’s more general, so he’s presumably conceding the point about labor costs, saying that non-intelligence inputs that can’t be automated will bind more and thus go up in price. I mean, yes, in the sense that we have higher value uses for them, but so what?

    3. So yes, you can narrowly define particular subareas of some areas as bottlenecks and say that they cannot grow, and perhaps they can even be large areas if we impose costlier bottlenecks via regulation. But that still leaves lots of room for very large economic growth for a while – the issue can’t bind you otherwise, the math doesn’t work.
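The share-weighted arithmetic behind “the math doesn’t work” is worth making explicit: a stagnant bottlenecked slice caps aggregate growth only in proportion to its share. A minimal sketch (the 50/50 split and the 10% growth rate are illustrative assumptions, not numbers from the podcast):

```python
# Aggregate growth as a share-weighted average of sector growth rates.
# The sector shares and rates below are illustrative assumptions.

def aggregate_growth(shares, rates):
    """Share-weighted growth rate; shares should sum to 1."""
    return sum(s * r for s, r in zip(shares, rates))

# Even if a bottlenecked half of the economy grows 0%, the other half
# growing 10%/year still yields 5%/year overall:
print(f"{aggregate_growth([0.5, 0.5], [0.0, 0.10]):.1%}")  # → 5.0%
```

So for bottlenecks to hold aggregate growth near 0.5%/year, essentially the whole economy has to be bottlenecked, which is exactly the claim in dispute.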

  2. Tyler says government consumption [EDIT: I originally misheard this as spending, he corrected me, I thank him] at 18% of GDP (government spending is 38% but a lot of that is duplicative and a lot isn’t consumption), health care at 20%, education is 6% (he says 6-7%, Claude says 6%), the nonprofit sector (Claude says 5.6%) and says together that is half of the economy. Okay, sure, let’s tackle that.

    1. Healthcare is already seeing substantial gains from AI even at current levels. There are claims that up to 49% of doctor time is various forms of EMR and desk work that AIs could reduce greatly, certainly at least ~25%. AI can directly substitute for much of what doctors do in terms of advising patients, and this is already happening where the future is distributed. AI substantially improves medical diagnosis and decision making. AI substantially accelerates drug discovery and R&D, will aid in patient adherence and monitoring, and so on. And again, that’s without further capability gains. Insurance companies doubtless will embrace AI at every level. Need I go on here?

    2. Government spending at all levels is actually about 38% of GDP, but that’s cheating, only ~11% is non-duplicative and not transfers, interest (which aren’t relevant) or R&D (I’m assuming R&D would get a lot more productive).

    3. The biggest area is transfers. AI can’t improve the efficiency of transfers too much, but it also can’t be a bottleneck outside of transaction and administrative costs, which obviously AI can greatly reduce and are not that large to begin with.

    4. The second biggest area is provision of healthcare, which we’re already counting, so that’s duplicative. Third is education, which we count in the next section.

    5. Fourth is national defense, where efficiency per dollar or employee should get vastly better, to the point where failure to be at the AI frontier is a clear national security risk.

    6. Fifth is interest on the debt, which again doesn’t count, and also we wouldn’t care about if GDP was growing rapidly.

    7. And so on. What’s left to form the last 11% or so? Public safety, transportation and infrastructure, government administration, environment and natural resources and various smaller other programs. What happens here is a policy choice. We are already seeing signs of improvement in government administration (~2% of the 11%), the other 9% might plausibly stall to the extent we decide to do an epic fail.

    8. Education and academia is already being transformed by AI, in the sense of actually learning things, among anyone who is willing to use it. And it’s rolling through academia as we speak, in terms of things like homework assignments, in ways that will force change. So whether you think growth is possible depends on your model of education. If it’s mostly a signaling model then you should see a decline in education investment since the signals will decline in value and AI creates the opportunity for better more efficient signals, but you can argue that this could continue to be a large time and dollar tax on many of us.

    9. Nonprofits are about 20%-25% education, and ~50% is health care related, which would double count, so the remainder is only ~1.3% of GDP. This also seems like a dig at nonprofits and their inability to adapt to change, but why would we assume nonprofits can’t benefit from AI?

    10. What’s weird is that I would point to different areas that have the most important anticipated bottlenecks to growth, such as housing or power, where we might face very strong regulatory constraints and perhaps AI can’t get us out of those.

  3. (1: 30) He says it will take ~30 years for sectors of the economy that do not use AI well to be replaced by those that do use AI well.

    1. That’s a very long time, even in an AI fizzle scenario. I roll to disbelieve that estimate in most cases. But let’s even give it to him, and say it is true, and it takes 30 years to replace them, while the productivity of the replacement goes up 5%/year above incumbents, which are stagnant. Then you delay the growth, but you don’t prevent it, and if you assume this is a gradual transition you start seeing 1%+ yearly GDP growth boosts even in these sectors within a decade.
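That delay argument can be checked with a toy model (the linear hand-off and the 5%/year productivity edge are the assumptions stated above; everything else here is illustrative):

```python
# Toy model: incumbents are stagnant at productivity 1.0, entrants
# improve 5%/year, and entrants' share of sector output rises linearly
# from 0 to 1 over 30 years.

def sector_output(t, horizon=30, entrant_growth=0.05):
    share = min(t / horizon, 1.0)  # entrants' output share in year t
    return (1 - share) * 1.0 + share * (1 + entrant_growth) ** t

for t in (5, 10, 20):
    growth = sector_output(t) / sector_output(t - 1) - 1
    print(f"year {t:2d}: sector growing {growth:.1%}/year")
```

Under these assumptions the sector’s growth rate passes 1%/year within the first few years and keeps climbing, which is the “delayed but not prevented” point.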

  4. He concludes by saying some less regulated areas grow a lot, but that doesn’t get you that much, so you can’t have the whole economy ‘growing by 40%’ in a nutshell.

    1. I mean, okay, but that’s double Dwarkesh’s initial question of why we aren’t growing at 20%. So what exactly can we get here? I can buy this as an argument for AI fizzle world growing slower than it would have otherwise, but the teaser has a prediction of 0.5%, which is a whole different universe.

  1. (2: 20) Tyler asserts that value of intelligence will go down because more intelligence will be available.

    1. Dare I call this the Lump of Intelligence fallacy, after the Lump of Labor fallacy? Yes, to the extent that you are doing the thing an AI can do, the value of that intelligence goes down, and the value of AI intelligence itself goes down in economic terms because its cost of production declines. But to the extent that your intelligence complements and unlocks the AI’s, or is empowered by the AI’s and is distinct from it (again, we must be in fizzle-world), the value of that intelligence goes up.

    2. Similarly, when he talks about intelligence as ‘one input’ in the system among many, that seems like a fundamental failure to understand how intelligence works, a combination of intelligence denialism (failure to buy that much greater intelligence could meaningfully exist) and a denial of substitution or ability to innovate as a result – you couldn’t use that intelligence to find alternative or better ways to do things, and you can’t use more intelligence as a substitute for other inputs. And you can’t substitute the things enabled more by intelligence much for the things that aren’t, and so on.

    3. It also assumes that intelligence can’t be used to convince us to overcome all these regulatory barriers and bottlenecks. Whereas I would expect that raising the intelligence baseline greatly would make it clear to everyone involved how painful our poor decisions were, and also enable improved forms of discourse and negotiation and cooperation and coordination, and also greatly favor those that embrace it over those that don’t, and generally allow us to take down barriers. Tyler would presumably agree that if we were to tear down the regulatory state in the places it was holding us back, that alone would be worth far more than his 0.5% of yearly GDP growth, even with no other innovation or AI.

  1. (2: 50) Dwarkesh challenges Tyler by pointing out that the Industrial Revolution resulted in a greatly accelerated rate of economic growth versus previous periods, and asks what Tyler would say to someone from the past doubting it was possible. Tyler attempts to dodge (and is amusing doing so) by saying they’d say ‘looks like it would take a long time’ and he would agree.

    1. Well, it depends what a long time is, doesn’t it? 2% sustained annual growth (or 8%!) is glacial in some sense and mind boggling by ancient standards. ‘Take a long time’ in AI terms, such as what is actually happening now, could still look mighty quick if you compared it to most other things. OpenAI has 300 million MAUs.

  2. (3: 20) Tyler trots out the ‘all the financial prices look normal’ line, that they are not predicting super rapid growth and neither are economists or growth experts.

    1. Yes, the markets are being dumb, the efficient market hypothesis is false, and also aren’t you the one telling me I should have been short the market? Well, instead I’m long, and outperforming. And yes, economists and ‘experts on economic growth’ aren’t predicting large amounts of growth, but their answers are Obvious Nonsense to me and saying that ‘experts don’t expect it’ without arguments why isn’t much of an argument.

  3. (3: 40) Aside, since you kind of asked: So who am I to say different from the markets and the experts? I am Zvi Mowshowitz. Writer. Son of Solomon and Deborah Mowshowitz. I am the missing right hand of the one handed economists you cite. And the one warning you about what is about to kick Earth’s sorry ass into gear. I speak the truth as I see it, even if my voice trembles. And a warning that we might be the last living things this universe ever sees. God sent me.

  4. Sorry about that. But seriously, think for yourself, schmuck! Anyway.

What would happen if we had more people? More of our best people? Got more out of our best people? Why doesn’t AI effectively do all of these things?

  1. (3: 55) Tyler is asked wouldn’t a large rise in population drive economic growth? He says no, that’s too much a 1-factor model, in fact we’ve seen a lot of population growth without innovation or productivity growth.

    1. Except that Tyler is talking here about growth on a per capita basis. If you add AI workers, you increase the productive base, but they don’t count towards the capita.

  2. Tyler says ‘it’s about the quality of your best people and institutions.’

    1. But quite obviously AI should enable a vast improvement in the effective quality of your best people, it already does, Tyler himself would be one example of this, and also the best institutions, including because they are made up of the best people.

  3. Tyler says ‘there’s no simple lever, intelligence or not, that you can push on.’ Again, intelligence as some simple lever, some input component.

    1. The whole point of intelligence is that it allows you to do a myriad of more complex things, and to better choose those things.

  4. Dwarkesh points out the contradiction between ‘you are bottlenecked by your best people’ and asserting cost disease and constraint by your scarce input factors. Tyler says Dwarkesh is bottlenecked, Dwarkesh points out that with AGI he will be able to produce a lot more podcasts. Tyler says great, he’ll listen, but he will be bottlenecked by time.

    1. Dwarkesh’s point generalizes. AGI greatly expands the effective productive time of the best people, and extends their capabilities while doing so.

    2. AGI can also itself become ‘the best people’ at some point. If that was the bottleneck, then the goose asks, what happens now, Tyler?

  5. (5: 15) Tyler cites that much of sub-Saharan Africa still does not have clean reliable water, and intelligence is not the bottleneck there. And that taking advantage of AGI will be like that.

    1. So now we’re expecting AGI in this scenario? I’m going to kind of pretend we didn’t hear that, or that this is a very weak AGI definition, because otherwise the scenario doesn’t make sense at all.

    2. Intelligence is not directly the bottleneck there, true, but yes quite obviously Intelligence Solves This if we had enough of it and put those minds to that particular problem and wanted to invest the resources towards it. Presumably Tyler and I mostly agree on why the resources aren’t being devoted to it.

    3. What would it mean for similar issues to be involved in taking advantage of AGI? Well, first, it would mean that you can’t use AGI to get to ASI (no I can’t explain why), but again that’s got to be a baseline assumption here. After that, well, sorry, I failed to come up with a way to finish this that makes it make sense to me, beyond a general ‘humans won’t do the things and will throw up various political and legal barriers.’ Shrug?

  6. (5: 35) Dwarkesh speaks about a claim that there is a key shortage of geniuses, and that America’s problems come largely from putting its geniuses in places like finance, whereas Taiwan puts them in tech, so the semiconductors end up in Taiwan. Wouldn’t having lots more of those types of people eat a lot of bottlenecks? What would happen if everyone had 1000 times more of the best people available?

  7. Tyler Cowen, author of a very good book about Talent and finding talent and the importance of talent, says he didn’t agree with that post, and says returns to IQ in the labor market are amazingly low, and that successful people are smart but mostly they have 8-9 areas where they’re an 8-9 on a 1-10 scale, with one 11+ somewhere, and a lot of determination.

    1. All right, I don’t agree that intelligence doesn’t offer returns now, and I don’t agree that intelligence wouldn’t offer returns even at the extremes, but let’s again take Tyler’s own position as a given…

    2. But that exactly describes what an AI gives you! An AI is the ultimate generalist. An AGI will be a reliable 8-9 on everything, actual everything.

    3. And it would also turn everyone else into an 8-9 on everything. So instead of needing to find someone 11+ in one area, plus determination, plus having 8-9 in ~8 areas, you can remove that last requirement. That will hugely expand the pool of people in question.

    4. So there’s two obvious very clear plans here: You can either use AI workers who have that ultimate determination and are 8-9 in everything and 11+ in the areas where AIs shine (e.g. math, coding, etc).

    5. Or you can also give your other experts an AI companion executive assistant to help them, and suddenly they’re an 8+ in everything and also don’t have to deal with a wide range of things.

  8. (6: 50) Tyler says, talk to a committee at a Midwestern university about their plans for incorporating AI, then get back to him and talk to him about bottlenecks. Then they’ll write a report, the report will sound like GPT-4, and we’ll have a report.

    1. Yes, the committee will not be smart or fast about its official policy for how to incorporate AI into its existing official activities. If you talk to them now they will act like they have a plagiarism problem and that’s it.

    2. So what? Why do we need that committee to form a plan or approve anything or do anything at all right now, or even for a few years? All the students are already using AI. The professors are rapidly being forced to adapt to AI. Everyone doing the research will soon be using AI. Half that committee, three years from now, will prepare for that meeting using AI. Their phones will all work based on AI. They’ll be talking to their AI phone assistant companions that plan their schedules. You think this will all involve 0.5% GDP growth?

  9. (7: 20) Dwarkesh asks, won’t the AIs be smart, super conscientious and work super hard? Tyler explicitly affirms the 0.5% GDP growth estimate, that this will transform the world over 30 years but ‘over any given year we won’t so much notice it.’ Things like drug developments that would have taken 20 years now take 10 years, but you won’t feel it as revolutionary for a long time.

    1. I mean, it’s already getting very hard to miss. If you don’t notice it in 2025 or at least 2026, and you’re in the USA, check your pulse, you might be dead, etc.

    2. Is that saying we will double productivity in pharmaceutical R&D, and that it would have far more than doubled if progress didn’t require long expensive clinical trials, so other forms of R&D should be accelerated much more?

    3. For reference, according to Claude, R&D in general contributes about 0.3% to RGDP growth per year right now. Suppose we were to double that effect in roughly the half of current R&D spend that is bottlenecked in similar fashion, while the other half instead goes up by more.

    4. Claude also estimates that R&D spending would, if returns to R&D doubled, go up by 30%-70% on net.

    5. So we seem to be looking at more than 0.5% RGDP growth per year from R&D effects alone, between additional spending on it and greater returns. And obviously AI is going to have additional other returns.
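Putting those estimates together (the 0.3% baseline and the 30%-70% spending response are the Claude figures quoted above; the assumption that the unbottlenecked half of R&D sees a tripling of returns is mine, purely for illustration):

```python
# Back-of-envelope: R&D's contribution to RGDP growth under the
# doubling scenario. The 3x multiplier on the unbottlenecked half is
# an illustrative assumption, not a figure from the text.

baseline = 0.003                          # ~0.3%/year contribution today
returns_multiplier = 0.5 * 2 + 0.5 * 3    # half doubles, half triples (assumed)
spend_multiplier = 1.5                    # midpoint of the 30%-70% increase

new_contribution = baseline * returns_multiplier * spend_multiplier
print(f"implied R&D contribution: {new_contribution:.2%}/year")
```

Even with much more conservative multipliers (say, returns merely doubling everywhere and spending up only 30%), this lands above 0.5%/year from R&D effects alone.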

This is a plausible bottleneck, but that implies rather a lot of growth.

  1. (8: 00) Dwarkesh points out that Progress Studies is all about all the ways we could unlock economic growth, yet Tyler says that tons more smart conscientious digital workers wouldn’t do that much. What gives? Tyler again says bottlenecks, and adds on energy as an important consideration and bottleneck.

    1. Feels like bottleneck is almost a magic word or mantra at this point.

    2. Energy is a real consideration, yes the vision here involves spending a lot more energy, and that might take time. But also we see rapidly declining costs, including energy costs, to extract the same amount of intelligence, things like 10x savings each year.

    3. And for inference purposes we can outsource our needs elsewhere, which we would if this was truly bottlenecking explosive growth, and so on. So yes, energy will indeed be an important limiting factor and will be strained, especially for pushing the frontier or for heavy use of o3-style very expensive inference.

    4. I don’t expect it to bind medium-term economic growth so much in a slow growth scenario, and the bottlenecks involved here shouldn’t compound with others. In a high growth takeoff scenario, I do think energy could bind far more impactfully.

    5. Another way of looking at this is that if the price of energy goes substantially up due to AI, or at least the price of energy outside of potentially ‘government-protected uses,’ then that can only happen if it is having a large economic impact. If it doesn’t raise the price of energy a lot, then no bottleneck exists.

Tyler Cowen and I think very differently here.

  1. (9: 25) Fascinating moment. Tyler says he goes along with the experts in general, but agrees that ‘the experts’ on basically everything but AI are asleep at the wheel when it comes to AI – except when it comes to their views on diffusions of new technology in general, where the AI people are totally wrong. His view is, you get the right view by trusting the experts in each area, and combining them.

    1. Tyler seems to be making an argument from reference class expertise? That this is a ‘diffusion of technology’ question, so those who are experts on that should be trusted?

    2. Even if they don’t actually understand AI and what it is and its promise?

    3. That’s not how I roll. At all. As noted above in this post, and basically all the time. I think that you have to take the arguments being made, and see if you agree with them, and whether and how much they apply to the case of AI and especially AGI. Saying ‘the experts in area [X] predict [Y]’ is a reasonable placeholder if you don’t have the ability to look at the arguments and models and facts involved, but hey look, we can do that.

    4. Simply put, while I do think the diffusion experts are pointing to real issues that will importantly slow down adaptation, and indeed we are seeing what for many is depressingly slow adaptation, they won’t slow it down all that much, because this is fundamentally different. AI and especially AI workers ‘adapt themselves’ to a large extent, the intelligence and awareness involved is in the technology itself, and it is digital and we have a ubiquitous digital infrastructure we didn’t have until recently.

    5. It is also way too valuable a technology, even right out of the gate on your first day, and you will start to be forced to interact with it whether you like it or not, in ways that will make it very difficult and painful to ignore. And the places it is most valuable will move very quickly. And remember, LLMs will get a lot better.

    6. Suppose, as one would reasonably expect, by 2026 we have strong AI agents, capable of handling for ordinary people a wide variety of logistical tasks, sorting through information, and otherwise offering practical help. Apple Intelligence is partly here, Claude Alexa is coming, Project Astra is coming, and these are pale shadows of the December 2025 releases I expect. How long would adaptation really take? Once you have that, what stops you from then adapting AI in other ways?

    7. Already, yes, adaptation is painfully slow, but it is also extremely fast. In two years ChatGPT alone has 300 million MAU. A huge chunk of homework and grading is done via LLMs. A huge chunk of coding is done via LLMs. The reason why LLMs are not catching on even faster is that they’re not quite ready for prime time in the fully user-friendly ways normies need. That’s about to change in 2025.

Dwarkesh tries to use this as an intuition pump. Tyler’s not having it.

  1. (10: 15) Dwarkesh asks, what would happen if the world population would double? Tyler says, depends what you’re measuring. Energy use would go up. But he doesn’t agree with population-based models, too many other things matter.

    1. Feels like Tyler is answering a different question. I see Dwarkesh as asking, wouldn’t the extra workers mean we could simply get a lot more done, wouldn’t (total, not per capita) GDP go up a lot? And Tyler’s not biting.

  2. (11: 10) Dwarkesh tries asking about shrinking the population 90%. Shrinking is different, Tyler says: the delta can kill you, whereas growth might not help you.

    1. Very frustrating. I suppose this does partially respond, by saying that it is hard to transition. But man I feel for Dwarkesh here. You can feel his despair as he transitions to the next question.

  1. (11: 35) Dwarkesh asks what are the specific bottlenecks? Tyler says: Humans! All of you! Especially you who are terrified.

    1. That’s not an answer yet, but then he actually does give one.

  2. He says once AI starts having impact, there will be a lot of opposition to it, not primarily on ‘doomer’ grounds but based on: Yes, this has benefits, but I grew up and raised my kids for a different way of life, I don’t want this. And there will be a massive fight.

    1. Yes. He doesn’t even mention jobs directly but that will be big too. We already see that the public strongly dislikes AI when it interacts with it, for reasons I mostly think are not good reasons.

    2. I’ve actually been very surprised how little resistance there has been so far, in many areas. AIs are basically being allowed to practice medicine, to function as lawyers, and do a variety of other things, with no effective pushback.

    3. The big pushback has been for AI art and other places where AI is clearly replacing creative work directly. But that has features that seem distinct.

    4. Yes people will fight, but what exactly do they intend to do about it? People have been fighting such battles for a while, every year I watch the battle for Paul Bunyan’s Axe. He still died. I think there’s too much money at stake, too much productivity at stake, too many national security interests.

    5. Yes, it will cause a bunch of friction, and slow things down somewhat, in the scenarios like the one Tyler is otherwise imagining. But if that’s the central actual thing, it won’t slow things down all that much in the end. Rarely has.

    6. We do see some exceptions, especially involving powerful unions, where the anti-automation side seems to do remarkably well, see the port strike. But also see which side of that the public is on. I don’t like their long term position, especially if AI can seamlessly walk in and take over the next time they strike. And that, alone, would probably be +0.1% or more to RGDP growth.

  1. (12: 15) Dwarkesh tries using China as a comparison case. If you can do 8% growth for decades merely by ‘catching up’ why can’t you do it with AI? Tyler responds, China’s in a mess now, they’re just a middle income country, they’re the poorest Chinese people on the planet, a great example of how hard it is to scale. Dwarkesh pushes back that this is about the previous period, and Tyler says well, sure, from the $200 level.

    1. Dwarkesh is so frustrated right now. He’s throwing everything he can at Tyler, but Tyler is such a polymath that he has detail points for anything and knows how to pivot away from the question intents.

  1. (13: 40) Dwarkesh asks, has Tyler’s attitude on AI changed from nine months ago? He says he sees more potential and there was more progress than he expected, especially o1 (this was before o3). The questions he wrote for GPT-4, which Dwarkesh got all wrong, are now too easy for models like o1. And he ‘would not be surprised if an AI model beat human experts on a regular basis within three years.’ He equates it to the first Kasparov vs. Deep Blue match, which Kasparov won, before the second match which he lost.

    1. I wouldn’t be surprised if this happens in one year.

    2. I wouldn’t be that shocked o3 turns out to do it now.

    3. Tyler’s expectations here, to me, contradict his statements earlier. Not strictly, they could still both be true, but it seems super hard.

    4. How much would availability of above-human level economic thinking help us in aiding economic growth? How much would better economic policy aid economic growth?

We take a detour to other areas, I’ll offer brief highlights.

  1. (15: 45) Why are founders staying in charge important? Courage. Making big changes.

  2. (19: 00) What is going on with the competency crisis? Tyler sees high variance at the top. The best are getting better, such as in chess or basketball, and also a decline in outright crime and failure. But there’s a thick median not quite at the bottom that’s getting worse, and while he thinks true median outcomes are about static (since more kids take the tests) that’s not great.

  3. (22: 30) Bunch of shade on both Churchill generally and on being an international journalist, including saying it’s not that impressive because how much does it pay?

    1. He wasn’t paid that much as Prime Minister either, you know…

  4. (24: 00) Why are all our leaders so old? Tyler says current year aside we’ve mostly had impressive candidates, and most of the leadership in Washington in various places (didn’t mention Congress!) is impressive. Yay Romney and Obama.

    1. Yes, yay Romney and Obama as our two candidates. So it’s only been three election cycles where both candidates have been… not ideal. I do buy Tyler’s claim that Trump has a lot of talent in some ways, but, well, ya know.

    2. If you look at the other candidates for both nominations over that period, I think you see more people who were mostly also not so impressive. I would happily have taken Obama over every candidate on the Democratic side in 2016, 2020 or 2024, and Romney over every Republican (except maybe Kasich) in those elections as well.

    3. This also doesn’t address Dwarkesh’s concern about age. What about the age of Congress and their leadership? It is very old, on both sides, and things are not going so great.

    4. I can’t speak to the quality of people in the agencies.

  5. (27: 00) Commentary on early-mid 20th century leaders being terrible, and how when there is big change there are arms races and sometimes bad people win them (‘and this is relevant to AI’).

For something that is going to not cause that much growth, Tyler sees AI as a source for quite rapid change in other ways.

  1. (34: 20) Tyler says all inputs other than AI rise in value, but you have to do different things. He’s shifting from producing content to making connections.

    1. This again seems to be a disconnect. If AI is sufficiently impactful as to substantially increase the value of all other inputs, then how does that not imply substantial economic growth?

    2. Also this presumes that the AI can’t be a substitute for you, or that it can’t be a substitute for other people that could in turn be a substitute for you.

    3. Indeed, I would think the default model would presumably be that the value of all labor goes down, even for things where AI can’t do it (yet) because people substitute into those areas.

  2. (35: 25) Tyler says he’s writing his books primarily for the AIs, he wants them to know he appreciates them. And the next book will be even more for the AIs so it can shape how they see the AIs. And he says, you’re an idiot if you’re not writing for the AIs.

    1. Basilisk! Betrayer! Misaligned!

    2. ‘What the AIs will think of you’ is actually an underrated takeover risk, and I pointed this out as early as AI #1.

    3. The AIs will be smarter and better at this than you, and also will be reading what the humans say about you. So maybe this isn’t as clever as it seems.

    4. My mind boggles that it could be correct to write for the AIs… but you think they will only cause +0.5% GDP annual growth.

  3. (36: 30) What won’t AIs get from one’s writing? That vibe you get talking to someone for the first 3 minutes? Sense of humor?

    1. I expect the AIs will increasingly have that stuff, at least if you provide enough writing samples. They have true sight.

    2. Certainly if they have interview and other video data to train with, that will work over time.

  1. (37: 25) What happens when Tyler turns down a grant in the first three minutes? Usually it’s failure to answer a question, like ‘how do you build out your donor base?’ without which you have nothing. Or someone focuses on the wrong things, or cares about the wrong status markers, and 75% of the value doesn’t show up in the transcript, which is weird since the things Tyler names seem like they would be in the transcript.

  2. (42: 15) Tyler’s portfolio is diversified mutual funds, US-weighted. He has legal restrictions on most other actions such as buying individual stocks, but he would keep the same portfolio regardless.

    1. Mutual funds over ETFs? Gotta chase that lower expense ratio.

    2. I basically think This Is Fine as a portfolio, but I do think he could do better if he actually tried to pick winners.

  3. (42: 45) Tyler expects gains to increasingly fall to private companies that see no reason to share those gains with the public. He doesn’t have enough wealth to get into the good private investments, but he has enough wealth for his purposes anyway; even with more money he’d mostly do what he’s doing now.

    1. Yep, I think he’s right about what he would be doing, and I too would mostly be doing the same things anyway. Up to a point.

    2. If I had a billion dollars or what not, that would be different, and I’d be trying to make a lot more things happen in various ways.

    3. This implies the efficient market hypothesis is rather false, doesn’t it? The private companies are severely undervalued in Tyler’s model. If private markets ‘don’t want to share the gains’ with public markets, that implies that public markets wouldn’t give fair valuations to those companies. Otherwise, why would one want such lack of liquidity and diversification, and all the trouble that comes with staying private?

    4. If that’s true, what makes you think Nvidia should only cost $140 a share?

Tyler Cowen doubles down on dismissing AI optimism, and is done playing nice.

  1. (46: 30) Tyler circles back to the rate of diffusion of tech change, with a very clear attitude that he’s right and everyone else is being an idiot by not agreeing with him, that all they have is ‘AI will immediately change everything’ and ‘some hyperventilating blog posts.’ AIs making more AIs? Diminishing returns! Ricardo knew this! Well, that was about humans breeding. But it’s good that San Francisco ‘doesn’t know about’ diminishing returns and the correct pessimism that results.

    1. This felt really arrogant, and willfully out of touch with the actual situation.

    2. You can say the AIs wouldn’t be able to do this, but no: Ricardo did not ‘know this,’ and invoking ‘diminishing returns’ does not apply here, because the whole point of ‘AIs making AIs’ is that the new AIs would be superior to the old AIs, a cycle you could repeat. The core reason you eventually get diminishing returns from adding more people is that they’re all drawn from the same distribution of people.

    3. I don’t even know what to say at this point to ‘hyperventilating blog posts.’ Are you seriously making the argument that if people write blog posts, that means their arguments don’t count? I mean, yes, Tyler has very much made exactly this argument in the past, that if it’s not in a Proper Academic Journal then it does not count and he is correct to not consider the arguments or update on them. And no, they’re mostly not hyperventilating or anything like that, but that’s also not an argument even if they were.

    4. What we have are, quite frankly, extensive highly logical, concrete arguments about the actual question of what [X] will happen and what [Y]s will result from that, including pointing out that much of the arguments being made against this are Obvious Nonsense.

    5. Diminishing returns holds as a principle in a variety of conditions, yes, and is a very important concept to know. But there are other situations with increasing returns, and also a lot of threshold effects, even outside of AI. And San Francisco importantly knows this well.

    6. Saying there must be diminishing returns to intelligence, and that this means nothing that fast or important is about to happen when you get a lot more of it, completely begs the question of what it even means to have a lot more intelligence.

    7. Earlier Tyler used chess and basketball as examples, and talked about the best youth being better, and how that was important because the best people are a key bottleneck. That sounds like a key case of increasing returns to scale.

    8. Humanity is a very good example of where intelligence, at least up to some critical point, very obviously had increasing returns to scale. If you are below a certain threshold of intelligence as a human, your effective productivity is zero. Humanity having a critical amount of intelligence gave it mastery of the Earth. Tell the gorillas and lions that still exist about decreasing returns to intelligence.

    9. For various reasons, with the way our physical world and civilization are constructed, we typically don’t end up rewarding relatively high intelligence individuals with that much in the way of outsized economic returns versus ordinary slightly-above-normal intelligence individuals.

    10. But that is very much a product of our physical limitations and current social dynamics and fairness norms, and the concept of a job with essentially fixed pay, and actual good reasons not to try for many of the higher paying jobs out there in terms of life satisfaction.

    11. In areas and situations where this is not the case, returns look very different.

    12. Tyler Cowen himself is an excellent example of increasing returns to scale. The fact that Tyler can read and do so much enables him to do the thing he does at all, and to enjoy oversized returns in many ways. And if you decreased his intelligence substantially, he would be unable to produce at anything like this level. If you increased his intelligence substantially or ‘sped him up’ even more, I think that would result in much higher returns still, and also AI has made him substantially more productive already as he no doubt realizes.

    13. (I’ve been over all this before, but seems like a place to try it again.)
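
The disagreement above turns on whether ‘AIs making AIs’ behaves like adding more workers from the same pool (diminishing returns) or like a compounding cycle of successors. A toy sketch of the difference, entirely my own illustration with made-up functional forms (square-root scaling, a 10% per-generation gain), not anything from the podcast:

```python
# Toy intuition pump: adding workers drawn from a fixed ability pool
# versus a recursive cycle where each generation builds a better successor.
# The constants are arbitrary assumptions chosen only to show the shapes.

def output_fixed_pool(workers: int) -> float:
    """Diminishing returns: output grows like the square root of headcount."""
    return workers ** 0.5

def output_recursive(generations: int, improvement: float = 1.1) -> float:
    """Compounding returns: each generation multiplies capability by `improvement`."""
    capability = 1.0
    for _ in range(generations):
        capability *= improvement
    return capability

if __name__ == "__main__":
    for n in (10, 50, 100):
        print(n, round(output_fixed_pool(n), 1), round(output_recursive(n), 1))
```

Whatever constants you pick, the first curve flattens while the second eventually dominates; the disagreement is about which structure applies, not about any particular estimate.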

Trying to wrap one’s head around all of it at once is quite a challenge.

  1. (48: 45) Tyler worries about despair in certain areas from AI and worries about how happy it will make us, despite expecting full employment pretty much forever.

    1. If you expect full employment forever, then you either expect AI progress to fully stall, or there’s something very important you really don’t believe in, or both. I don’t understand: what does Tyler think happens once the AIs can do anything digital as well as most or all humans? What does he think will happen when we use that to solve robotics? What are all these humans going to be doing to get to full employment?

    2. It is possible the answer is ‘government mandated fake jobs’ but then it seems like an important thing to say explicitly, since that’s actually more like UBI.

  2. Tyler Cowen: “If you don’t have a good prediction, you should be a bit wary and just say, “Okay, we’re going to see.” But, you know, some words of caution.”

    1. YOU DON’T SAY.

    2. Further implications left as an exercise to the reader, who is way ahead of me.

  1. (54: 30) Tyler says that the people in DC are wise and think on the margin, whereas the SF people are not wise and think in infinities (he also says they’re the most intelligent hands down, elsewhere), and the EU people are wisest of all, but that if the EU people ran the world the growth rate would be -1%. Whereas the USA has so far maintained the necessary balance here well.

    1. If the wisdom you have would bring you to that place, are you wise?

    2. This is such a strange view of what constitutes wisdom. Yes, the wise man here knows more things and is more cultured, thinks more prudently, and is economically sensible by thinking on the margin, and all that. But as Tyler points out, a society of such people would decay and die. It is not productive. In the ultimate test of outcomes, and in supporting growth, it fails.

    3. Tyler says you need balance, but he’s at a Progress Studies conference, which should make it clear that no, America has grown in this sense ‘too wise’ and insufficiently willing to grow, at least on the wise margin.

    4. Given what the world is about to be like, you need to think in infinities. You need to be infinitymaxing. The big stuff really will matter more than the marginal revolution. That’s kind of the point.

    5. You still have to, day to day, constantly think on the margin, of course.

  2. (55: 10) Tyler says he’s a regional thinker from New Jersey, an uncultured barbarian who only has a veneer of culture from collecting information; knowing about culture is not the same as being cultured. America falls flat in a lot of ways that would bother a cultured Frenchman, but Tyler is used to them, so they don’t bother him.

    1. I think Tyler is wrong here, to his own credit. He is not a regional thinker; if anything he is far less a regional thinker than the typical ‘cultured’ person he speaks about. And to the extent that he is ‘uncultured,’ it is because he has not taken on many of the burdens and social obligations of culture, and those things are to be avoided. He would be fully capable of ‘acting cultured’ if the situation called for it; no one would be mistaken about anything.

    2. He refers to his approach as an ‘autistic approach to culture.’ He seems to mean this in a pejorative way, that an autistic approach to things is somehow not worthy or legitimate or ‘real.’ I think it is all of those things.

    3. Indeed, the autistic-style approach to pretty much anything, in my view, is Playing in Hard Mode, with much higher startup costs, but brings a deeper and superior understanding once completed. The cultured Frenchman is like a fish in water, whereas Tyler understands and can therefore act on a much deeper, more interesting level. He can deploy culture usefully.

  3. (56: 00) What is autism? Tyler says it is officially defined by deficits, by which definition no one there [at the Progress Studies convention] is autistic. But in terms of other characteristics maybe a third of them would count.

    1. I think the term ‘autistic’ has been expanded and overloaded in a way that was not wise, but at this point we are stuck with it. Depending on context it now means both the deficits and also the general approach that high-functioning people with those deficits come to take to navigating life: consciously processing and knowing the elements of systems and how they fit together, treating words as having meanings, and having a map that matches the territory. Those who are not autistic navigate largely on vibes.

    2. By this definition, being the non-deficit form of autistic is excellent, a superior way of being at least in moderation and in the right spots, for those capable of handling it and its higher cognitive costs.

    3. Indeed, many people have essentially none of this set of positive traits and ways of navigating the world, and it makes them very difficult to deal with.

  4. (56: 45) Why is tech so bad at having influence in Washington? Tyler says they’re getting a lot more influential quickly, largely due to national security concerns, which is why AI is being allowed to proceed.

For a while now I have found Tyler Cowen’s positions on AI very frustrating (see for example my coverage of the 3rd Cowen-Patel podcast), especially on questions of potential existential risk and expected economic growth, and what intelligence means and what it can do and is worth. This podcast did not address existential risks at all, so most of this post is about me trying (once again!) to explain why Tyler’s views on returns to intelligence and future economic growth don’t make sense to me, seeming well outside reasonable bounds.

I try to offer various arguments and intuition pumps, playing off of Dwarkesh’s attempts to do the same. It seems like there are very clear pathways, using Tyler’s own expectations and estimates, that on their own establish more growth than he expects, assuming AI is allowed to proceed at all.

I gave only quick coverage to the other half of the podcast, but don’t skip that other half. I found it very interesting, with a lot of new things to think about, but they aren’t areas where I feel as ready to go into detailed analysis, and was doing triage. In a world where we all had more time, I’d love to do dives into those areas too.

On that note, I’d also point everyone to Dwarkesh Patel’s other recent podcast, which was with physicist Adam Brown. It repeatedly blew my mind in the best of ways, and I’d love to be in a different branch where I had the time to dig into some of the statements here. Physics is so bizarre.

On Dwarkesh Patel’s 4th Podcast With Tyler Cowen