Author name: Tim Belzer


These are the lasting things that Half-Life 2 gave us, besides headcrabs and crowbars


Beyond the game itself (which rocks), Half-Life 2 had a big impact on PC gaming.

This article is part of our 20th anniversary of Half-Life 2 series. Credit: Aurich Lawson

It’s Half-Life 2 week at Ars Technica! This Saturday, November 16, is the 20th anniversary of the release of Half-Life 2—a game of historical importance for the artistic medium and technology of computer games. Each day up through the 16th, we’ll be running a new article looking back at the game and its impact.

“Well, I just hate the idea that our games might waste people’s time. Why spend four years of your life building something that isn’t innovative and is basically pointless?”

Valve Software founder Gabe Newell is quoted by Geoff Keighley—yes, the Game Awards guy, back then a GameSpot writer—as saying this in June 1999, six months after the original Half-Life launched. Newell gave his team no real budget or deadline, only the assignment to “follow up the best PC game of all time” and redefine the genre.

When Half-Life 2 arrived in November 2004, the Collector’s Edition contained about 2.6GB of files. The game, however, contained so many things that would seem brand new in gaming, or just brave, that it’s hard to even list them.

Except I’m going to try that right here. Some will be hard to pin definitively in time to Half-Life 2 (HL2). But like many great games, HL2 refined existing ideas, borrowed others, and had a few of its own to show off.

Note that some aspects of the game itself, its status as Steam’s big push title, and what it’s like to play it today, are covered by other writers during Ars’ multi-day celebration of the game’s 20th anniversary. That includes the Gravity Gun.

How many film and gaming careers were launched by people learning how to make the Scout do something goofy?

Credit: Valve


The Source Engine

It’s hard to imagine another game developer building an engine with such a forward-thinking mission as Source. Rather than just build the thing that runs their next game, Valve crafted Source to be modular, such that its core could be continually improved (and shipped out over Steam), and newer technologies could be optionally ported into games both new and old, while not breaking any older titles working perfectly fine.

Source started development during the late stages of the original Half-Life, but its impact goes far beyond the series. Team Fortress 2, Counter-Strike: Global Offensive, Portal 1/2, and Left 4 Dead, from Valve alone, take up multiple slots on lists of the all-time best games. The Stanley Parable, Vampire: The Masquerade—Bloodlines, and a whole lot of other games used Source, too. Countless future game developers, level designers, and mod makers cut their teeth on the very open and freely available Source code tools.

And then, of course, where would we be as a society without Source Filmmaker and Garry’s Mod, which gave us Save as .dmx and Skibidi Toilet?

Half-Life: Alyx is a technical marvel of the VR age, but it’s pulled along by the emotional bonds of Alyx and Russell, and the quest to save Eli Vance.

Credit: Valve


A shooter with family dynamics

Novelist Marc Laidlaw has made it clear, multiple times, that he did not truly create the Half-Life story when he joined Valve; it was “all there when I got there, in embryo,” he told Rock Paper Shotgun. Laidlaw helped the developers tell their story through level design and wrote short, funny, unnerving dialogue.

For Half-Life 2, Laidlaw and the devs were tasked with creating some honest-to-goodness characters, something you didn’t get very often in first-person shooters (they were all dead in 1994’s System Shock). So in walked that father/daughter team of Eli and Alyx Vance, and the extended Black Mesa family, including folks like Dr. Kleiner.

These real and makeshift family members gave the mute protagonist Gordon Freeman stakes in wanting to fix the future. And Laidlaw’s “basic dramatic unit” set a precedent for lots of shooty-yet-soft-hearted games down the road: Mass Effect, The Last of Us, Gears of War, Red Dead Redemption, and far more.

Remember when a Boston-area medical manufacturing firm, run by a Half-Life fan, got everyone thinking a sequel was coming? Fun times. Credit: Black Mesa

Intense speculation about what Valve is actually doing

Another unique thing Laidlaw helped develop in PC gaming: intense grief and longing for a sequel that both does and does not exist, channeled through endless speculation about Valve’s processes and general radio silence.

Half-Life 2 got “Episodes” but never a true numbered Half-Life 3 sequel. The likelihood of 3 took a hit when Laidlaw unexpectedly announced his retirement in January 2016. Then it got even less likely, or maybe just sad, when Laidlaw posted a barely disguised “snapshot of a dream” of “Epistle 3” to his blog (since deleted, though preserved on Pastebin).

Laidlaw has expressed regret about this move. Fans have expressed regret that Half-Life 3 somehow seems even less likely, having seen Valve’s premier writer post such a seemingly despondent bit of primary source fan fiction.

“Fans of popular game eager for sequel” isn’t itself a unique thing, but Half-Life 3’s quantum existence is. Valve’s new-employee handbook from around 2012 is published on the web, and in it, you can read about the company’s boldly flat structure. To summarize greatly: Projects only get started if someone can get enough fellow employees to wheel their desks over and work on it with them. The company doesn’t take canceled or stalled games to heart; the handbook almost celebrates that killing Prospero was one of Valve’s first major decisions.

So the fact that Half-Life 3 exists only as something that hasn’t been formally canceled is uniquely frustrating. HL2’s last (chronological) chapter left off on a global-scale cliffhanger, and the only reason a sequel doesn’t exist is because too many other things are more appealing than developing a new first-person shooter. If you worked at Valve, you tell yourself, maybe you could change this! Maybe.

What, you’re telling me now it’s illegal to break in, take source code, and then ask for a job? This is a police state!

Credit: Valve


Source code leak drama

The Wikipedia pages “List of commercial video games with available source code” and its cousin “Later released source code” show that, up until 2003, most of the notable games whose source code became publicly available were either altruistic efforts at preservation or, for some reason, accidental inclusions of source code on demos or in dummy files on the game disc.

And then, in late 2003, Half-Life superfan Axel Gembe hacked into Valve’s servers, grabbed the Half-Life 2 source code as it existed at the time, and posted it to the web. The leak not only showed off parts of the game Valve wanted to keep under wraps, but it also showed just how far behind the game’s development was relative to the release date that had blown by weeks earlier. Valve’s response was typically atypical: It acknowledged the source code as real, asked its biggest fans for help, and then released the game a year later, to critical and commercial success.

The leak further cemented Valve’s status as a different kind of company, one with a particularly dedicated fanbase. It also seems to have taught companies a lesson about hardening their servers and development environments. Early builds of games still leak—witness Space Marine 2 this past July—but full source code leaks, coming from network intrusions, are something you don’t see quite so often.

Pre-loading a game before release

It would be hard to go back in time and tell our pre-broadband selves about pre-loading. You download entire games, over the Internet, and then they’re ready to play one second after the release time—no store lines, no drive back home, no overloaded servers or scratched discs. It seems like a remarkable bit of trust, though it’s really just a way to lessen server load on release day.

It’s hard to pin down which game first offered pre-loading in the modern sense, but HL2, being a major launch title for Valve’s Steam service and a title with heavy demand, definitely popularized the concept.
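The mechanics behind that trust are simple: the bulk of the data ships early in encrypted form, and only a small key is distributed at the release moment. Here is a toy sketch of the idea in Python, using a hypothetical hash-based XOR stream cipher purely for illustration (real launchers use proper cryptography and their own key-delivery infrastructure):

```python
import hashlib
from itertools import count

def keystream(key: bytes, length: int) -> bytes:
    """Derive a deterministic pseudo-random keystream from the key (toy construction)."""
    out = b""
    for block in count():
        out += hashlib.sha256(key + block.to_bytes(8, "big")).digest()
        if len(out) >= length:
            return out[:length]

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # XOR is its own inverse, so the same call encrypts and decrypts.
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

# Publisher side, before launch: ship the encrypted payload early.
release_key = b"released-at-launch"
game_files = b"all 2.6GB of Half-Life 2, in spirit"
preloaded = xor_cipher(game_files, release_key)
assert preloaded != game_files  # useless on disk until the key arrives

# Launch moment: a tiny key download unlocks the already-delivered payload.
unlocked = xor_cipher(preloaded, release_key)
assert unlocked == game_files
```

The payoff is that release day needs to move only a few bytes per customer instead of the whole game.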

Always-online for single-player games

Here’s one way that Half-Life 2 moved the industry forward that some folks might want to move back.

Technically, you can play HL2 without an Internet connection, and maybe for long periods of time. But for most people, playing HL2 without a persistent net connection involves activating the game on Steam, letting it fully update, and then turning on Steam’s “Offline Mode” to play it. There’s no time limit, but you need to keep Steam active while playing.

It’s not so much the particular connection demands of HL2 that make it notable, but the pathway that it, and Steam, created on which other companies moved ahead, treating gaming as something that, by default, happens with at least a connection, and preferably a persistent one.

It’s Game of the Year. Which year? Most of them, really (until Disco Elysium shows up).

Credit: Valve


A place on “All-time” video game rankings forever

Half-Life 2 introduced many ground-breaking things at once—deep facial animations and expressions, an accessible physics engine, a compelling global-scale but family-minded story—while also being a tremendously enjoyable game to play through. This has made it hard for anyone to suggest another game to go above it on any “All-time greatest games” list, especially those with a PC focus.

Not that they don’t try. PC Gamer has HL2 at No. 7 of 100, mostly because it has lost an understandable amount of “Hotness” in 20 years. IGN has it at No. 9 (while its descendant Portal 2 takes third place). Metacritic, however fallible, slots it in second place among all PC games.

So give Half-Life 2 even more credit for fostering innovation in the “arbitrary ranked list of games” genre. Rock Paper Shotgun bills its top 100 as the best games “to play on PC today,” as the site has “paid no mind to what was important or influential.” And yet Half-Life 2, as a game you can play in 2024, is still on that list. It’s really something, that game.


Kevin is a senior technology reporter at Ars Technica, covering open-source software, PC gaming, home automation, repairability, e-bikes, and tech history. He has previously worked at Lifehacker, Wirecutter, iFixit, and Carbon Switch.


As ABL Space departs launch, the 1-ton rocket wars have a clear winner

“Take a look around,” Piemont wrote. “US rockets fly every couple of days, with perfect success. It’s revolutionary. While there is still a need for more providers in certain market segments, those opportunities are decreasing. To succeed in such a demanding effort as scaling up an orbital launch program, you need deep motivation around your mission and potential impact, from many stakeholders. As the launch market matured, those motivations thinned and our path to making a big contribution as a commercial launch company narrowed considerably.”

Over the last half decade or so, three US companies have credibly vied to develop rockets in the 1-ton lift class. ABL has been competing alongside Relativity Space and Firefly to bring its rockets to market. ABL never took off. In March 2023, Relativity reached space with its Terran 1 rocket but, due to second-stage issues, failed to reach orbit. Within weeks, Relativity announced it was shifting its focus to a medium-lift rocket, Terran R. Since then, the California-based launch company has moved along, but there are persistent rumors that it faces a cash crunch.

Of the three, only Firefly has enjoyed success. The company’s Alpha rocket has reached orbit on multiple occasions, and just this week Firefly announced that it completed a $175 million Series D fundraising round, resulting in a valuation of more than $2 billion. The 1-ton rocket wars are over: Firefly has won.

Focusing on defense

Just as Relativity pivoted away from this class of rocket, ABL will now also shift its focus—this time in an even more radical direction.

US defense spending on missile production and missile defense has skyrocketed since Russia’s invasion of Ukraine in 2022, and ABL will now seek to tap into this potentially lucrative market.

“We have made the decision to focus our efforts on national defense, and specifically on missile defense technologies,” Piemont said. “We’ll have more to share soon on our roadmap and traction in this area. For now, suffice to say we see considerable opportunity to leverage RS1, GS0, the E2 engine, and the rest of the technology we’ve developed to date to enable a new type of research effort around missile defense technologies.”


Review: Amazon’s 2024 Kindle Paperwhite makes the best e-reader a little better

A fast Kindle?

From left to right: 2024 Paperwhite, 2021 Paperwhite, and 2018 Paperwhite. Note not just the increase in screen size, but also how the screen corners get a little more rounded with each release. Credit: Andrew Cunningham

I don’t want to oversell how fast the new Kindle is, because it’s still not like an E-Ink screen can really compete with an LCD or OLED panel for smoothness of animations or UI responsiveness. But even compared to the 2021 Paperwhite, tapping buttons, opening menus, opening books, and turning pages feels considerably snappier—not quite instantaneous, but without the unexplained pauses and hesitation that longtime Kindle owners will be accustomed to. For those who type out notes in their books, even the onscreen keyboard feels fluid and responsive.

Compared to the 2018 Paperwhite (again, the first waterproofed model, and the last one with a 6-inch screen and micro USB port), the difference is night and day. While it still feels basically fine for reading books, I find that the older Kindle can sometimes pause for so long when opening menus or switching between things that I wonder if it’s still working or whether it’s totally locked up and frozen.

“Kindle benchmarks” aren’t really a thing, but I attempted to quantify the performance improvements by running some old browser benchmarks using the Kindle’s limited built-in web browser and Google’s ancient Octane 2.0 test—the 2018, 2021, and 2024 Kindles are all running the same software update here (5.17.0), so this should be a reasonably good apples-to-apples comparison of single-core processor speed.

The new Kindle is actually way faster than older models. Credit: Andrew Cunningham

The 2021 Kindle was roughly 30 percent faster than the 2018 Kindle. The new Paperwhite is nearly twice as fast as the 2021 Paperwhite, and well over twice as fast as the 2018 Paperwhite. That alone is enough to explain the tangible difference in responsiveness between the devices.
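Those ratios compound, which is why the gap to the oldest model is so stark. A quick sanity check, with scores normalized to the 2018 model since the article reports ratios rather than raw Octane numbers:

```python
# Illustrative relative scores (normalized; the article gives ratios, not raw numbers).
kindle_2018 = 1.0
kindle_2021 = kindle_2018 * 1.3   # "roughly 30 percent faster" than 2018
kindle_2024 = kindle_2021 * 2.0   # "nearly twice as fast" as 2021

# Relative to the 2018 Paperwhite, the 2024 model lands around 2.6x:
speedup_vs_2018 = kindle_2024 / kindle_2018
print(round(speedup_vs_2018, 2))  # → 2.6, i.e. "well over twice as fast"
```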

Turning to the new Paperwhite’s other improvements: compared side by side, the new screen is appreciably bigger, more noticeably so than the 0.2-inch size difference might suggest. And it doesn’t make the Paperwhite much larger, though it is a tiny bit taller in a way that will wreck compatibility with existing cases. But you only really appreciate the upgrade if you’re coming from one of the older 6-inch Kindles.


I, too, installed an open source garage door opener, and I’m loving it


Open source closed garage

OpenGarage restored my home automations and gave me a whole bunch of new ideas.

Hark! The top portion of a garage door has entered my view, and I shall alert my owner to it. Credit: Kevin Purdy

Like Ars Senior Technology Editor Lee Hutchinson, I have a garage. The door on that garage is opened and closed by a device made by a company that, as with Lee’s, offers you a way to open and close it with a smartphone app. But that app doesn’t work with my preferred home automation system, Home Assistant, and also looks and works like an app made by a garage door company.

I had looked into the ratgdo Lee installed, and raved about, but hooking it up to my particular Genie/Aladdin system would have required installing limit switches. So I instead installed an OpenGarage unit ($50 plus shipping). My garage opener now works with Home Assistant (and thereby pretty much anything else), it’s not subject to the whims of API access, and I’ve got a few ideas how to make it even better. Allow me to walk you through what I did, why I did it, and what I might do next.

Thanks, I’ll take it from here, Genie

Genie, maker of my Wi-Fi-capable garage door opener (sold as an “Aladdin Connect” system), is not in the same boat as the Chamberlain/myQ setup that inspired Lee’s project. There was a working Aladdin Connect integration in Home Assistant, until the company changed its API in January 2024. Genie said it would release its own official Home Assistant integration in June, and it did, but then it was quickly pulled back, seemingly for licensing issues. Since then, no updates on the matter. (I have emailed Genie for comment and will update this post if I receive a reply.)

This is not egregious behavior, at least on the scale of garage door opener firms. And Aladdin’s app works with Google Home and Amazon Alexa, but not with Home Assistant or my secondary/lazy option, HomeKit/Apple Home. It also logs me out “for security” more often than I’d like and tells me this only after an iPhone shortcut refuses to fire. It has some decent features, but without deeper integrations, I can’t do things like have the brighter ceiling lights turn on when the door opens or flash indoor lights if the garage door stays open too long. At least not without Google or Amazon.

I’ve seen OpenGarage passed around the Home Assistant forums and subreddits over the years. It is, as the name implies, fully open source: hardware design, firmware, app code, API—everything. It is a tiny ESP board with an ultrasonic distance sensor and a circuit relay attached. You can control and monitor it from a web browser (mobile or desktop), through IFTTT or MQTT, and, with the latest firmware, via email alerts. I decided to pull out the 6-foot ladder and give it a go.

Prototypes of the OpenGarage unit. To me, they look like little USB-powered owls, just with very stubby wings. Credit: OpenGarage

Installing the little watching owl

You generally mount the OpenGarage unit to the roof of your garage, so the distance sensor can detect if your garage door has rolled up in front of it. There are options for mounting with magnetic contact sensors or a side view of a roll-up door, or you can figure out some other way in which two different sensor depth distances would indicate an open or closed door. If you’ve got a Security+ 2.0 door (the kind with the yellow antenna, generally), you’ll need an adapter, too.

The toughest part of an overhead install is finding a spot that gives the unit a view of your garage door, not too close to rails or other obstructing objects, but then close enough for the contact wires and USB micro cable to reach. Ideally, too, it has a view of your car when the door is closed and the car is inside, so it can report its presence. I’ve yet to find the right thing to do with the “car is inside or not” data, but the seed is planted.
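In other words, the door-and-car logic reduces to comparing one ceiling-to-obstacle distance reading against calibrated cutoffs. A minimal sketch of that classification, with hypothetical thresholds (yours would depend on your ceiling height and car roof):

```python
def read_state(distance_cm: float,
               door_thresh_cm: float = 50,
               car_thresh_cm: float = 150) -> tuple[str, str]:
    """Classify door and vehicle state from one ceiling-mounted reading.

    Thresholds are hypothetical; you'd calibrate them to your garage.
    A short reading means the rolled-up door sits right under the sensor;
    a mid-range reading means the door is down but a car roof is below;
    a long reading means the sensor sees all the way to the floor.
    """
    if distance_cm < door_thresh_cm:
        return ("open", "unknown")   # the rolled-up door blocks the view of the car
    if distance_cm < car_thresh_cm:
        return ("closed", "present")
    return ("closed", "absent")

print(read_state(30))    # → ('open', 'unknown')
print(read_state(100))   # → ('closed', 'present')
print(read_state(220))   # → ('closed', 'absent')
```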

OpenGarage’s introduction and explanation video.

My garage setup, like most of them, is pretty simple. There’s a big red glowing button on the wall near the door, and there are two very thin wires running from it to the opener. On the opener, there are four ports that you can open up with a screwdriver press. Most of the wires are headed to the safety sensor at the door bottom, while two come in from the opener button. After stripping a bit of wire to expose more cable, I pressed the contact wires from the OpenGarage into those same opener ports.


The wire terminal on my Genie garage opener. The green and pink wires lead to the OpenGarage unit. Credit: Kevin Purdy

After that, I connected the wires to the OpenGarage unit’s screw terminals, then did some pencil work on the garage ceiling to figure out how far I could run the contact and micro-USB power cable, getting the proper door view while maintaining some right-angle sense of order up there. When I had reached a decent compromise between cable tension and placement, I screwed the sensor into an overhead stud and used a staple gun to secure the wires. It doesn’t look like a pro installed it, but it’s not half bad.


Where I ended up installing my OpenGarage unit. Key points: Above the garage door when open, view of the car below, not too close to rails, able to reach power and opener contact. Credit: Kevin Purdy

A very versatile board

If you’ve got everything placed and wired up correctly, opening the OpenGarage access point or IP address should give you an interface that shows you the status of your garage, your car (optional), and its Wi-Fi and external connections.


The landing screen for the OpenGarage. You can only open the door or change settings if you know the device key (which you should change immediately). Credit: Kevin Purdy

It’s a handy webpage and a basic opener (provided you know the secret device key you set), but OpenGarage is more powerful in how it uses that data. OpenGarage’s device can keep a cloud connection open to Blynk or the maker’s own OpenThings.io cloud server. You can hook it up to MQTT or an IFTTT channel. It can send you alerts when your garage has been open a certain amount of time or if it’s open after a certain time of day.
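If you’d rather script the device than click around, that same data is reachable over plain HTTP. A small Python sketch of talking to it—note that the endpoint names (`/jc` for status, `/cc` for control) and JSON fields (`dist`, `door`, `vehicle`) are my reading of OpenGarage’s published API, so verify them against your firmware version’s docs:

```python
import json
from urllib.parse import urlencode

def status_url(host: str) -> str:
    # /jc returns the controller's variables as JSON (per OpenGarage's API docs).
    return f"http://{host}/jc"

def click_url(host: str, device_key: str) -> str:
    # /cc with the device key and click=1 simulates one press of the wall button.
    return f"http://{host}/cc?" + urlencode({"dkey": device_key, "click": 1})

def parse_status(payload: str) -> dict:
    """Reduce the raw JSON status to the fields worth automating on."""
    raw = json.loads(payload)
    return {
        "door_open": bool(raw.get("door")),       # 1 = open, 0 = closed
        "vehicle_present": raw.get("vehicle") == 1,
        "distance_cm": raw.get("dist"),
    }

# Example payload in the shape the /jc endpoint reports:
sample = '{"dist": 100, "door": 0, "vehicle": 1, "rcnt": 3}'
print(parse_status(sample))
print(click_url("192.168.1.77", "opendoor"))
```

Fetching `status_url(...)` with any HTTP client and feeding the body to `parse_status` is enough to drive alerts or dashboards of your own.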

Screenshot showing 5 sensors: garage, distance, restart, vehicle, and signal strength.

You’re telling me you can just… see the state of these things, at all times, on your own network? Credit: Kevin Purdy

You really don’t need a corporate garage coder

For me, the greatest benefit is in hooking OpenGarage up to Home Assistant. I’ve added an opener button to my standard dashboard (one that requires a long-press or two actions to open). I’ve restored the automation that turns on the overhead bulbs for five minutes when the garage door opens. And I can dig in if I want, like alerting me that it’s Monday night at 10 pm and I’ve yet to open the garage door, indicating I forgot to put the trash out. Or maybe some kind of NFC tag to allow for easy opening while on a bike, if that’s not a security nightmare (it might be).

Not for nothing, but OpenGarage is also a deeply likable bit of indie kit. It’s a two-person operation, with Ray Wang building on his work with the open and handy OpenSprinkler project, trading Arduino for ESP8266, and doing some 3D printing to fit the sensors and switches, and Samer Albahra providing mobile app, documentation, and other help. Their enthusiasm for DIY home control has likely brought out the same in others and certainly in me.



Microsoft finally releases generic install ISOs for the Arm version of Windows

For some PC buyers, doing a clean install of Windows right out of the box is part of the setup ritual. But for Arm-based PCs, including the Copilot+ PCs with Snapdragon X Plus and Elite chips in them, it hasn’t been possible in the same way. Microsoft (mostly) hasn’t offered generic install media that can be used to reinstall Windows on an Arm PC from scratch.

Microsoft is fixing that today—the company finally has a download page for the official Arm release of Windows 11, linked to but separate from the ISOs for the x86 versions of Windows. These are useful not just for because-I-feel-like-it clean installs but also for reinstalling Windows after you’ve upgraded your SSD and for setting up Windows virtual machines on Arm-based PCs and Macs.

Previously, Microsoft did offer install media for some Windows Insider Preview Arm builds, though these are for beta versions of Windows that may or may not be feature-complete or stable. Various apps, scripts, and websites also exist to grab files from Microsoft’s servers and build “unofficial” ISOs for the Arm version of Windows, though obviously this is more complicated than just downloading a single file directly.


ChatGPT’s success could have come sooner, says former Google AI researcher


A co-author of Attention Is All You Need reflects on ChatGPT’s surprise and Google’s conservatism.

Jakob Uszkoreit Credit: Jakob Uszkoreit / Getty Images

In 2017, eight machine-learning researchers at Google released a groundbreaking research paper called Attention Is All You Need, which introduced the Transformer AI architecture that underpins almost all of today’s high-profile generative AI models.

The Transformer has made a key component of the modern AI boom possible by translating (or transforming, if you will) input chunks of data called “tokens” into another desired form of output using a neural network. Variations of the Transformer architecture power language models like GPT-4o (and ChatGPT), audio synthesis models that run Google’s NotebookLM and OpenAI’s Advanced Voice Mode, video synthesis models like Sora, and image synthesis models like Midjourney.

At TED AI 2024 in October, one of those eight researchers, Jakob Uszkoreit, spoke with Ars Technica about the development of transformers, Google’s early work on large language models, and his new venture in biological computing.

In the interview, Uszkoreit revealed that while his team at Google had high hopes for the technology’s potential, they didn’t quite anticipate its pivotal role in products like ChatGPT.

The Ars interview: Jakob Uszkoreit

Ars Technica: What was your main contribution to the Attention Is All You Need paper?

Jakob Uszkoreit (JU): It’s spelled out in the footnotes, but my main contribution was to propose that it would be possible to replace recurrence [from Recurrent Neural Networks] in the dominant sequence transduction models at the time with the attention mechanism, or more specifically self-attention. And that it could be more efficient and, as a result, also more effective.
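For readers who haven’t seen it spelled out, the self-attention Uszkoreit describes lets every token weigh every other token directly, with no recurrent state carried position by position. A minimal single-head sketch in Python with NumPy (random matrices stand in for learned projections; this is an illustration, not the paper’s full multi-head architecture):

```python
import numpy as np

def self_attention(x: np.ndarray, wq: np.ndarray, wk: np.ndarray,
                   wv: np.ndarray) -> np.ndarray:
    """Single-head scaled dot-product self-attention over a token sequence.

    x: (seq_len, d_model) token embeddings; wq/wk/wv: projection matrices.
    Every output position mixes information from *all* positions at once,
    which is what lets the model drop recurrence entirely.
    """
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(k.shape[-1])           # (seq_len, seq_len)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax over positions
    return weights @ v

rng = np.random.default_rng(0)
seq_len, d = 4, 8
x = rng.normal(size=(seq_len, d))
wq, wk, wv = (rng.normal(size=(d, d)) for _ in range(3))

out = self_attention(x, wq, wk, wv)
print(out.shape)  # → (4, 8): one mixed vector per input token
```

Because the positions are processed in one matrix product rather than one at a time, the whole sequence parallelizes on hardware in a way recurrent networks could not.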

Ars: Did you have any idea what would happen after your group published that paper? Did you foresee the industry it would create and the ramifications?

JU: First of all, I think it’s really important to keep in mind that when we did that, we were standing on the shoulders of giants. And it wasn’t just that one paper, really. It was a long series of works by some of us and many others that led to this. And so to look at it as if this one paper then kicked something off or created something—I think that is taking a view that we like as humans from a storytelling perspective, but that might not actually be that accurate of a representation.

My team at Google was pushing on attention models for years before that paper. It’s a lot longer of a slog with much, much more, and that’s just my group. Many others were working on this, too, but we had high hopes that it would push things forward from a technological perspective. Did we think that it would play a role in really enabling, or at least apparently, seemingly, flipping a switch when it comes to facilitating products like ChatGPT? I don’t think so. I mean, to be very clear in terms of LLMs and their capabilities, even around the time we published the paper, we saw phenomena that were pretty staggering.

We didn’t get those out into the world in part because of what really is maybe a notion of conservatism around products at Google at the time. But we also, even with those signs, weren’t that confident that stuff in and of itself would make that compelling of a product. But did we have high hopes? Yeah.

Ars: Since you knew there were large language models at Google, what did you think when ChatGPT broke out into a public success? “Damn, they got it, and we didn’t?”

JU: There was a notion of, well, “that could have happened.” I think it was less of a, “Oh dang, they got it first” or anything of the like. It was more of a “Whoa, that could have happened sooner.” Was I still amazed by just how quickly people got super creative using that stuff? Yes, that was just breathtaking.


Jakob Uszkoreit presenting at TED AI 2024. Credit: Benj Edwards

Ars: You weren’t at Google at that point anymore, right?

JU: I wasn’t anymore. And in a certain sense, you could say the fact that Google wouldn’t be the place to do that factored into my departure. I left not because of what I didn’t like at Google as much as I left because of what I felt I absolutely had to do elsewhere, which is to start Inceptive.

But it was really motivated by just an enormous, not only opportunity, but a moral obligation in a sense, to do something that was better done outside in order to design better medicines and have very direct impact on people’s lives.

Ars: The funny thing with ChatGPT is that I was using GPT-3 before that. So when ChatGPT came out, it wasn’t that big of a deal to some people who were familiar with the tech.

JU: Yeah, exactly. If you’ve used those things before, you could see the progression and you could extrapolate. When OpenAI developed the earliest GPTs with Alec Radford and those folks, we would talk about those things despite the fact that we weren’t at the same companies. And I’m sure there was this kind of excitement, how well-received the actual ChatGPT product would be by how many people, how fast. That still, I think, is something that I don’t think anybody really anticipated.

Ars: I didn’t either when I covered it. It felt like, “Oh, this is a chatbot hack of GPT-3 that feeds its context in a loop.” And I didn’t think it was a breakthrough moment at the time, but it was fascinating.

JU: There are different flavors of breakthroughs. It wasn’t a technological breakthrough. It was a breakthrough in the realization that at that level of capability, the technology had such high utility.

That, and the realization that, because you always have to take into account how your users actually use the tool that you create, and you might not anticipate how creative they would be in their ability to make use of it, how broad those use cases are, and so forth.

That is something you can sometimes only learn by putting something out there, which is also why it is so important to remain experiment-happy and to remain failure-happy. Because most of the time, it’s not going to work. But some of the time it’s going to work—and very, very rarely it’s going to work like [ChatGPT did].

Ars: You’ve got to take a risk. And Google didn’t have an appetite for taking risks?

JU: Not at that time. But if you think about it, if you look back, it’s actually really interesting. Google Translate, which I worked on for many years, was actually similar. When we first launched Google Translate, the very first versions, it was a party joke at best. And we took it from that to being something that was a truly useful tool in not that long a period. Over the course of those years, the stuff it output was at times so embarrassingly bad, but Google did it anyway because it was the right thing to try. But that was around 2008, 2009, 2010.

Ars: Do you remember AltaVista’s Babel Fish?

JU: Oh yeah, of course.

Ars: When that came out, it blew my mind. My brother and I would do this thing where we would translate text back and forth between languages for fun because it would garble the text.

JU: It would get worse and worse and worse. Yeah.

Programming biological computers

After his time at Google, Uszkoreit co-founded Inceptive to apply deep learning to biochemistry. The company is developing what he calls “biological software,” where AI compilers translate specified behaviors into RNA sequences that can perform desired functions when introduced to biological systems.

Ars: What are you up to these days?

JU: In 2021 we co-founded Inceptive in order to use deep learning and high throughput biochemistry experimentation to design better medicines that truly can be programmed. We think of this as really just step one in the direction of something that we call biological software.

Biological software is a little bit like computer software in that you have some specification of the behavior that you want, and then you have a compiler that translates that into a piece of computer software that then runs on a computer exhibiting the functions or the functionality that you specify.

You specify a piece of a biological program and you compile that, but not with an engineered compiler, because life hasn’t been engineered the way computers have been engineered. Instead, with a learned AI compiler, you translate or compile that into molecules that, when inserted into biological systems, organisms, our cells, exhibit the functions you’ve programmed into them.

A pharmacist holds a bottle containing Moderna’s bivalent COVID-19 vaccine. Credit: Getty | Mel Melcon

Ars: Is that anything like how the mRNA COVID vaccines work?

JU: A very, very simple example of that are the mRNA COVID vaccines where the program says, “Make this modified viral antigen” and then our cells make that protein. But you could imagine molecules that exhibit far more complex behaviors. And if you want to get a picture of how complex those behaviors could be, just remember that RNA viruses are just that. They’re just an RNA molecule that when entering an organism exhibits incredibly complex behavior such as distributing itself across an organism, distributing itself across the world, doing certain things only in a subset of your cells for a certain period of time, and so on and so forth.

And so you can imagine that if we managed to even just design molecules with a teeny tiny fraction of such functionality, of course with the goal not of making people sick, but of making them healthy, it would truly transform medicine.

Ars: How do you not accidentally create a monster RNA sequence that just wrecks everything?

JU: The amazing thing is that medicine for the longest time has existed, in a certain sense, outside of science. It wasn’t truly understood, and we still often don’t truly understand the actual mechanisms of action of our medicines.

As a result, humanity had to develop all of these safeguards and clinical trials. And even before you enter the clinic, all of these empirical safeguards prevent us from accidentally doing [something dangerous]. Those systems have been in place for as long as modern medicine has existed. And so we’re going to keep using those systems, and of course with all the diligence necessary. We’ll start with very small systems, individual cells in future experimentation, and follow the same established protocols that medicine has had to follow all along in order to ensure that these molecules are safe.

Ars: Thank you for taking the time to do this.

JU: No, thank you.


Benj Edwards is Ars Technica’s Senior AI Reporter and founder of the site’s dedicated AI beat in 2022. He’s also a widely-cited tech historian. In his free time, he writes and records music, collects vintage computers, and enjoys nature. He lives in Raleigh, NC.


ibm-boosts-the-amount-of-computation-you-can-get-done-on-quantum-hardware

IBM boosts the amount of computation you can get done on quantum hardware

By making small adjustments to the frequency that the qubits are operating at, it’s possible to avoid these problems. This can be done when the Heron chip is being calibrated before it’s opened for general use.

Separately, the company has done a rewrite of the software that controls the system during operations. “After learning from the community, seeing how to run larger circuits, [we were able to] almost better define what it should be and rewrite the whole stack towards that,” Gambetta said. The result is a dramatic speed-up. “Something that took 122 hours now is down to a couple of hours,” he told Ars.

Since people are paying for time on this hardware, that’s good for customers now. But it could also pay off in the longer run: some errors occur randomly, so less time spent on a calculation can mean fewer errors.

Deeper computations

Despite all those improvements, errors are still likely during any significant calculations. While it continues to work toward developing error-corrected qubits, IBM is focusing on what it calls error mitigation, which it first detailed last year. As we described it then:

“The researchers turned to a method where they intentionally amplified and then measured the processor’s noise at different levels. These measurements are used to estimate a function that produces similar output to the actual measurements. That function can then have its noise set to zero to produce an estimate of what the processor would do without any noise at all.”
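The approach that quote describes is commonly known as zero-noise extrapolation. As a rough illustration of the idea only, not IBM’s actual implementation, here is a minimal sketch with made-up measurement data: expectation values measured at deliberately amplified noise levels are fit with a simple function, which is then evaluated at a noise level of zero.

```python
import numpy as np

# Hypothetical expectation values of some observable, measured while the
# circuit's noise is deliberately amplified (scale 1.0 = the native noise).
noise_scales = np.array([1.0, 1.5, 2.0, 3.0])
measured = np.array([0.82, 0.74, 0.67, 0.55])  # made-up numbers

# Fit a simple model (here a quadratic) to the noisy measurements...
coeffs = np.polyfit(noise_scales, measured, deg=2)

# ...and evaluate it at noise scale 0 to estimate the noiseless result.
zero_noise_estimate = np.polyval(coeffs, 0.0)
print(f"Extrapolated zero-noise value: {zero_noise_estimate:.3f}")
```

In practice the choice of fitting function matters a great deal; this toy quadratic just shows the extrapolation step, and the computational expense of doing it well at scale is exactly the bottleneck discussed below.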

The problem here is that using the function is computationally difficult, and the difficulty increases with the qubit count. So, while it’s still easier to do error mitigation calculations than simulate the quantum computer’s behavior on the same hardware, there’s still the risk of it becoming computationally intractable. But IBM has also taken the time to optimize that, too. “They’ve got algorithmic improvements, and the method that uses tensor methods [now] uses the GPU,” Gambetta told Ars. “So I think it’s a combination of both.”


gog’s-preservation-program-is-the-drm-free-store-refocusing-on-the-classics

GOG’s Preservation Program is the DRM-free store refocusing on the classics

The classic PC games market is “in a sorry state,” according to DRM-free and classic-minded storefront GOG. Small games that aren’t currently selling get abandoned, and compatibility issues arise as technology moves forward or as one-off development ideas age like milk.

Classic games are only 20 percent of GOG’s catalog, and the firm hasn’t actually called itself “Good Old Games” in 12 years. And yet, today, GOG announces that it is making “a significant commitment of resources” toward a new GOG Preservation Program. It starts with 100 games for which GOG’s own developers are working to create current and future compatibility, keeping them DRM-free and giving them ongoing tech support, along with granting them a “Good Old Game: Preserved by GOG” stamp.

Selection of games available in GOG’s Preservation Program.

Credit: GOG

GOG is not shifting its mission of providing a DRM-free alternative to Steam, Epic, and other PC storefronts, at least not entirely. But it is demonstrably excited about a new focus that ties back to its original name, inspired in some part by its work on Alpha Protocol.

“We think we can significantly impact the classics industry by focusing our resources on it and creating superior products,” writes Arthur Dejardin, head of sales and marketing at GOG. “If we wanted to spread the DRM-free gospel by focusing on getting new AAA games on GOG instead, we would make little progress with the same amount of effort and money (we’ve been trying various versions of that for the last 5 years).”

GOG Preservation Program’s launch video.

Getting knights, demons, and zombies up to snuff

What kind of games? Scanning the list of Good Old Games, most of them are, by all accounts, both good and old. Personally, I’m glad to see the Jagged Alliance games, System Shock 2, Warcraft I & II, Dungeon Keeper Gold and Theme Park, SimCity 3000 Unlimited, and the Wing Commander series (particularly, personally, Privateer). Most of them are, understandably, Windows-only, though Mac support extends to 34 titles so far, and Linux may pick up many more through Proton compatibility beyond the 19 native titles to date.


tesla-is-recalling-2,431-cybertrucks,-and-this-time-there’s-no-software-fix

Tesla is recalling 2,431 Cybertrucks, and this time there’s no software fix

Tesla has issued yet another recall for the angular, unpainted Cybertruck. This is the sixth recall affecting the model-year 2024 Cybertruck to be issued since January, and it affects 2,431 vehicles in total. And this time, there’s no fix being delivered by a software update over the air—owners will need to have their pickup trucks physically repaired.

The problem is a faulty drive unit inverter, which stranded a Cybertruck at the end of July. Tesla says it started investigating the problem a week later and by late October arrived at the conclusion that it had made a bad batch of inverters that it used in production vehicles from November 6, 2023, until July 30, 2024. After a total of five failures and warranty claims that the company says “may be related to the condition,” Tesla issued a recall.

Tesla is often able to fix defects in its products by pushing out new software, something that leads many fans of the brand to get defensive over the topic. Although there is no requirement for a safety recall to involve some kind of hardware fix—20 percent of all car recalls are now software fixes—in this case, the solution to the failing inverters very much requires a technician to work on the affected trucks.

Tesla says that starting on December 9, it will begin replacing the faulty inverters with new ones that have components that won’t malfunction.


this-elephant-figured-out-how-to-use-a-hose-to-shower

This elephant figured out how to use a hose to shower

And the hose-showering behavior was “lateralized,” that is, Mary preferred targeting her left body side more than her right. (Yes, Mary is a “left-trunker.”) Mary even adapted her showering behavior to the diameter of the hose: she preferred showering with a 24-mm hose over a 13-mm hose, and she preferred to shower with her trunk rather than with a 32-mm hose.

It’s not known where Mary learned to use a hose, but the authors suggest that elephants might have an intuitive understanding of how hoses work because of the similarity to their trunks. “Bathing and spraying themselves with water, mud, or dust are very common behaviors in elephants and important for body temperature regulation as well as skin care,” they wrote. “Mary’s behavior fits with other instances of tool use in elephants related to body care.”

Perhaps even more intriguing was Anchali’s behavior. While Anchali did not use the hose to shower, she nonetheless exhibited complex behavior in manipulating the hose: lifting it, kinking the hose, regrasping the kink, and compressing the kink. The latter, in particular, often resulted in reduced water flow while Mary was showering. Anchali eventually figured out how to further disrupt the water flow by placing her trunk on the hose and lowering her body onto it. Control experiments were inconclusive about whether Anchali was deliberately sabotaging Mary’s shower; the two elephants had been at odds and behaved aggressively toward each other at shower times. But similar cognitively complex behavior has been observed in elephants.

“When Anchali came up with a second behavior that disrupted water flow to Mary, I became pretty convinced that she is trying to sabotage Mary,” Brecht said. “Do elephants play tricks on each other in the wild? When I saw Anchali’s kink and clamp for the first time, I broke out in laughter. So, I wonder, does Anchali also think this is funny, or is she just being mean?”

Current Biology, 2024. DOI: 10.1016/j.cub.2024.10.017 (About DOIs).


new-secret-math-benchmark-stumps-ai-models-and-phds-alike

New secret math benchmark stumps AI models and PhDs alike

Epoch AI allowed Fields Medal winners Terence Tao and Timothy Gowers to review portions of the benchmark. “These are extremely challenging,” Tao said in feedback provided to Epoch. “I think that in the near term basically the only way to solve them, short of having a real domain expert in the area, is by a combination of a semi-expert like a graduate student in a related field, maybe paired with some combination of a modern AI and lots of other algebra packages.”


A chart showing AI models’ limited success on the FrontierMath problems, taken from Epoch AI’s research paper. Credit: Epoch AI

To aid in the verification of correct answers during testing, the FrontierMath problems must have answers that can be automatically checked through computation, either as exact integers or mathematical objects. The designers made problems “guessproof” by requiring large numerical answers or complex mathematical solutions, with less than a 1 percent chance of correct random guesses.
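For a sense of what that automated checking looks like, here is a toy stand-in (the actual FrontierMath problems are research-level and kept secret, so this sample problem is purely illustrative): once an answer is an exact integer, grading reduces to an equality test against a reference value computed by code.

```python
# Toy stand-in for FrontierMath-style automated grading: the answer is an
# exact integer, so verification is a simple equality check. The real
# problems are far harder; this sample problem is merely illustrative.
def check_answer(submitted: int, reference: int) -> bool:
    return submitted == reference

# Sample "problem": the sum of all primes below 100, computed with a sieve.
def sum_primes_below(n: int) -> int:
    sieve = [True] * n
    sieve[0:2] = [False, False]
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = [False] * len(sieve[i * i::i])
    return sum(i for i, is_prime in enumerate(sieve) if is_prime)

print(check_answer(sum_primes_below(100), 1060))
```

The “guessproof” requirement is about the answer space: when the expected answer is a large or structurally complex value rather than, say, a multiple-choice option, a model cannot score points by lucky guessing.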

Mathematician Evan Chen, writing on his blog, explained how he thinks that FrontierMath differs from traditional math competitions like the International Mathematical Olympiad (IMO). Problems in that competition typically require creative insight while avoiding complex implementation and specialized knowledge, he says. But for FrontierMath, “they keep the first requirement, but outright invert the second and third requirement,” Chen wrote.

While IMO problems avoid specialized knowledge and complex calculations, FrontierMath embraces them. “Because an AI system has vastly greater computational power, it’s actually possible to design problems with easily verifiable solutions using the same idea that IOI or Project Euler does—basically, ‘write a proof’ is replaced by ‘implement an algorithm in code,'” Chen explained.

The organization plans regular evaluations of AI models against the benchmark while expanding its problem set. They say they will release additional sample problems in the coming months to help the research community test their systems.
