Features


A few weeks with the Pocket 386, an early-‘90s-style, half-busted retro PC

The Pocket 386 is fun for a while, but the shortcomings and the broken stuff eventually start to wear on you.


Andrew Cunningham

The Book 8088 was a neat experiment, but as a clone of the original IBM PC, it was pretty limited in what it could do. Early MS-DOS apps and games worked fine, and the very first Windows versions ran… technically. Just not the later ones that could actually run Windows software.

The Pocket 386 laptop is a lot like the Book 8088, but fast-forwarded to the next huge evolution in the PC’s development. Intel’s 80386 processors not only jumped from 16-bit operation to 32-bit, but they implemented different memory modes that could take advantage of many megabytes of memory while maintaining compatibility with apps that only recognized the first 640KB.

Expanded software compatibility makes this one more appealing to retro-computing enthusiasts since (like a vintage 386) it will do just about everything an 8088 can do, with the added benefit of a whole lot more speed and much better compatibility with seminal versions of Windows. It’s much more convenient to have all this hardware squeezed into a little laptop than in a big, clunky vintage desktop with slowly dying capacitors in it.

But as with the Book 8088, there are implementation problems. Some of them are dealbreakers. The Pocket 386 is still an interesting curio, but some of what’s broken makes it too unreliable and frustrating to really be usable as a vintage system once the novelty wears off.

The 80386

A close-up of the Pocket 386's tiny keyboard.


Andrew Cunningham

When we talked about the Book 8088, most of our discussion revolved around a single PC: the 1981 IBM PC 5150, the original machine from which a wave of “IBM compatibles” and the modern PC industry sprung. Restricted to 1MB of RAM and 16-bit applications—most of which could only access the first 640KB of memory—the limits of an 8088-based PC mean there are only so many operating systems and applications you can realistically run.

The 80386 is seven years newer than the original 8086, and it’s capable of a whole lot more. The CPU came with many upgrades over the 8086 and 80286, but there are three that are particularly relevant for us: for one, it’s a 32-bit processor capable of addressing up to 4GB of RAM (strictly in theory, for vintage software). It introduced a much-improved “protected mode” that allowed for improved multitasking and the use of virtual memory. And it also included a so-called virtual 8086 mode, which could run multiple “real mode” MS-DOS applications simultaneously from within an operating system running in protected mode.

The result is a chip that is backward-compatible with the vast majority of software that could run on an 8088- or 8086-based PC—notwithstanding certain games or apps written specifically for the old IBM PC’s 4.77 MHz clock speed or other quirks particular to its hardware—but with the power necessary to credibly run some operating systems with graphical user interfaces.

Moving on to the Pocket 386’s specific implementation of the CPU, this is an 80386SX, the weaker of the two 386 variants. You might recall that the Intel 8088 CPU was still a 16-bit processor internally, but it used an 8-bit external bus to cut down on costs, retaining software compatibility with the 8086 but reducing the speed of communication between the CPU and other components in the system. The 386SX is the same way—like the more powerful 80386DX, it remained a 32-bit processor internally, capable of running 32-bit software. But it was connected to the rest of the system by a 16-bit external bus, which limited its performance. The amount of RAM it could address was also limited to 16MB.
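A quick bit of arithmetic shows where those ceilings come from (the SX’s 24-bit external address bus is a standard spec, not something stated above): a full 32-bit address space covers 2^32 bytes, or 4GB, while 24 address lines can only reach 2^24 bytes, or 16MB.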

(This DX/SX split is the source of some confusion; in the 486 generation, the DX suffix was used to denote a chip with a built-in floating-point unit, while 486SX processors didn’t include one. Both 386 variants still required a separate FPU for people who wanted one, the Intel 80387.)

While the Book 8088 uses vintage PC processors (usually a NEC V20, a pin-compatible 8088 upgrade), the Pocket 386 uses a slightly different version of the 80386SX core that wouldn’t have appeared in actual consumer PCs. Manufactured by a company called ALi, the M6117C is a late-’90s version of the 386SX core combined with a chipset intended for embedded systems rather than consumer PCs.


The Summit 1 is not peak mountain bike, but it’s a great all-rounder

Image of a blue hardtail mountain bike leaning against a grey stone wall.

John Timmer

As I mentioned in another recent review, I’ve been checking out electric hardtail mountain bikes lately. Their relative simplicity compared to full-suspension models tends to allow companies to hit a lower price point without sacrificing much in terms of component quality, potentially opening up mountain biking to people who might not otherwise consider it. The first e-hardtail I checked out, Aventon’s Ramblas, fits this description to a T, offering a solid trail riding experience at a price that’s competitive with similar offerings from major manufacturers.

Velotric’s Summit 1 has a slightly different take on the equation. The company has made a few compromises that allowed it to bring the price down to just under $2,000, which is significantly lower than a lot of the competition. The result is something that’s a bit of a step down on some more challenging trails. But it still can do about 90 percent of what most alternatives offer, and it’s probably a better all-around bicycle for people who intend to also use it for commuting or errand-running.

Making the Summit

Velotric is another e-bike-only company, and we’ve generally been impressed by its products, which offer a fair bit of value for their price. The Summit 1 seems to be a reworking of its T-series of bikes (which also impressed us) into mountain bike form. You get a similar app experience and integration of the bike into Apple’s Find My system, though the company has ditched the thumbprint reader, which is supposed to function as a security measure. Velotric has also done some nice work adapting its packaging to smooth out the assembly process, placing different parts in labeled sub-boxes.

Velotric has made it easier to find what you need during assembly.


John Timmer

These didn’t help me avoid all glitches during assembly, though. I ended up having to take apart the front light assembly and remove the handlebar clamp to get the light attached to the bike—all contrary to the instructions. And connecting the color-coded electric cables was more difficult than necessary because two cables had the same color. But the bike only started up in one of the possible combinations, so it wasn’t difficult to sort out.

The Summit 1’s frame is remarkably similar to the Ramblas; if there wasn’t branding on it, you might need to resort to looking over the components to figure out which one you were looking at. Like the Ramblas, it has a removable battery with a cover that protects from splashes, but it probably won’t stay watertight through any significant fords. The bike also lacks an XL size option, and as usual, the Large was just a bit small for my legs.

The biggest visible difference is at the cranks, which is not where the motor resides on the Summit. Instead, you’ll find that on the rear hub, which typically means a slight step down in performance, though it is often considerably cheaper. For the Summit, the step down seemed very slight. I could definitely feel it in some contexts, but I’m pretty unusual in terms of the number of different hub and mid-motor configurations I’ve experienced (which is my way of saying that most people would never notice).

The Summit 1 has a hub motor on the rear wheel and a relatively compact set of gears.


John Timmer

There are a number of additional price/performance compromises to be found. The biggest is the drivetrain in the back, which has a relatively paltry eight gears and lacks the very large gear rings you’d typically find on mountain bikes without a front derailleur—meaning almost all of them these days. This isn’t as much of a problem as it might seem because the bike is built around a power assist that can easily handle the sort of hills those big gear rings were meant for. But it is an indication of the ways Velotric has kept its costs down. Those gears are paired with a Shimano Altus rear derailleur, which is controlled by a standard dual-trigger shifter and a plastic indicator to track which gear you’re in.

The bike also lacks a dropper seat that you can get out of your way during bouncy descents. Because the frame was small for me anyway, I didn’t really feel its absence. The Summit does have a dedicated mountain bike fork from a Chinese manufacturer called YDH that included an easy-to-access dial that lets you adjust the degree of cushioning you get on the fly. One nice touch is a setting that locks the forks if you’re going to be on smooth pavement for a while. I’m not sure who makes the rims, as I was unable to interpret the graphics on them. But the tires were well-labeled with Kenda, a brand that shows up on a number of other mountain bikes.

Overall, it wasn’t that hard to spot the places Velotric made compromises to bring the bike in at under $2,000. The striking thing was just how few of them there were. The obvious question is whether you’d notice them in practice. We’ll get back to that after we go over the bike’s electronics.


Secure Boot is completely broken on 200+ models from 5 big device makers


sasha85ru | Getty Images

In 2012, an industry-wide coalition of hardware and software makers adopted Secure Boot to protect against a long-looming security threat. The threat was the specter of malware that could infect the BIOS, the firmware that loaded the operating system each time a computer booted up. From there, it could remain immune to detection and removal and could load even before the OS and security apps did.

The threat of such BIOS-dwelling malware was largely theoretical and fueled in large part by the creation of ICLord Bioskit by a Chinese researcher in 2007. ICLord was a rootkit, a class of malware that gains and maintains stealthy root access by subverting key protections built into the operating system. The proof of concept demonstrated that such BIOS rootkits weren’t only feasible; they were also powerful. In 2011, the threat became a reality with the discovery of Mebromi, the first-known BIOS rootkit to be used in the wild.

Keenly aware of Mebromi and its potential for a devastating new class of attack, the Secure Boot architects hashed out a complex new way to shore up security in the pre-boot environment. Built into UEFI—the Unified Extensible Firmware Interface that would become the successor to BIOS—Secure Boot used public-key cryptography to block the loading of any code that wasn’t signed with a pre-approved digital signature. To this day, key players in security—among them Microsoft and the US National Security Agency—regard Secure Boot as an important, if not essential, foundation of trust in securing devices in some of the most critical environments, including in industrial control and enterprise networks.
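To make the mechanism concrete, here is a minimal sketch in Python (using the third-party cryptography package) of the kind of check Secure Boot performs before handing control to a boot binary. It is an illustration of the idea, not actual UEFI code; the function and variable names are invented for the example, and an RSA platform key is assumed.

```python
# Conceptual sketch of a Secure Boot-style decision: run a binary only if its
# signature verifies against a pre-approved (platform) public key.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding


def allowed_to_boot(trusted_public_key, binary_bytes: bytes, signature: bytes) -> bool:
    """Return True only if binary_bytes was signed by the trusted key."""
    try:
        trusted_public_key.verify(
            signature,
            binary_bytes,
            padding.PKCS1v15(),  # assumes an RSA key for this sketch
            hashes.SHA256(),
        )
        return True
    except InvalidSignature:
        return False
```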

An unlimited Secure Boot bypass

On Thursday, researchers from security firm Binarly revealed that Secure Boot is completely compromised on more than 200 device models sold by Acer, Dell, Gigabyte, Intel, and Supermicro. The cause: a cryptographic key underpinning Secure Boot on those models that was compromised in 2022. In a public GitHub repository committed in December of that year, someone working for multiple US-based device manufacturers published what’s known as a platform key, the cryptographic key that forms the root-of-trust anchor between the hardware device and the firmware that runs on it. The repository was located at https://github.com/raywu-aaeon/Ryzen2000_4000.git, and it’s not clear when it was taken down.

The repository included the private portion of the platform key in encrypted form. The encrypted file, however, was protected by a four-character password, a decision that made it trivial for Binarly, and anyone else with even a passing curiosity, to crack the passcode and retrieve the corresponding plain text. The disclosure of the key went largely unnoticed until January 2023, when Binarly researchers found it while investigating a supply-chain incident. Now that the leak has come to light, security experts say it effectively torpedoes the security assurances offered by Secure Boot.

“It’s a big problem,” said Martin Smolár, a malware analyst specializing in rootkits who reviewed the Binarly research and spoke to me about it. “It’s basically an unlimited Secure Boot bypass for these devices that use this platform key. So until device manufacturers or OEMs provide firmware updates, anyone can basically… execute any malware or untrusted code during system boot. Of course, privileged access is required, but that’s not a problem in many cases.”

Binarly researchers said their scans of firmware images uncovered 215 devices that use the compromised key, which can be identified by the certificate serial number 55:fb:ef:87:81:23:00:84:47:17:0b:b3:cd:87:3a:f4. A table appearing at the end of this article lists each one.
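For readers who want to check a certificate they have pulled out of a firmware image themselves, here is a minimal sketch, again assuming the third-party cryptography package; the file name is a placeholder, and extracting the certificate from the image in the first place is left out.

```python
# Check whether a DER-encoded certificate carries the serial number of the
# leaked platform key reported by Binarly. Illustrative only.
from cryptography import x509

LEAKED_SERIAL = 0x55FBEF87_81230084_47170BB3_CD873AF4


def uses_leaked_key(cert_der: bytes) -> bool:
    """True if the DER-encoded certificate matches the leaked serial."""
    cert = x509.load_der_x509_certificate(cert_der)
    return cert.serial_number == LEAKED_SERIAL


# Example usage with a certificate dumped from a firmware image:
# with open("platform_key_cert.der", "rb") as f:
#     print(uses_leaked_key(f.read()))
```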

The researchers soon discovered that the compromise of the key was just the beginning of a much bigger supply-chain breakdown that raises serious doubts about the integrity of Secure Boot on more than 300 additional device models from virtually all major device manufacturers. As is the case with the platform key compromised in the 2022 GitHub leak, an additional 21 platform keys contain the strings “DO NOT SHIP” or “DO NOT TRUST.”

Test certificate provided by AMI.


Binarly


SpaceX just stomped the competition for a new contract—that’s not great

A rocket sits on a launch pad during a purple- and gold-streaked dawn.

With Dragon and Falcon, SpaceX has become an essential contractor for NASA.

SpaceX

There is an emerging truth about NASA’s push toward commercial contracts that is increasingly difficult to escape: Companies not named SpaceX are struggling with NASA’s approach of awarding firm, fixed-price contracts for space services.

This belief is underscored by the recent award of an $843 million contract to SpaceX for a heavily modified Dragon spacecraft that will be used to deorbit the International Space Station by 2030.

The recently released source selection statement for the “US Deorbit Vehicle” contract, a process led by NASA head of space operations Ken Bowersox, reveals that the competition was a total stomp. SpaceX faced just a single serious competitor in this process, Northrop Grumman. And in all three categories—price, mission suitability, and past performance—SpaceX significantly outclassed Northrop.

Although it’s wonderful that NASA has an excellent contractor in SpaceX, it’s not healthy in the long term that there are so few credible competitors. Moreover, a careful reading of the source selection statement reveals that NASA had to really work to get a competition at all.

“I was really happy that we got proposals from the companies that we did,” Bowersox said during a media teleconference last week. “The companies that sent us proposals are both great companies, and it was awesome to see that interest. I would have expected a few more [proposals], honestly, but I was very happy to get the ones that we got.”

Commercial initiatives struggling

NASA’s push into “commercial” space began nearly two decades ago with a program to deliver cargo to the International Space Station. The space agency initially selected SpaceX and Rocketplane Kistler to develop rockets and spacecraft to accomplish this, but after Kistler missed milestones, the company was replaced by Orbital Sciences Corporation. The cargo delivery program was largely successful, resulting in the Cargo Dragon (SpaceX) and Cygnus (Orbital Sciences) spacecraft. It continues to this day.

A commercial approach generally means that NASA pays a “fixed” price for a service rather than paying a contractor’s costs plus a fee. It also means that NASA hopes to become one of many customers. The idea is that, as the first mover, NASA is helping to stimulate a market by which its fixed-priced contractors can also sell their services to other entities—both private companies and other space agencies.

NASA has since extended this commercial approach to crew, with SpaceX and Boeing winning large contracts in 2014. However, only SpaceX has flown operational astronaut missions, while Boeing remains in the development and test phase, with its ongoing Crew Flight Test. Whereas SpaceX has sold half a dozen private crewed missions on Dragon, Boeing has yet to announce any.

Such a commercial approach has also been tried with lunar cargo delivery through the “Commercial Lunar Payload Services” program, as well as larger lunar landers (Human Landing System), next-generation spacesuits, and commercial space stations. Each of these programs has a mixed record at best. For example, NASA’s inspector general was highly critical of the lunar cargo program in a recent report, and one of the two spacesuit contractors, Collins Aerospace, recently dropped out because it could not execute on its fixed-price contract.

Some of NASA’s most important traditional space contractors, including Lockheed Martin, Boeing, and Northrop Grumman, have all said they are reconsidering whether to participate in fixed-price contract competitions in the future. For example, Northrop CEO Kathy Warden said last August, “We are being even more disciplined moving forward in ensuring that we work with the government to have the appropriate use of fixed-price contracts.”

So the large traditional space contractors don’t like fixed-price contracts, and many new space companies are struggling to survive in this environment.


We’re building nuclear spaceships again—this time for real 

Artist concept of the Demonstration for Rocket to Agile Cislunar Operations (DRACO) spacecraft.


DARPA

Phoebus 2A, the most powerful space nuclear reactor ever made, was fired up at the Nevada Test Site on June 26, 1968. The test lasted 750 seconds and confirmed it could carry the first humans to Mars. But Phoebus 2A did not take anyone to Mars. It was too large, it cost too much, and it didn’t mesh with Nixon’s idea that we had no business going anywhere further than low-Earth orbit.

But it wasn’t NASA that first called for rockets with nuclear engines. It was the military that wanted to use them for intercontinental ballistic missiles. And now, the military wants them again.

Nuclear-powered ICBMs

The work on nuclear thermal rockets (NTRs) started with the Rover program initiated by the US Air Force in the mid-1950s. The concept was simple on paper. Take tanks of liquid hydrogen and use turbopumps to feed this hydrogen through a nuclear reactor core to heat it up to very high temperatures and expel it through the nozzle to generate thrust. Instead of causing the gas to heat and expand by burning it in a combustion chamber, the gas was heated by coming into contact with a nuclear reactor.

Tokino, vectorized by CommiM at en.wikipedia

The key advantage was fuel efficiency. “Specific impulse,” a measurement that’s something like the gas mileage of a rocket, is proportional to the square root of the ratio of the exhaust gas temperature to the propellant’s molecular weight. This meant the most efficient propellant for rockets was hydrogen because it had the lowest molecular weight.

In chemical rockets, hydrogen had to be mixed with an oxidizer, which increased the total molecular weight of the propellant but was necessary for combustion to happen. Nuclear rockets didn’t need combustion and could work with pure hydrogen, which made them at least twice as efficient. The Air Force wanted to efficiently deliver nuclear warheads to targets around the world.
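To put rough numbers on that factor of two (a back-of-the-envelope comparison, not one from the article): the relationship described above is Isp ∝ √(T / M). A chemical hydrogen/oxygen engine exhausts mostly water vapor, with a molecular weight around 18 g/mol, while a nuclear thermal rocket expels pure hydrogen at roughly 2 g/mol. At equal exhaust temperature, that alone would give the nuclear engine a √(18/2) = 3x advantage; in practice, chemical combustion chambers run hotter than a solid reactor core can tolerate, which is why the real-world gain works out closer to a factor of two.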

The problem was that running stationary reactors on Earth was one thing; making them fly was quite another.

Space reactor challenge

Fuel rods made with uranium 235 oxide distributed in a metal or ceramic matrix comprise the core of a standard fission reactor. Fission happens when a slow-moving neutron is absorbed by a uranium 235 nucleus and splits it into two lighter nuclei, releasing huge amounts of energy and excess, very fast neutrons. These excess neutrons normally don’t trigger further fissions, as they move too fast to get absorbed by other uranium nuclei.

Starting a chain reaction that keeps the reactor going depends on slowing them down with a moderator, like water, that “moderates” their speed. This reaction is kept at moderate levels using control rods made of neutron-absorbing materials, usually boron or cadmium, that limit the number of neutrons that can trigger fission. Reactors are dialed up or down by moving the control rods in and out of the core.

Translating any of this to a flying reactor is a challenge. The first problem is the fuel. The hotter you make the exhaust gas, the more you increase specific impulse, so NTRs needed the core to operate at temperatures reaching 3,000 K—nearly 1,800 K higher than ground-based reactors. Manufacturing fuel rods that could survive such temperatures proved extremely difficult.

Then there was the hydrogen itself, which is extremely corrosive at these temperatures, especially when interacting with those few materials that are stable at 3,000 K. Finally, standard control rods had to go, too, because on the ground, they were gravitationally dropped into the core, and that wouldn’t work in flight.

Los Alamos Scientific Laboratory proposed a few promising NTR designs that addressed all these issues in 1955 and 1956, but the program really picked up pace after it was transferred to NASA and the Atomic Energy Commission (AEC) in 1958. There, the idea was rebranded as NERVA, the Nuclear Engine for Rocket Vehicle Applications. NASA and the AEC, blessed with a nearly unlimited budget, got busy building space reactors—lots of them.


Gazelle Eclipse C380+ e-bike review: A smart, smooth ride at a halting price

Gazelle Eclipse C380+ HMB review —

It’s a powerful, comfortable, fun, and very smart ride. Is that enough?

Gazelle Eclipse C380+ in front of a railing, overlooking a river crosswalk in Navy Yard, Washington, D.C.

Kevin Purdy

Let me get three negative points about the Gazelle Eclipse out of the way first. First, it’s a 62-pound e-bike, so it’s tough to get moving without its battery. Second, its rack is a thick, non-standard size, so you might need new bags for it. Third—and this is the big one—with its $6,000 suggested retail price, it’s expensive, and you will probably feel nervous about locking it anywhere you don’t completely trust.

Apart from those issues, though, this e-bike is great fun. When I rode the Eclipse (the C380+ HMB version of it), I felt like Batman on a day off, or maybe Bruce Wayne doing reconnaissance as a bike enthusiast. The matte gray color, the black hardware, and the understated but impressively advanced tech certainly helped. But I felt prepared to handle anything that was thrown at me without having to think about it much. Brutally steep hills, poorly maintained gravel paths, curbs, stop lights, or friends trying to outrun me on their light road bikes—the Eclipse was ready.

It assists up to 28 miles per hour (i.e., Class 3) and provides up to 85 Nm of torque, and the front suspension absorbs shocks without shaking your grip confidence. It has integrated lights, the display can show you navigation while your phone is tucked away, and the automatic assist changing option balances your mechanical and battery levels, leaving you to just pedal and look.

  • The little shifter guy, who will take a few rides to get used to, is either really clever or overthinking it.

    Kevin Purdy

  • The Bosch Kiox 300 is the only screen I’ve had on an e-bike that I ever put time into customizing and optimizing.

    Kevin Purdy

  • The drivetrain on the C380+ is a remarkable thing, and it’s well-hidden inside matte aluminum.

    Kevin Purdy

  • The shocks on the Eclipse are well-tuned for rough roads, if not actual mountains. (The author is aware the headlamp was at an angle in this shot).

    Kevin Purdy

  • The electric assist changer on the left handlebar, and the little built-in bell that you always end up replacing on new e-bikes for something much louder.

    Kevin Purdy

What kind of bike is this? A fun one.

The Eclipse comes in two main variants, the 11-speed, chain-and-derailleur model T11+ HMB and the stepless Enviolo hub and Gates Carbon belt-based C380+ HMB. Both come in three sizes (45, 50, and 55 cm), in one of two colors (Anthracite Grey or Thyme Green for the T11+, Anthracite Grey or Metallic Orange for the C380+), and with either a low-step or high-step version, the latter with a sloping top bar. Most e-bikes come in two sizes if you’re lucky, typically “Medium” and “Large,” and their suggested height spans are far too generous. The T11+ starts at $5,500 and the C380+ starts at $6,000.

The Eclipse’s posture is an “active” one, seemingly halfway between the upright Dutch style and a traditional road or flat-bar bike. It’s perfect for this kind of ride. The front shocks have a maximum of 75 mm of travel, which won’t impress your buddies riding real trails but will make gravel, dirt, wooden bridges, and woodland trails a real possibility. Everything about the Eclipse tells you to stop worrying about whether you have the right kind of bike for a ride and just start pedaling.

“But I’m really into exercise riding, and I need lots of metrics and data, during and after the ride,” I hear some of you straw people saying. That’s why the Eclipse has the Bosch Kiox 300, a center display that is, for an e-bike, remarkably readable, navigable, and informative. You can see your max and average speed, distance, which assist levels you spent time in, power output, cadence, and more. You can push navigation directions from Komoot or standard maps apps from your phone to the display, using Bosch’s Flow app. And, of course, you can connect to Strava.

Halfway between maximum efficiency and careless joyriding, the Eclipse offers a feature that I can only hope makes it down to cheaper e-bikes over time: automatic assist changing. Bikes that have both gears and motor assist levels can sometimes leave you guessing as to which one you should change when approaching a hill or starting from a dead stop. Set the Eclipse to automatic assist and you only have to worry about the right-hand grip shifter. There are no gear numbers; there is a little guy on a bike, and as you raise or lower the gearing, the road he’s approaching gets steeper or flatter.


Peer review is essential for science. Unfortunately, it’s broken.


Aurich Lawson | Getty Images

Rescuing Science: Restoring Trust in an Age of Doubt was the most difficult book I’ve ever written. I’m a cosmologist—I study the origins, structure, and evolution of the Universe. I love science. I live and breathe science. If science were a breakfast cereal, I’d eat it every morning. And at the height of the COVID-19 pandemic, I watched in alarm as public trust in science disintegrated.

But I don’t know how to change people’s minds. I don’t know how to convince someone to trust science again. So as I started writing my book, I flipped the question around: is there anything we can do to make the institution of science more worthy of trust?

The short answer is yes. The long answer takes an entire book. In the book, I explore several different sources of mistrust—the disincentives scientists face when they try to communicate with the public, the lack of long-term careers, the complicity of scientists when their work is politicized, and much more—and offer proactive steps we can take to address these issues to rebuild trust.

The section below is taken from a chapter discussing the relentless pressure to publish that scientists face, and the corresponding explosion in fraud that this pressure creates. Fraud can take many forms, from the “hard fraud” of outright fabrication of data, to many kinds of “soft fraud” that include plagiarism, manipulation of data, and careful selection of methods to achieve a desired result. The more that fraud thrives, the more that the public loses trust in science. Addressing this requires a fundamental shift in the incentive and reward structures that scientists work in. A difficult task to be sure, but not an impossible one—and one that I firmly believe will be worth the effort.

Modern science is hard, complex, and built from many layers and many years of hard work. And modern science, almost everywhere, is based on computation. Save for a few (and I mean very few) die-hard theorists who insist on writing things down with pen and paper, there is almost an absolute guarantee that with any paper in any field of science that you could possibly read, a computer was involved in some step of the process.

Whether it’s studying bird droppings or the collisions of galaxies, modern-day science owes its very existence—and continued persistence—to the computer. From the laptop sitting on an unkempt desk to a giant machine that fills up a room, “S. Transistor” should be the coauthor on basically all three million journal articles published every year.

The sheer complexity of modern science, and its reliance on customized software, renders one of the frontline defenses against soft and hard fraud useless. That defense is peer review.

The practice of peer review was developed in a different era, when the arguments and analysis that led to a paper’s conclusion could be succinctly summarized within the paper itself. Want to know how the author arrived at that conclusion? The derivation would be right there. It was relatively easy to judge the “wrongness” of an article because you could follow the document from beginning to end and have all the information you needed to evaluate it right there at your fingertips.

That’s now largely impossible with the modern scientific enterprise so reliant on computers.

To make matters worse, many of the software codes used in science are not publicly available. I’ll say this again because it’s kind of wild to even contemplate: there are millions of papers published every year that rely on computer software to make the results happen, and that software is not available for other scientists to scrutinize to see if it’s legit or not. We simply have to trust it, but the word “trust” is very near the bottom of the scientist’s priority list.

Why don’t scientists make their code available? It boils down to the same reason that scientists don’t do many things that would improve the process of science: there’s no incentive. In this case, you don’t get any h-index points for releasing your code on a website. You only get them for publishing papers.

This infinitely agitates me when I peer-review papers. How am I supposed to judge the correctness of an article if I can’t see the entire process? What’s the point of searching for fraud when the computer code that’s sitting behind the published result can be shaped and molded to give any result you want, and nobody will be the wiser?

I’m not even talking about intentional computer-based fraud here; this is even a problem for detecting basic mistakes. If you make a mistake in a paper, a referee or an editor can spot it. And science is better off for it. If you make a mistake in your code… who checks it? As long as the results look correct, you’ll go ahead and publish it and the peer reviewer will go ahead and accept it. And science is worse off for it.

Science is getting more complex over time and is becoming increasingly reliant on software code to keep the engine going. This makes fraud of both the hard and soft varieties easier to accomplish. From mistakes that you pass over because you’re going too fast, to using sophisticated tools that you barely understand but use to get the result that you wanted, to just totally faking it, science is becoming increasingly wrong.


The Yellowstone supervolcano destroyed an ecosystem but saved it for us

Set in stone —

50 years of excavation unveiled the story of a catastrophic event and its aftermath.

Interior view of the Rhino Barn. Exposed fossil skeletons left in-situ for research and public viewing.


Rick E. Otto, University of Nebraska State Museum

Death was everywhere. Animal corpses littered the landscape and were mired in the local waterhole as ash swept around everything in its path. For some, death happened quickly; for others, it was slow and painful.

This was the scene in the aftermath of a supervolcanic eruption in Idaho, approximately 1,600 kilometers (1,000 miles) away. It was an eruption so powerful that it obliterated the volcano itself, leaving a crater 80 kilometers (50 miles) wide and spewing clouds of ash that the wind carried over long distances, killing almost everything that inhaled it. This was particularly true here, in this location in Nebraska, where animals large and small succumbed to the eruption’s deadly emissions.

Eventually, all traces of this horrific event were buried; life continued, evolved, and changed. That’s why, millions of years later in the summer of 1971, Michael Voorhies was able to enjoy another delightful day of exploring.

Finding rhinos

He was, as he had been each summer between academic years, creating a geologic map of his hometown in Nebraska. This meant going from farm to farm and asking if he could walk through the property to survey the rocks and look for fossils. “I’m basically just a kid at heart, and being a paleontologist in the summer was my idea of heaven,” Voorhies, now retired from the University of Georgia, told Ars.

What caught his eye on one particular farm was a layer of volcanic ash—something treasured by geologists and paleontologists, who use it to get the age of deposits. But as he got closer, he also noticed exposed bone. “Finding what was obviously a lower jaw which was still attached to the skull, now that was really quite interesting!” he said. “Mostly what you find are isolated bones and teeth.”

That skull belonged to a juvenile rhino. Voorhies and some of his students returned to the site to dig further, uncovering the rest of the rhino’s completely articulated remains (meaning the bones of its skeleton were connected as they would be in life). More digging produced the intact skeletons of another five or six rhinos. That was enough to get National Geographic funding for a massive excavation that took place between 1978 and 1979. Crews amassed, among numerous other animals, the remarkable total of 70 complete rhino skeletons.

To put this into perspective, most fossil sites—even spectacular locations preserving multiple animals—are composed primarily of disarticulated skeletons, puzzle pieces that paleontologists painstakingly put back together. Here, however, was something no other site had ever before produced: vast numbers of complete skeletons preserved where they died.

Realizing there was still more yet to uncover, Voorhies and others appealed to the larger Nebraska community to help preserve the area. Thanks to hard work and substantial local donations, the Ashfall Fossil Beds park opened to the public in 1991, staffed by two full-time employees.

Fossils discovered are now left in situ, meaning they remain exposed exactly where they are found, protected by a massive structure called the Hubbard Rhino Barn. Excavations are conducted within the barn at a much slower and steadier pace than those in the ’70s due in large part to the small, rotating number of seasonal employees—mostly college students—who excavate further each summer.

The Rhino Barn protects the fossil bed from the elements.


Photos by Rick E. Otto, University of Nebraska State Museum

A full ecosystem

Almost 50 years of excavation and research have unveiled the story of a catastrophic event and its aftermath, which took place in a Nebraska that nobody would recognize—one where species like rhinoceros, camels, and saber-toothed deer were a common sight.

But to understand that story, we have to set the stage. The area we know today as Ashfall Fossil Beds was actually a waterhole during the Miocene, one frequented by a diversity of animals. We know this because there are fossils of those animals in a layer of sand at the very bottom of the waterhole, a layer that was not impacted by the supervolcanic eruption.

Rick Otto was one of the students who excavated fossils in 1978. He became Ashfall’s superintendent in 1991 and retired in late 2023. “There were animals dying a natural death around the Ashfall waterhole before the volcanic ash storm took place,” Otto told Ars, which explains the fossils found in that sand. After being scavenged, their bodies may have been trampled by some of the megafauna visiting the waterhole, which would have “worked those bones into the sand.”


Tool preventing AI mimicry cracked; artists wonder what’s next


Aurich Lawson | Getty Images

For many artists, it’s a precarious time to post art online. AI image generators keep getting better at cheaply replicating a wider range of unique styles, and basically every popular platform is rushing to update user terms to seize permissions to scrape as much data as possible for AI training.

Defenses against AI training exist—like Glaze, a tool that adds a small amount of imperceptible-to-humans noise to images to stop image generators from copying artists’ styles. But they don’t provide a permanent solution at a time when tech companies appear determined to chase profits by building ever-more-sophisticated AI models that increasingly threaten to dilute artists’ brands and replace them in the market.
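As a very loose illustration of that “small, imperceptible change” idea—and only the idea; Glaze’s actual protection is a carefully optimized, style-targeted perturbation, not random noise—here is a toy sketch in Python with NumPy:

```python
# Toy illustration of an imperceptibly small image perturbation.
# This is NOT Glaze's algorithm; Glaze computes a targeted perturbation
# optimized to mislead style-mimicry models, which random noise does not do.
import numpy as np


def add_small_perturbation(image: np.ndarray, budget: float = 2.0) -> np.ndarray:
    """Shift each 8-bit pixel value by at most `budget` levels (out of 255)."""
    noise = np.random.uniform(-budget, budget, size=image.shape)
    return np.clip(image.astype(np.float64) + noise, 0, 255).astype(np.uint8)
```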

In one high-profile example just last month, the estate of Ansel Adams condemned Adobe for selling AI images stealing the famous photographer’s style, Smithsonian reported. Adobe quickly responded and removed the AI copycats. But it’s not just famous artists who risk being ripped off, and lesser-known artists may struggle to prove AI models are referencing their works. In this largely lawless world, every image uploaded risks contributing to an artist’s downfall, potentially watering down demand for their own work each time they promote new pieces online.

Unsurprisingly, artists have increasingly sought protections to diminish or dodge these AI risks. As tech companies update their products’ terms—like when Meta suddenly announced that it was training AI on a billion Facebook and Instagram user photos last December—artists frantically survey the landscape for new defenses. That’s why The Glaze Project, one of the few groups offering AI protections today, recently reported a dramatic surge in requests for its free tools.

Designed to help prevent style mimicry and even poison AI models to discourage data scraping without an artist’s consent or compensation, The Glaze Project’s tools are now in higher demand than ever. University of Chicago professor Ben Zhao, who created the tools, told Ars that the backlog for approving a “skyrocketing” number of requests for access is “bad.” And as he recently posted on X (formerly Twitter), an “explosion in demand” in June is only likely to be sustained as AI threats continue to evolve. For the foreseeable future, that means artists searching for protections against AI will have to wait.

Even if Zhao’s team did nothing but approve requests for WebGlaze, its invite-only web-based version of Glaze, “we probably still won’t keep up,” Zhao said. He’s warned artists on X to expect delays.

Compounding artists’ struggles, at the same time as demand for Glaze is spiking, the tool has come under attack by security researchers who claimed it was not only possible but easy to bypass Glaze’s protections. For security researchers and some artists, this attack calls into question whether Glaze can truly protect artists in these embattled times. But for thousands of artists joining the Glaze queue, the long-term future looks so bleak that any promise of protections against mimicry seems worth the wait.

Attack cracking Glaze sparks debate

Millions have downloaded Glaze already, and many artists are waiting weeks or even months for access to WebGlaze, mostly submitting requests for invites on social media. The Glaze Project vets every request to verify that each user is human and ensure bad actors don’t abuse the tools, so the process can take a while.

The team is currently struggling to approve hundreds of requests submitted daily through direct messages on Instagram and Twitter in the order they are received, and artists requesting access must be patient through prolonged delays. Because these platforms’ inboxes aren’t designed to sort messages easily, any artist who follows up on a request gets bumped to the back of the line—as their message bounces to the top of the inbox and Zhao’s team, largely volunteers, continues approving requests from the bottom up.

“This is obviously a problem,” Zhao wrote on X while discouraging artists from sending any follow-ups unless they’ve already gotten an invite. “We might have to change the way we do invites and rethink the future of WebGlaze to keep it sustainable enough to support a large and growing user base.”

Glaze interest is likely also spiking due to word of mouth. Reid Southen, a freelance concept artist for major movies, is advocating for all artists to use Glaze. Reid told Ars that WebGlaze is especially “nice” because it’s “available for free for people who don’t have the GPU power to run the program on their home machine.”


Surface Pro 11 and Laptop 7 review: An Apple Silicon moment for Windows

Microsoft's Surface Pro 11, the first flagship Surface to ship exclusively using Arm processors.


Andrew Cunningham

Microsoft has been trying to make Windows-on-Arm-processors a thing for so long that, at some point, I think I just started assuming it was never actually going to happen.

The first effort was Windows RT, which managed to run well enough on the piddly Arm hardware available at the time but came with a perplexing new interface and couldn’t run any apps designed for regular Intel- and AMD-based Windows PCs. Windows RT failed, partly because a version of Windows that couldn’t run Windows apps and didn’t use a familiar Windows interface was ignoring two big reasons why people keep using Windows.

Windows-on-Arm came back in the late 2010s, with better performance and a translation layer for 32-bit Intel apps in tow. This version of Windows, confined mostly to oddball Surface hardware and a handful of barely promoted models from the big PC OEMs, has quietly percolated for years. It has improved slowly and gradually, as have the Qualcomm processors that have powered these devices.

That brings us to this year’s flagship Microsoft Surface hardware: the 7th-edition Surface Laptop and the 11th-edition Surface Pro.

These devices are Microsoft’s first mainstream, flagship Surface devices to use Arm chips, whereas previous efforts have been side projects or non-default variants. Both hardware and software have improved enough that I finally feel I could recommend a Windows-on-Arm device to a lot of people without having to preface it with a bunch of exceptions.

Unfortunately, Microsoft has chosen to launch this impressive and capable Arm hardware and improved software alongside a bunch of generative AI features, including the Recall screen recorder, a feature that became so radioactively unpopular so quickly that Microsoft was forced to delay it to address major security problems (and perception problems stemming from the security problems).

The remaining AI features are so superfluous that I’ll ignore them in this review and cover them later on when we look closer at Windows 11’s 24H2 update. This is hardware that is good enough that it doesn’t need buzzy AI features to sell it. Windows on Arm continues to present difficulties, but the new Surface Pro and Surface Laptop—and many of the other Arm-based Copilot+ PCs that have launched in the last couple of weeks—are a whole lot better than Arm PCs were even a year or two ago.

Familiar on the outside

The Surface Laptop 7 (left) and Surface Pro 11 (right) are either similar or identical to their Intel-powered predecessors on the outside.


Andrew Cunningham

When Apple released the first couple of Apple Silicon Macs back in late 2020, the one thing the company pointedly did not change was the exterior design. Apple didn’t comment much on it at the time, but the subliminal message was that these were just Macs, they looked the same as other Macs, and there was nothing to worry about.

Microsoft’s new flagship Surface hardware, powered exclusively by Arm-based chips for the first time rather than a mix of Arm and Intel/AMD, takes a similar approach: inwardly overhauled, externally unremarkable. These are very similar to the last (and the current) Intel-powered Surface Pro and Surface Laptop designs, and in the case of the Surface Pro, they actually look identical.

Both PCs still include some of the defining elements of Surface hardware designs. Both have screens with 3:2 aspect ratios that make them taller than most typical laptop displays, which still use 16:10 or 16:9 aspect ratios. Those screens also support touch input via fingers or the Surface Pen, and they still use gently rounded corners (which Windows doesn’t formally recognize in-software, so the corners of your windows will get cut off, not that it has ever been a problem for me).


30 years later, FreeDOS is still keeping the dream of the command prompt alive

Preparing to install the floppy disk edition of FreeDOS 1.3 in a virtual machine.


Andrew Cunningham

Two big things happened in the world of text-based disk operating systems in June 1994.

The first is that Microsoft released MS-DOS version 6.22, the last version of its long-running operating system that would be sold to consumers as a standalone product. MS-DOS would continue to evolve for a few years after this, but only as an increasingly invisible loading mechanism for Windows.

The second was that a developer named Jim Hall wrote a post announcing something called “PD-DOS.” Unhappy with Windows 3.x and unexcited by the project we would come to know as Windows 95, Hall wanted to break ground on a new “public domain” version of DOS that could keep the traditional command-line interface alive as most of the world left it behind for more user-friendly but resource-intensive graphical user interfaces.

PD-DOS would soon be renamed FreeDOS, and 30 years and many contributions later, it stands as the last MS-DOS-compatible operating system still under active development.

While it’s not really usable as a standalone modern operating system in the Internet age—among other things, DOS is not really innately aware of “the Internet” as a concept—FreeDOS still has an important place in today’s computing firmament. It’s there for people who need to run legacy applications on modern systems, whether it’s running inside of a virtual machine or directly on the hardware; it’s also the best way to get an actively maintained DOS offshoot running on legacy hardware going as far back as the original IBM PC and its Intel 8088 CPU.

To mark FreeDOS’ 20th anniversary in 2014, we talked with Hall and other FreeDOS maintainers about its continued relevance, the legacy of DOS, and the developers’ since-abandoned plans to add ambitious modern features like multitasking and built-in networking support (we also tried, earnestly but with mixed success, to do a modern day’s work using only FreeDOS). The world of MS-DOS-compatible operating systems moves slowly enough that most of this information is still relevant; FreeDOS was at version 1.1 back in 2014, and it’s on version 1.3 now.

For the 30th anniversary, we’ve checked in with Hall again about how the last decade or so has treated the FreeDOS project, why it’s still important, and how it continues to draw new users into the fold. We also talked, strange as it might seem, about what the future might hold for this inherently backward-looking operating system.

FreeDOS is still kicking, even as hardware evolves beyond it

Running AsEasyAs, a Lotus 1-2-3-compatible spreadsheet program, in FreeDOS.


Jim Hall

If the last decade hasn’t ushered in The Year of FreeDOS On The Desktop, Hall says that interest in and usage of the operating system has stayed fairly level since 2014. The difference is that, as time has gone on, more users are encountering FreeDOS as their first DOS-compatible operating system, not as an updated take on Microsoft and IBM’s dusty old ’80s- and ’90s-era software.

“Compared to about 10 years ago, I’d say the interest level in FreeDOS is about the same,” Hall told Ars in an email interview. “Our developer community has remained about the same over that time, I think. And judging by the emails that people send me to ask questions, or the new folks I see asking questions on our freedos-user or freedos-devel email lists, or the people talking about FreeDOS on the Facebook group and other forums, I’d say there are still about the same number of people who are participating in the FreeDOS community in some way.”

“I get a lot of questions around September and October from people who ask, basically, ‘I installed FreeDOS, but I don’t know how to use it. What do I do?’ And I think these people learned about FreeDOS in a university computer science course and wanted to learn more about it—or maybe they are already working somewhere and they read an article about it, never heard of this “DOS” thing before, and wanted to try it out. Either way, I think more folks in the user community are learning about “DOS” at the same time they are learning about FreeDOS.”


The world’s toughest race starts Saturday, and it’s delightfully hard to call this year

Is it Saturday yet? —

Setting the stage for what could be a wild ride across France.

The peloton passing through a sunflower field during stage eight of the 110th Tour de France in 2023.


David Ramos/Getty Images

Most readers probably did not anticipate seeing a Tour de France preview on Ars Technica, but here we are. Cycling is a huge passion of mine and several other staffers, and this year, a ton of intrigue surrounds the race, which has a fantastic route. So we’re here to spread Tour fever.

The three-week race starts Saturday, paradoxically in the Italian region of Tuscany. Usually, there is a dominant rider, or at most two, and a clear sense of who is likely to win the demanding race. But this year, due to rider schedules, a terrible crash in early April, and new contenders, there is more uncertainty than usual. A solid case could be made for at least four riders to win this year’s Tour de France.

For people who aren’t fans of pro road cycling—which has to be at least 99 percent of the United States—there’s a great series on Netflix called Unchained to help get you up to speed. The second season, just released, covers last year’s Tour de France and introduces you to most of the protagonists in the forthcoming edition. If this article sparks your interest, I recommend checking it out.

Anyway, for those who are cycling curious, I want to set the stage for this year’s race by saying a little bit about the four main contenders, from most likely to least likely to win, and provide some of the backstory to what could very well be a dramatic race this year.

Tadej Pogačar

Tadej Pogacar of Slovenia and UAE Team Emirates won the Giro d'Italia in May.


Tim de Waele/Getty Images

  • Slovenia
  • 25 years old
  • UAE Team Emirates
  • Odds: -190

Pogačar burst onto the scene in 2019 at the very young age of 20 by finishing third in the Vuelta a España, one of the three grand tours of cycling. He then went on to win the 2020 and 2021 Tours de France, first by surprising fellow countryman Primož Roglič (more on him below) in 2020 and then utterly dominating in 2021. Given his youth, it seemed he would be the premier grand tour competitor for the next decade.

But then another slightly older rider, a teammate of Roglič’s named Jonas Vingegaard, emerged in 2022 and won the next two races. Last year, in fact, Vingegaard cracked Pogačar by 7 minutes and 29 seconds in the Tour, a huge winning margin, especially for two riders of relatively close talent. This established Vingegaard as the alpha male of grand tour cyclists, having proven himself a better climber and time trialist than Pogačar, especially in the highest and hardest stages.

So this year, Pogačar decided to change up his strategy. Instead of focusing on the Tour de France, Pogačar participated in the first grand tour of the season, the Giro d’Italia, which occurred in May. He likely did so for a couple of reasons. First of all, he almost certainly received a generous appearance fee from the Italian organizers. And secondly, riding the Giro would give him a ready excuse for not beating Vingegaard in France.

Why is this? Because there are just five weeks between the end of the Giro and the start of the Tour. So if a rider peaks for the Giro and exerts himself in winning the race, it is generally thought that he can’t arrive at the Tour in winning form. He will be a few percent off, not having ideal preparation.

Predictably, Pogačar smashed the lesser competition at the Giro and won the race by 9 minutes and 56 seconds. Because he was so far ahead, he was able to take the final week of the race a bit easier. The general thinking in the cycling community is that Pogačar is arriving at the Tour in excellent but not peak form. But given everything else that has happened so far this season, the bettors believe that will be enough for him to win. Maybe.
