Features

NASA is about to make its most important safety decision in nearly a generation

Boeing’s Starliner spacecraft, seen docked at the International Space Station through the window of a SpaceX Dragon spacecraft.

As soon as this week, NASA officials will make perhaps the agency’s most consequential safety decision in human spaceflight in 21 years.

NASA astronauts Butch Wilmore and Suni Williams are nearly 10 weeks into a test flight that was originally set to last a little more than one week. The two retired US Navy test pilots were the first people to fly into orbit on Boeing’s Starliner spacecraft when it launched on June 5. Now, NASA officials aren’t sure Starliner is safe enough to bring the astronauts home.

Three of the managers at the center of the pending decision, Ken Bowersox and Steve Stich from NASA and Boeing’s LeRoy Cain, either had key roles in the ill-fated final flight of Space Shuttle Columbia in 2003 or felt the consequences of the accident.

At that time, officials misjudged the risk. Seven astronauts died, and the Space Shuttle Columbia was destroyed as it reentered the atmosphere over Texas. Bowersox, Stich, and Cain weren’t the people making the call on the health of Columbia’s heat shield in 2003, but they had front-row seats to the consequences.

Bowersox was an astronaut on the International Space Station when NASA lost Columbia. He and his crewmates were waiting to hitch a ride home on the next Space Shuttle mission, which was delayed two-and-a-half years in the wake of the Columbia accident. Instead, Bowersox’s crew came back to Earth later that year on a Russian Soyuz capsule. After retiring from the astronaut corps, Bowersox worked at SpaceX and is now the head of NASA’s spaceflight operations directorate.

Stich and Cain were NASA flight directors in 2003, and they remain well-respected in human spaceflight circles. Stich is now the manager of NASA’s commercial crew program, and Cain is now a Boeing employee and chair of the company’s Starliner mission director team. For the ongoing Starliner mission, Bowersox, Stich, and Cain are in the decision-making chain.

All three joined NASA in the late 1980s, soon after the Challenger accident. They have seen NASA attempt to reshape its safety culture after both of NASA’s fatal Space Shuttle tragedies. After Challenger, NASA’s astronaut office had a more central role in safety decisions, and the agency made efforts to listen to dissent from engineers. Still, human flaws are inescapable, and NASA’s culture was unable to alleviate them during Columbia’s last flight in 2003.

NASA knew launching a Space Shuttle in cold weather reduced the safety margin on its solid rocket boosters, which led to the Challenger accident. And shuttle managers knew foam routinely fell off the external fuel tank. Just two flights before Columbia’s STS-107 mission, one of these foam fragments struck a shuttle booster in a near-miss but didn’t damage it.

“I have wondered if some in management roles today that were here when we lost Challenger and Columbia remember that in both of those tragedies, there were those that were not comfortable proceeding,” Milt Heflin, a retired NASA flight director who spent 47 years at the agency, wrote in an email to Ars. “Today, those memories are still around.”

“I suspect Stich and Cain are paying attention to the right stuff,” Heflin wrote.

The question facing NASA’s leadership today? Should the two astronauts return to Earth from the International Space Station in Boeing’s Starliner spacecraft, with its history of thruster failures and helium leaks, or should they come home on a SpaceX Dragon capsule?

Under normal conditions, the first option is the choice everyone at NASA would like to make. It would be least disruptive to operations at the space station and would potentially maintain a clearer future for Boeing’s Starliner program, which NASA would like to become operational for regular crew rotation flights to the station.

But some people at NASA aren’t convinced this is the right call. Engineers still don’t fully understand why five of the Starliner spacecraft’s thrusters overheated and lost power as the capsule approached the space station for docking in June. Four of these five control jets are now back in action with near-normal performance, but managers would like to be sure the same thrusters—and maybe more—won’t fail again as Starliner departs the station and heads for reentry.

Elon Musk’s lawsuit over alleged X ad boycott “a very weak case,” professor says

Illustration with three pictures of Elon Musk. In two of the photos there are dollar signs over Musk's eyes, in the other photo there are X logos instead.

Aurich Lawson | Getty Images

Antitrust law professors aren’t impressed by Elon Musk’s lawsuit alleging a supposed X advertising boycott amounts to an antitrust violation. Based on the initial complaint filed by Musk’s X Corp., it looks like “a very weak case,” Vanderbilt Law School Associate Dean for Research Rebecca Haw Allensworth told Ars.

“Given how difficult this will be to win, I would call it an unusual strategy,” she said.

The lawsuit against the World Federation of Advertisers (WFA) and several large corporations says that the alleged boycott is “a naked restraint of trade without countervailing benefits to competition or consumers.” The “collective action among competing advertisers to dictate brand safety standards to be applied by social media platforms shortcuts the competitive process and allows the collective views of a group of advertisers with market power to override the interests of consumers,” X claims.

Musk already won a victory of sorts as the WFA yesterday shut down the Global Alliance for Responsible Media (GARM) initiative that is the main subject of X’s allegations. “GARM is a small, not-for-profit initiative, and recent allegations that unfortunately misconstrue its purpose and activities have caused a distraction and significantly drained its resources and finances. GARM therefore is making the difficult decision to discontinue its activities,” the WFA said.

But the GARM shutdown won’t result in Musk’s company obtaining any financial damages unless X also wins in court. The company formerly named Twitter sued in a federal court in Texas, part of the conservative 5th Circuit, a venue that Musk likely believes will be more favorable to him than a court in another state. The District Court judge overseeing the lawsuit is also handling Musk’s case against Media Matters for America, a nonprofit that conducted research on ads being placed next to pro-Nazi content on X.

Texas is one of three states, along with Louisiana and Mississippi, where appeals go to the US Court of Appeals for the 5th Circuit. “The 5th Circuit is well known as the most conservative circuit in the country,” Professor Stephen Calkins of Wayne State University Law School told Ars.

“The law here is very unfavorable to X”

Despite the potentially friendly Texas court venue, Musk’s X faces a high legal bar in proving that it was the victim of an illegal boycott.

Allensworth said X must show “that the defendants did actually enter into an agreement—that they had a deal with each other to pull advertising spend from X as a group, not that each brand did it individually to protect their own brand status or make their own statement about Elon Musk. The law here is very unfavorable to X, but the complaint describes a lot of conduct that could support a jury or judge finding an agreement. But it’s a fact question, and we only have half the story.”

A bigger problem for Musk “is that X must show that the boycott harmed competition, not just that it harmed X,” Allensworth said. “The complaint is far from clear on what competition was harmed. A typical boycott will harm competition among the boycotters, but that doesn’t seem to be what the complaint is about. The complaint says the competition that was harmed was between platforms (like X/Twitter and Facebook, for example) but that’s a bit garbled. Again, we may know more as the suit develops.”

There’s one more problem that may be even bigger than the first two, according to Allensworth. Even if X proves there was an explicit agreement to pull advertising and that a boycott harmed competition, the advertisers would have a strong defense under the First Amendment’s right to free speech.

“Concerted refusals to deal (boycotts) are not vulnerable to antitrust suit if they are undertaken to make a statement—essentially to engage in speech,” Allensworth explained. “It would seem here like that was the purpose of this boycott (akin to lunch counter boycotts in the ’60s, which were beyond the reach of the antitrust laws). Given that the Supreme Court has only increased First Amendment rights for corporations recently, I think this defense is very strong.”

All of those factors “add up, to me, to a very weak case,” Allensworth told Ars. But she cautions that at this early stage of litigation, “there’s a lot we don’t know; no one can judge a case based on the complaint alone—that’s the point of the adversarial system.”

An X court win wouldn’t force companies to advertise on the platform. But “if somehow they prevail, X could ask for treble damages—three times the revenue they lost because of the boycott,” Allensworth said.

All the possible ways to destroy Google’s monopoly in search

Aurich Lawson

After US District Judge Amit Mehta ruled that Google has a monopoly in two markets—general search services and general text advertising—everybody is wondering how Google might be forced to change its search business.

Specifically, the judge ruled that Google’s exclusive deals with browser and device developers secured Google’s monopoly. These so-called default agreements funneled the majority of online searches to Google search engine result pages (SERPs), where results could be found among text ads that have long generated the bulk of Google’s revenue.

At trial, Mehta’s ruling noted, it was estimated that if Google lost its most important default deal with Apple, Google “would lose around 65 percent of its revenue, even assuming that it could retain some users without the Safari default.”

Experts told Ars that disrupting these default deals is the most obvious remedy that the US Department of Justice will seek to restore competition in online search. Other remedies that may be sought range from least painful for Google (mandating choice screens in browsers and devices) to most painful (requiring Google to divest from either Chrome or Android, where it was found to be self-preferencing).

But the remedies phase of litigation may have to wait until after Google’s appeal, which experts said could take years to litigate before any remedies are ever proposed in court. Whether Google could be successful in appealing the ruling is currently being debated, with anti-monopoly advocates backing Mehta’s ruling as “rock solid” and critics suggesting that the ruling’s fresh takes on antitrust law are open to attack.

Google declined Ars’ request to comment on appropriate remedies or its plan to appeal.

Previously, Google’s president of global affairs, Kent Walker, confirmed in a statement that the tech giant would be appealing the ruling because the court found that “Google is ‘the industry’s highest quality search engine, which has earned Google the trust of hundreds of millions of daily users,’ that Google ‘has long been the best search engine, particularly on mobile devices,’ ‘has continued to innovate in search,’ and that ‘Apple and Mozilla occasionally assess Google’s search quality relative to its rivals and find Google’s to be superior.'”

“Given this, and that people are increasingly looking for information in more and more ways, we plan to appeal,” Walker said. “As this process continues, we will remain focused on making products that people find helpful and easy to use.”

But Mehta found that Google was wielding its outsize influence in the search industry to block rivals from competing by locking browsers and devices into agreements ensuring that all searches went to Google SERPs. None of the pro-competitive benefits that Google claimed justified the exclusive deals persuaded Mehta, who ruled that “importantly,” Google “exercised its monopoly power by charging supra-competitive prices for general search text ads”—and thus earned “monopoly profits.”

While experts think the appeal process will delay litigation on remedies, Google seems to think that Mehta may rule on potential remedies before Google can proceed with its appeal. Walker told Google employees that a ruling on remedies may arrive in the next few months, The Wall Street Journal reported. Ars will continue monitoring for updates on this timeline.

As the DOJ’s case against Google’s search business has dragged on, reports have long suggested that a loss for Google could change the way that nearly the entire world searches the Internet.

Adam Epstein—the president and co-CEO of adMarketplace, which bills itself as “the largest consumer search technology company outside of Google and Bing”—told Ars that innovations in search could result in a broader landscape of more dynamic search experiences that draw from sources beyond Google and allow searchers to skip Google’s SERPs entirely. If that happens, the coming years could make Google’s ubiquitous search experience today a distant memory.

“By the end of this decade, going to a search engine results page will seem quaint,” Epstein predicted. “The court’s decision sets the stage for a remedy that will dramatically improve the search experience for everyone connected to the web. The era of innovation in search is just around the corner.”

The DOJ has not meaningfully discussed potential remedies it will seek, but Jonathan Kanter, assistant attorney general of the Justice Department’s antitrust division, celebrated the ruling.

“This landmark decision holds Google accountable,” Kanter said. “It paves the path for innovation for generations to come and protects access to information for all Americans.”

Path to precision: Targeted cancer drugs go from table to trials to bedside

Aurich Lawson

In 1972, Janet Rowley sat at her dining room table and cut tiny chromosomes from photographs she had taken in her laboratory. One by one, she snipped out the small figures her children teasingly called paper dolls. She then carefully laid them out in 23 matching pairs—and warned her kids not to sneeze.

The physician-scientist had just mastered a new chromosome-staining technique in a year-long sabbatical at Oxford. But it was in the dining room of her Chicago home where she made the discovery that would dramatically alter the course of cancer research.

Rowley’s 1973 partial karyotype showing the 9;22 translocation

Looking over the chromosomes of a patient with acute myeloid leukemia (AML), she realized that segments of chromosomes 8 and 21 had broken off and swapped places—a genetic trade called a translocation. She looked at the chromosomes of other AML patients and saw the same switch: the 8;21 translocation.

Later that same year, she saw another translocation, this time in patients with a different type of blood cancer, called chronic myelogenous leukemia (CML). Patients with CML were known to carry a puzzling abnormality in chromosome 22 that made it appear shorter than normal. The abnormality was called the Philadelphia chromosome after its discovery by two researchers in Philadelphia in 1959. But it wasn’t until Rowley pored over her meticulously set dining table that it became clear why chromosome 22 was shorter—a chunk of it had broken off and traded places with a small section of chromosome 9, a 9;22 translocation.

Rowley had the first evidence that genetic abnormalities were the cause of cancer. She published her findings in 1973, with the CML translocation published in a single-author study in Nature. In the years that followed, she strongly advocated for the idea that the abnormalities were significant for cancer. But she was initially met with skepticism. At the time, many researchers considered chromosomal abnormalities to be a result of cancer, not the other way around. Rowley’s findings were rejected by the prestigious New England Journal of Medicine. “I got sort of amused tolerance at the beginning,” she said before her death in 2013.

The birth of targeted treatments

But the evidence mounted quickly. In 1977, Rowley and two of her colleagues at the University of Chicago identified another chromosomal translocation—15;17—that causes a rare blood cancer called acute promyelocytic leukemia. By 1990, over 70 translocations had been identified in cancers.

The significance mounted quickly as well. Following Rowley’s discovery of the 9;22 translocation in CML, researchers figured out that the genetic swap creates a fusion of two genes. Part of the ABL gene normally found on chromosome 9 becomes attached to the BCR gene on chromosome 22, creating the cancer-driving BCR::ABL fusion gene on chromosome 22. This genetic merger codes for a signaling protein—a tyrosine kinase—that is permanently stuck in “active” mode. As such, it perpetually triggers signaling pathways that lead white blood cells to grow uncontrollably.

Schematic of the 9;22 translocation and the creation of the BCR::ABL fusion gene.

By the mid-1990s, researchers had developed a drug that blocks the BCR-ABL protein, a tyrosine kinase inhibitor (TKI) called imatinib. For patients in the chronic phase of CML—about 90 percent of CML patients—imatinib raised the 10-year survival rate from less than 50 percent to a little over 80 percent. Imatinib (sold as Gleevec or Glivec) earned approval from the Food and Drug Administration in 2001, marking the first approval for a cancer therapy targeting a known genetic alteration.

With imatinib’s success, targeted cancer therapies—aka precision medicine—took off. By the early 2000s, there was widespread interest among researchers to precisely identify the genetic underpinnings of cancer. At the same time, the revolutionary development of next-generation genetic sequencing acted like jet fuel for the soaring field. The technology eased the identification of mutations and genetic abnormalities driving cancers. Sequencing is now considered standard care in the diagnosis, treatment, and management of many cancers.

The development of gene-targeting cancer therapies skyrocketed. Classes of TKIs, like imatinib, expanded particularly fast. There are now over 50 FDA-approved TKIs targeting a wide variety of cancers. For instance, the TKIs lapatinib, neratinib, tucatinib, and pyrotinib target human epidermal growth factor receptor 2 (HER2), which runs amok in some breast and gastric cancers. The TKI ruxolitinib targets Janus kinase 2, which is often mutated in the rare blood cancer myelofibrosis and the slow-growing blood cancer polycythemia vera. CML patients, meanwhile, now have five TKI therapies to choose from.

A few weeks with the Pocket 386, an early-‘90s-style, half-busted retro PC

The Pocket 386 is fun for a while, but the shortcomings and the broken stuff eventually start to wear on you.

Andrew Cunningham

The Book 8088 was a neat experiment, but as a clone of the original IBM PC, it was pretty limited in what it could do. Early MS-DOS apps and games worked fine, and the very first Windows versions ran… technically. Just not the later ones that could actually run Windows software.

The Pocket 386 laptop is a lot like the Book 8088, but fast-forwarded to the next huge evolution in the PC’s development. Intel’s 80386 processors not only jumped from 16-bit operation to 32-bit, but they implemented different memory modes that could take advantage of many megabytes of memory while maintaining compatibility with apps that only recognized the first 640KB.

Expanded software compatibility makes this one more appealing to retro-computing enthusiasts since (like a vintage 386) it will do just about everything an 8088 can do, with the added benefit of a whole lot more speed and much better compatibility with seminal versions of Windows. It’s much more convenient to have all this hardware squeezed into a little laptop than in a big, clunky vintage desktop with slowly dying capacitors in it.

But as with the Book 8088, there are implementation problems. Some of them are dealbreakers. The Pocket 386 is still an interesting curio, but some of what’s broken makes it too unreliable and frustrating to really be usable as a vintage system once the novelty wears off.

The 80386

A close-up of the Pocket 386’s tiny keyboard.

Andrew Cunningham

When we talked about the Book 8088, most of our discussion revolved around a single PC: the 1981 IBM PC 5150, the original machine from which a wave of “IBM compatibles” and the modern PC industry sprung. Restricted to 1MB of RAM and 16-bit applications—most of which could only access the first 640KB of memory—the limits of an 8088-based PC mean there are only so many operating systems and applications you can realistically run.

The 80386 is seven years newer than the original 8086, and it’s capable of a whole lot more. The CPU came with many upgrades over the 8086 and 80286, but there are three that are particularly relevant for us: for one, it’s a 32-bit processor capable of addressing up to 4GB of RAM (strictly in theory, for vintage software). It introduced a much-improved “protected mode” that allowed for improved multitasking and the use of virtual memory. And it also included a so-called virtual 8086 mode, which could run multiple “real mode” MS-DOS applications simultaneously from within an operating system running in protected mode.

The result is a chip that is backward-compatible with the vast majority of software that could run on an 8088- or 8086-based PC—notwithstanding certain games or apps written specifically for the old IBM PC’s 4.77 MHz clock speed or other quirks particular to its hardware—but with the power necessary to credibly run some operating systems with graphical user interfaces.

Moving on to the Pocket 386’s specific implementation of the CPU, this is an 80386SX, the weaker of the two 386 variants. You might recall that the Intel 8088 CPU was still a 16-bit processor internally, but it used an 8-bit external bus to cut down on costs, retaining software compatibility with the 8086 but reducing the speed of communication between the CPU and other components in the system. The 386SX is the same way—like the more powerful 80386DX, it remained a 32-bit processor internally, capable of running 32-bit software. But it was connected to the rest of the system by a 16-bit external bus, which limited its performance. The amount of RAM it could address was also limited to 16MB.
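Those RAM ceilings fall straight out of address-bus width, and the arithmetic is easy to check. Here’s a quick back-of-the-envelope sketch in Python; the 20-bit (8088) and 24-bit (386SX) address-bus figures aren’t stated above and are assumptions drawn from the chips’ published specs, but they’re the standard explanation for the 1MB and 16MB limits.

```python
# Back-of-the-envelope: how many bytes each address-bus width can reach.
# Bus widths are assumptions (published chip specs), not details from the review itself.

def addressable_bytes(address_lines: int) -> int:
    """Total bytes reachable with the given number of address lines."""
    return 2 ** address_lines

chips = {
    "Intel 8088 (20 address lines, assumed)": 20,
    "Intel 80386SX (24 address lines, assumed)": 24,
    "Intel 80386DX (32-bit addressing)": 32,
}

for name, lines in chips.items():
    size = addressable_bytes(lines)
    print(f"{name}: {size:,} bytes = {size / 2**20:,.0f} MB")

# Intel 8088 (20 address lines, assumed): 1,048,576 bytes = 1 MB
# Intel 80386SX (24 address lines, assumed): 16,777,216 bytes = 16 MB
# Intel 80386DX (32-bit addressing): 4,294,967,296 bytes = 4,096 MB
```

The same numbers line up with the figures above: 4,096MB is the 386’s theoretical 4GB ceiling, and the 640KB barrier sits inside the 8088’s 1MB map because the upper 384KB was reserved for video memory, the BIOS, and expansion ROMs.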

(This DX/SX split is the source of some confusion; in the 486 generation, the DX suffix was used to denote a chip with a built-in floating-point unit, while 486SX processors didn’t include one. Both 386 variants still required a separate FPU for people who wanted one, the Intel 80387.)

While the Book 8088 uses vintage PC processors (usually a NEC V20, a pin-compatible 8088 upgrade), the Pocket 386 is using a slightly different version of the 80386SX core that wouldn’t have appeared in actual consumer PCs. Manufactured by a company called ALi, the M6117C is a late-’90s version of the 386SX core combined with a chipset intended for embedded systems rather than consumer PCs.

The Summit 1 is not peak mountain bike, but it’s a great all-rounder

Image of a blue hard tail mountain bike leaning against a grey stone wall.

John Timmer

As I mentioned in another recent review, I’ve been checking out electric hardtail mountain bikes lately. Their relative simplicity compared to full-suspension models tends to allow companies to hit a lower price point without sacrificing much in terms of component quality, potentially opening up mountain biking to people who might not otherwise consider it. The first e-hardtail I checked out, Aventon’s Ramblas, fits this description to a T, offering a solid trail riding experience at a price that’s competitive with similar offerings from major manufacturers.

Velotric’s Summit 1 has a slightly different take on the equation. The company has made a few compromises that allowed it to bring the price down to just under $2,000, which is significantly lower than a lot of the competition. The result is something that’s a bit of a step down on some more challenging trails. But it still can do about 90 percent of what most alternatives offer, and it’s probably a better all-around bicycle for people who intend to also use it for commuting or errand-running.

Making the Summit

Velotric is another e-bike-only company, and we’ve generally been impressed by its products, which offer a fair bit of value for their price. The Summit 1 seems to be a reworking of its T-series of bikes (which also impressed us) into mountain bike form. You get a similar app experience and integration of the bike into Apple’s Find My system, though the company has ditched the thumbprint reader, which is supposed to function as a security measure. Velotric has also done some nice work adapting its packaging to smooth out the assembly process, placing different parts in labeled sub-boxes.

Velotric has made it easier to find what you need during assembly.

John Timmer

These didn’t help me avoid all glitches during assembly, though. I ended up having to take apart the front light assembly and remove the handlebar clamp to get the light attached to the bike—all contrary to the instructions. And connecting the color-coded electric cables was more difficult than necessary because two cables had the same color. But it only started up in one of the possible combinations, so it wasn’t difficult to sort out.

The Summit 1’s frame is remarkably similar to the Ramblas; if there wasn’t branding on it, you might need to resort to looking over the components to figure out which one you were looking at. Like the Ramblas, it has a removable battery with a cover that protects from splashes, but it probably won’t stay watertight through any significant fords. The bike also lacks an XL size option, and as usual, the Large was just a bit small for my legs.

The biggest visible difference is at the cranks, which is not where the motor resides on the Summit. Instead, you’ll find that on the rear hub, which typically means a slight step down in performance, though it is often considerably cheaper. For the Summit, the step down seemed very slight. I could definitely feel it in some contexts, but I’m pretty unusual in terms of the number of different hub and mid-motor configurations I’ve experienced (which is my way of saying that most people would never notice).

The Summit 1 has a hub motor on the rear wheel and a relatively compact set of gears.

John Timmer

There are a number of additional price/performance compromises to be found. The biggest is the drivetrain in the back, which has a relatively paltry eight gears and lacks the very large gear rings you’d typically find on mountain bikes without a front derailleur—meaning almost all of them these days. This isn’t as much of a problem as it might seem because the bike is built around a power assist that can easily handle the sort of hills those big gear rings were meant for. But it is an indication of the ways Velotric has kept its costs down. Those gears are paired with a Shimano Altus rear derailleur, which is controlled by a standard dual-trigger shifter and a plastic indicator to track which gear you’re in.

The bike also lacks a dropper seatpost that would let you get the saddle out of your way during bouncy descents. Because the frame was small for me anyway, I didn’t really feel its absence. The Summit does have a dedicated mountain bike fork from a Chinese manufacturer called YDH that includes an easy-to-access dial that lets you adjust the degree of cushioning you get on the fly. One nice touch is a setting that locks the fork if you’re going to be on smooth pavement for a while. I’m not sure who makes the rims, as I was unable to interpret the graphics on them. But the tires were well-labeled with Kenda, a brand that shows up on a number of other mountain bikes.

Overall, it wasn’t that hard to spot the places Velotric made compromises to bring the bike in at under $2,000. The striking thing was just how few of them there were. The obvious question is whether you’d notice them in practice. We’ll get back to that after we go over the bike’s electronics.

Secure Boot is completely broken on 200+ models from 5 big device makers

sasha85ru | Getty Images

In 2012, an industry-wide coalition of hardware and software makers adopted Secure Boot to protect against a long-looming security threat. The threat was the specter of malware that could infect the BIOS, the firmware that loaded the operating system each time a computer booted up. From there, it could remain immune to detection and removal and could load even before the OS and security apps did.

The threat of such BIOS-dwelling malware was largely theoretical and fueled in large part by the creation of ICLord Bioskit by a Chinese researcher in 2007. ICLord was a rootkit, a class of malware that gains and maintains stealthy root access by subverting key protections built into the operating system. The proof of concept demonstrated that such BIOS rootkits weren’t only feasible; they were also powerful. In 2011, the threat became a reality with the discovery of Mebromi, the first-known BIOS rootkit to be used in the wild.

Keenly aware of Mebromi and its potential for a devastating new class of attack, the Secure Boot architects hashed out a complex new way to shore up security in the pre-boot environment. Built into UEFI—the Unified Extensible Firmware Interface that would become the successor to BIOS—Secure Boot used public-key cryptography to block the loading of any code that wasn’t signed with a pre-approved digital signature. To this day, key players in security—among them Microsoft and the US National Security Agency—regard Secure Boot as an important, if not essential, foundation of trust in securing devices in some of the most critical environments, including in industrial control and enterprise networks.
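To make the signing idea concrete, here’s a deliberately simplified sketch in Python (using the widely available `cryptography` package) of the check at the heart of Secure Boot: boot code is signed with a private key, and the firmware will only run what the matching public key can verify. Real UEFI Secure Boot involves a hierarchy of keys and signature databases (PK, KEK, db/dbx) rather than a single pair, so treat this as an illustration of the concept, not the actual implementation.

```python
# Minimal illustration of "verify before you boot"—not UEFI's real implementation.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# Hypothetical stand-in for a device maker's platform key pair.
platform_private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
platform_public_key = platform_private_key.public_key()  # the half baked into firmware

boot_image = b"...bootloader bytes..."

# Signing happens in the vendor's build system, using the private half.
signature = platform_private_key.sign(boot_image, padding.PKCS1v15(), hashes.SHA256())

def firmware_allows(image: bytes, sig: bytes) -> bool:
    """Firmware-side check: only the public half is needed to verify."""
    try:
        platform_public_key.verify(sig, image, padding.PKCS1v15(), hashes.SHA256())
        return True   # signature checks out; load the image
    except InvalidSignature:
        return False  # unsigned or tampered code; refuse to boot

print(firmware_allows(boot_image, signature))            # True
print(firmware_allows(b"malicious bootkit", signature))  # False
```

The security of the whole scheme rests on the private half staying private; as described below, that is exactly the part that leaked.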

An unlimited Secure Boot bypass

On Thursday, researchers from security firm Binarly revealed that Secure Boot is completely compromised on more than 200 device models sold by Acer, Dell, Gigabyte, Intel, and Supermicro. The cause: a cryptographic key underpinning Secure Boot on those models that was compromised in 2022. In a public GitHub repository committed in December of that year, someone working for multiple US-based device manufacturers published what’s known as a platform key, the cryptographic key that forms the root-of-trust anchor between the hardware device and the firmware that runs on it. The repository was located at https://github.com/raywu-aaeon/Ryzen2000_4000.git, and it’s not clear when it was taken down.

The repository included the private portion of the platform key in encrypted form. The encrypted file, however, was protected by a four-character password, a decision that made it trivial for Binarly, and anyone else with even a passing curiosity, to crack the passcode and retrieve the corresponding plain text. The disclosure of the key went largely unnoticed until January 2023, when Binarly researchers found it while investigating a supply-chain incident. Now that the leak has come to light, security experts say it effectively torpedoes the security assurances offered by Secure Boot.
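To get a feel for why a four-character password is no obstacle, consider the size of the keyspace. Binarly’s disclosure doesn’t say which characters the password used, so the character sets and guess rate below are illustrative assumptions; the point is that even the most generous case is exhausted in seconds on ordinary hardware.

```python
# Rough sketch of why a 4-character password falls instantly to brute force.
# Character sets and guess rate are assumptions, not details from Binarly's findings.
length = 4
charsets = {
    "printable ASCII (95 chars)": 95,
    "alphanumeric (62 chars)": 62,
}
guesses_per_second = 1_000_000  # deliberately modest for an offline attack

for label, alphabet in charsets.items():
    keyspace = alphabet ** length
    seconds = keyspace / guesses_per_second
    print(f"{label}: {keyspace:,} candidates, ~{seconds:.0f} s at 1M guesses/sec")

# printable ASCII (95 chars): 81,450,625 candidates, ~81 s at 1M guesses/sec
# alphanumeric (62 chars): 14,776,336 candidates, ~15 s at 1M guesses/sec
```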

“It’s a big problem,” said Martin Smolár, a malware analyst specializing in rootkits who reviewed the Binarly research and spoke to me about it. “It’s basically an unlimited Secure Boot bypass for these devices that use this platform key. So until device manufacturers or OEMs provide firmware updates, anyone can basically… execute any malware or untrusted code during system boot. Of course, privileged access is required, but that’s not a problem in many cases.”

Binarly researchers said their scans of firmware images uncovered 215 devices that use the compromised key, which can be identified by the certificate serial number 55:fb:ef:87:81:23:00:84:47:17:0b:b3:cd:87:3a:f4. A table appearing at the end of this article lists each one.

The researchers soon discovered that the compromise of the key was just the beginning of a much bigger supply-chain breakdown that raises serious doubts about the integrity of Secure Boot on more than 300 additional device models from virtually all major device manufacturers. As is the case with the platform key compromised in the 2022 GitHub leak, an additional 21 platform keys contain the strings “DO NOT SHIP” or “DO NOT TRUST.”

Test certificate provided by AMI.

Binarly

SpaceX just stomped the competition for a new contract—that’s not great

A rocket sits on a launch pad during a purple- and gold-streaked dawn.

With Dragon and Falcon, SpaceX has become an essential contractor for NASA.

SpaceX

There is an emerging truth about NASA’s push toward commercial contracts that is increasingly difficult to escape: Companies not named SpaceX are struggling with NASA’s approach of awarding firm, fixed-price contracts for space services.

This belief is underscored by the recent award of an $843 million contract to SpaceX for a heavily modified Dragon spacecraft that will be used to deorbit the International Space Station by 2030.

The recently released source selection statement for the “US Deorbit Vehicle” contract, a process led by NASA head of space operations Ken Bowersox, reveals that the competition was a total stomp. SpaceX faced just a single serious competitor in this process, Northrop Grumman. And in all three categories—price, mission suitability, and past performance—SpaceX significantly outclassed Northrop.

Although it’s wonderful that NASA has an excellent contractor in SpaceX, it’s not healthy in the long term that there are so few credible competitors. Moreover, a careful reading of the source selection statement reveals that NASA had to really work to get a competition at all.

“I was really happy that we got proposals from the companies that we did,” Bowersox said during a media teleconference last week. “The companies that sent us proposals are both great companies, and it was awesome to see that interest. I would have expected a few more [proposals], honestly, but I was very happy to get the ones that we got.”

Commercial initiatives struggling

NASA’s push into “commercial” space began nearly two decades ago with a program to deliver cargo to the International Space Station. The space agency initially selected SpaceX and Rocketplane Kistler to develop rockets and spacecraft to accomplish this, but after Kistler missed milestones, the company was subsequently replaced by Orbital Sciences Corporation. The cargo delivery program was largely successful, resulting in the Cargo Dragon (SpaceX) and Cygnus (Orbital Sciences) spacecraft. It continues to this day.

A commercial approach generally means that NASA pays a “fixed” price for a service rather than paying a contractor’s costs plus a fee. It also means that NASA hopes to become one of many customers. The idea is that, as the first mover, NASA is helping to stimulate a market by which its fixed-priced contractors can also sell their services to other entities—both private companies and other space agencies.

NASA has since extended this commercial approach to crew, with SpaceX and Boeing winning large contracts in 2014. However, only SpaceX has flown operational astronaut missions, while Boeing remains in the development and test phase, with its ongoing Crew Flight Test. Whereas SpaceX has sold half a dozen private crewed missions on Dragon, Boeing has yet to announce any.

Such a commercial approach has also been tried with lunar cargo delivery through the “Commercial Lunar Payload Services” program, as well as larger lunar landers (Human Landing System), next-generation spacesuits, and commercial space stations. Each of these programs has a mixed record at best. For example, NASA’s inspector general was highly critical of the lunar cargo program in a recent report, and one of the two spacesuit contractors, Collins Aerospace, recently dropped out because it could not execute on its fixed-price contract.

Some of NASA’s most important traditional space contractors, including Lockheed Martin, Boeing, and Northrop Grumman, have all said they are reconsidering whether to participate in fixed-price contract competitions in the future. For example, Northrop CEO Kathy Warden said last August, “We are being even more disciplined moving forward in ensuring that we work with the government to have the appropriate use of fixed-price contracts.”

So the large traditional space contractors don’t like fixed-price contracts, and many new space companies are struggling to survive in this environment.

We’re building nuclear spaceships again—this time for real 

Artist concept of the Demonstration for Rocket to Agile Cislunar Operations (DRACO) spacecraft.

DARPA

Phoebus 2A, the most powerful space nuclear reactor ever made, was fired up at the Nevada Test Site on June 26, 1968. The test lasted 750 seconds and confirmed it could carry the first humans to Mars. But Phoebus 2A did not take anyone to Mars. It was too large, it cost too much, and it didn’t mesh with Nixon’s idea that we had no business going anywhere further than low-Earth orbit.

But it wasn’t NASA that first called for rockets with nuclear engines. It was the military that wanted to use them for intercontinental ballistic missiles. And now, the military wants them again.

Nuclear-powered ICBMs

The work on nuclear thermal rockets (NTRs) started with the Rover program initiated by the US Air Force in the mid-1950s. The concept was simple on paper. Take tanks of liquid hydrogen and use turbopumps to feed this hydrogen through a nuclear reactor core to heat it up to very high temperatures and expel it through the nozzle to generate thrust. Instead of causing the gas to heat and expand by burning it in a combustion chamber, the gas was heated by coming into contact with a nuclear reactor.

Tokino, vectorized by CommiM at en.wikipedia

The key advantage was fuel efficiency. “Specific impulse,” a measurement that’s something like the gas mileage of a rocket, could be calculated from the square root of the exhaust gas temperature divided by the molecular weight of the propellant. This meant the most efficient propellant for rockets was hydrogen because it had the lowest molecular weight.
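Written out as a formula (a proportionality only—the full expression also involves the gas constant and the ratio of specific heats):

```latex
% Specific impulse scaling for a thermal rocket:
%   T_c = exhaust (chamber) temperature, M = propellant molecular weight
I_{\mathrm{sp}} \propto \sqrt{\frac{T_c}{M}}
```

At the same temperature, pure hydrogen (M ≈ 2) beats the water vapor produced by burning hydrogen with oxygen (M ≈ 18) by a factor of three in this scaling, which is consistent with the “at least twice as efficient” figure below once you account for chemical combustion running hotter than a reactor core.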

In chemical rockets, hydrogen had to be mixed with an oxidizer, which increased the total molecular weight of the propellant but was necessary for combustion to happen. Nuclear rockets didn’t need combustion and could work with pure hydrogen, which made them at least twice as efficient. The Air Force wanted to efficiently deliver nuclear warheads to targets around the world.

The problem was that running stationary reactors on Earth was one thing; making them fly was quite another.

Space reactor challenge

Fuel rods made with uranium 235 oxide distributed in a metal or ceramic matrix comprise the core of a standard fission reactor. Fission happens when a slow-moving neutron is absorbed by a uranium 235 nucleus and splits it into two lighter nuclei, releasing huge amounts of energy and excess, very fast neutrons. These excess neutrons normally don’t trigger further fissions, as they move too fast to get absorbed by other uranium nuclei.

Starting a chain reaction that keeps the reactor going depends on slowing them down with a moderator, like water, that “moderates” their speed. This reaction is kept at moderate levels using control rods made of neutron-absorbing materials, usually boron or cadmium, that limit the number of neutrons that can trigger fission. Reactors are dialed up or down by moving the control rods in and out of the core.

Translating any of this to a flying reactor is a challenge. The first problem is the fuel. The hotter you make the exhaust gas, the more you increase specific impulse, so NTRs needed the core to operate at temperatures reaching 3,000 K—nearly 1,800 K higher than ground-based reactors. Manufacturing fuel rods that could survive such temperatures proved extremely difficult.

Then there was the hydrogen itself, which is extremely corrosive at these temperatures, especially when interacting with those few materials that are stable at 3,000 K. Finally, standard control rods had to go, too, because on the ground, they were gravitationally dropped into the core, and that wouldn’t work in flight.

Los Alamos Scientific Laboratory proposed a few promising NTR designs that addressed all these issues in 1955 and 1956, but the program really picked up pace after it was transferred to NASA and the Atomic Energy Commission (AEC) in 1958. There, the idea was rebranded as NERVA, Nuclear Engine for Rocket Vehicle Applications. NASA and the AEC, blessed with a nearly unlimited budget, got busy building space reactors—lots of them.

Gazelle Eclipse C380+ e-bike review: A smart, smooth ride at a halting price

Gazelle Eclipse C380+ HMB review —

It’s a powerful, comfortable, fun, and very smart ride. Is that enough?

Gazelle Eclipse C380+ in front of a railing, overlooking a river crosswalk in Navy Yard, Washington, D.C.

Kevin Purdy

Let me get three negative points about the Gazelle Eclipse out of the way first. First, it’s a 62-pound e-bike, so it’s tough to get moving without its battery. Second, its rack is a thick, non-standard size, so you might need new bags for it. Third—and this is the big one—with its $6,000 suggested retail price, it’s expensive, and you will probably feel nervous about locking it anywhere you don’t completely trust.

Apart from those issues, though, this e-bike is great fun. When I rode the Eclipse (the C380+ HMB version of it), I felt like Batman on a day off, or maybe Bruce Wayne doing reconnaissance as a bike enthusiast. The matte gray color, the black hardware, and the understated but impressively advanced tech certainly helped. But I felt prepared to handle anything that was thrown at me without having to think about it much. Brutally steep hills, poorly maintained gravel paths, curbs, stop lights, or friends trying to outrun me on their light road bikes—the Eclipse was ready.

It assists up to 28 miles per hour (i.e., Class 3) and provides up to 85 Nm of torque, and the front suspension absorbs shocks without shaking your grip confidence. It has integrated lights, the display can show you navigation while your phone is tucked away, and the automatic assist changing option balances your mechanical gears and motor assist levels, leaving you to just pedal and look.

  • The little shifter guy, who will take a few rides to get used to, is either really clever or overthinking it.

  • The Bosch Kiox 300 is the only screen I’ve had on an e-bike that I ever put time into customizing and optimizing.

  • The drivetrain on the C380+ is a remarkable thing, and it’s well-hidden inside matte aluminum.

  • The shocks on the Eclipse are well-tuned for rough roads, if not actual mountains. (The author is aware the headlamp was at an angle in this shot).

  • The electric assist changer on the left handlebar, and the little built-in bell that you always end up replacing on new e-bikes for something much louder.

    Kevin Purdy

What kind of bike is this? A fun one.

The Eclipse comes in two main variants, the 11-speed, chain-and-derailleur model T11+ HMB and the stepless Enviolo hub and Gates Carbon belt-based C380+ HMB. Both come in three sizes (45, 50, and 55 cm), in one of two colors (Anthracite Grey for either model, plus Thyme Green for the T11+ or Metallic Orange for the C380+), and in either a low-step or high-step version, the latter with a sloping top bar. Most e-bikes come in two sizes if you’re lucky, typically “Medium” and “Large,” and their suggested height spans are far too generous. The T11+ starts at $5,500 and the C380+ starts at $6,000.

The Eclipse’s posture is an “active” one, seemingly halfway between the upright Dutch style and a traditional road or flat-bar bike. It’s perfect for this kind of ride. The front shocks have a maximum of 75 mm of travel, which won’t impress your buddies riding real trails but will make gravel, dirt, wooden bridges, and woodland trails a real possibility. Everything about the Eclipse tells you to stop worrying about whether you have the right kind of bike for a ride and just start pedaling.

“But I’m really into exercise riding, and I need lots of metrics and data, during and after the ride,” I hear some of you straw people saying. That’s why the Eclipse has the Bosch Kiox 300, a center display that is, for an e-bike, remarkably readable, navigable, and informative. You can see your max and average speed, distance, which assist levels you spent time in, power output, cadence, and more. You can push navigation directions from Komoot or standard maps apps from your phone to the display, using Bosch’s Flow app. And, of course, you can connect to Strava.

Halfway between maximum efficiency and careless joyriding, the Eclipse offers a feature that I can only hope makes it down to cheaper e-bikes over time: automatic assist changing. Bikes that have both gears and motor assist levels can sometimes leave you guessing as to which one you should change when approaching a hill or starting from a dead stop. Set the Eclipse to automatic assist and you only have to worry about the right-hand grip shifter. There are no gear numbers; there is a little guy on a bike, and as you raise or lower the gearing, the road he’s approaching gets steeper or flatter.

Peer review is essential for science. Unfortunately, it’s broken.

Aurich Lawson | Getty Images

Rescuing Science: Restoring Trust in an Age of Doubt was the most difficult book I’ve ever written. I’m a cosmologist—I study the origins, structure, and evolution of the Universe. I love science. I live and breathe science. If science were a breakfast cereal, I’d eat it every morning. And at the height of the COVID-19 pandemic, I watched in alarm as public trust in science disintegrated.

But I don’t know how to change people’s minds. I don’t know how to convince someone to trust science again. So as I started writing my book, I flipped the question around: is there anything we can do to make the institution of science more worthy of trust?

The short answer is yes. The long answer takes an entire book. In the book, I explore several different sources of mistrust—the disincentives scientists face when they try to communicate with the public, the lack of long-term careers, the complicity of scientists when their work is politicized, and much more—and offer proactive steps we can take to address these issues to rebuild trust.

The section below is taken from a chapter discussing the relentless pressure to publish that scientists face, and the corresponding explosion in fraud that this pressure creates. Fraud can take many forms, from the “hard fraud” of outright fabrication of data, to many kinds of “soft fraud” that include plagiarism, manipulation of data, and careful selection of methods to achieve a desired result. The more that fraud thrives, the more that the public loses trust in science. Addressing this requires a fundamental shift in the incentive and reward structures that scientists work in. A difficult task to be sure, but not an impossible one—and one that I firmly believe will be worth the effort.

Modern science is hard, complex, and built from many layers and many years of hard work. And modern science, almost everywhere, is based on computation. Save for a few (and I mean very few) die-hard theorists who insist on writing things down with pen and paper, there is almost an absolute guarantee that with any paper in any field of science that you could possibly read, a computer was involved in some step of the process.

Whether it’s studying bird droppings or the collisions of galaxies, modern-day science owes its very existence—and continued persistence—to the computer. From the laptop sitting on an unkempt desk to a giant machine that fills up a room, “S. Transistor” should be the coauthor on basically all three million journal articles published every year.

The sheer complexity of modern science, and its reliance on customized software, renders one of the frontline defenses against soft and hard fraud useless. That defense is peer review.

The practice of peer review was developed in a different era, when the arguments and analysis that led to a paper’s conclusion could be succinctly summarized within the paper itself. Want to know how the author arrived at that conclusion? The derivation would be right there. It was relatively easy to judge the “wrongness” of an article because you could follow the document from beginning to end and have all the information you needed to evaluate it right there at your fingertips.

That’s now largely impossible with the modern scientific enterprise so reliant on computers.

To make matters worse, many of the software codes used in science are not publicly available. I’ll say this again because it’s kind of wild to even contemplate: there are millions of papers published every year that rely on computer software to make the results happen, and that software is not available for other scientists to scrutinize to see if it’s legit or not. We simply have to trust it, but the word “trust” is very near the bottom of the scientist’s priority list.

Why don’t scientists make their code available? It boils down to the same reason that scientists don’t do many things that would improve the process of science: there’s no incentive. In this case, you don’t get any h-index points for releasing your code on a website. You only get them for publishing papers.

This infinitely agitates me when I peer-review papers. How am I supposed to judge the correctness of an article if I can’t see the entire process? What’s the point of searching for fraud when the computer code that’s sitting behind the published result can be shaped and molded to give any result you want, and nobody will be the wiser?

I’m not even talking about intentional computer-based fraud here; this is even a problem for detecting basic mistakes. If you make a mistake in a paper, a referee or an editor can spot it. And science is better off for it. If you make a mistake in your code… who checks it? As long as the results look correct, you’ll go ahead and publish it and the peer reviewer will go ahead and accept it. And science is worse off for it.

Science is getting more complex over time and is becoming increasingly reliant on software code to keep the engine going. This makes fraud of both the hard and soft varieties easier to accomplish. From mistakes that you pass over because you’re going too fast, to using sophisticated tools that you barely understand but use to get the result that you wanted, to just totally faking it, science is becoming increasingly wrong.

The Yellowstone supervolcano destroyed an ecosystem but saved it for us

Set in stone —

50 years of excavation unveiled the story of a catastrophic event and its aftermath.

Interior view of the Rhino Barn. Exposed fossil skeletons left in-situ for research and public viewing.

Rick E. Otto, University of Nebraska State Museum

Death was everywhere. Animal corpses littered the landscape and were mired in the local waterhole as ash swept around everything in its path. For some, death happened quickly; for others, it was slow and painful.

This was the scene in the aftermath of a supervolcanic eruption in Idaho, approximately 1,600 kilometers (900 miles) away. It was an eruption so powerful that it obliterated the volcano itself, leaving a crater 80 kilometers (50 miles) wide and spewing clouds of ash that the wind carried over long distances, killing almost everything that inhaled it. This was particularly true here, in this location in Nebraska, where animals large and small succumbed to the eruption’s deadly emissions.

Eventually, all traces of this horrific event were buried; life continued, evolved, and changed. That’s why, millions of years later in the summer of 1971, Michael Voorhies was able to enjoy another delightful day of exploring.

Finding rhinos

He was, as he had been each summer between academic years, creating a geologic map of his hometown in Nebraska. This meant going from farm to farm and asking if he could walk through the property to survey the rocks and look for fossils. “I’m basically just a kid at heart, and being a paleontologist in the summer was my idea of heaven,” Voorhies, now retired from the University of Georgia, told Ars.

What caught his eye on one particular farm was a layer of volcanic ash—something treasured by geologists and paleontologists, who use it to get the age of deposits. But as he got closer, he also noticed exposed bone. “Finding what was obviously a lower jaw which was still attached to the skull, now that was really quite interesting!” he said. “Mostly what you find are isolated bones and teeth.”

That skull belonged to a juvenile rhino. Voorhies and some of his students returned to the site to dig further, uncovering the rest of the rhino’s completely articulated remains (meaning the bones of its skeleton were connected as they would be in life). More digging produced the intact skeletons of another five or six rhinos. That was enough to get National Geographic funding for a massive excavation that took place between 1978 and 1979. Crews amassed, among numerous other animals, the remarkable total of 70 complete rhino skeletons.

To put this into perspective, most fossil sites—even spectacular locations preserving multiple animals—are composed primarily of disarticulated skeletons, puzzle pieces that paleontologists painstakingly put back together. Here, however, was something no other site had ever before produced: vast numbers of complete skeletons preserved where they died.

Realizing there was still more yet to uncover, Voorhies and others appealed to the larger Nebraska community to help preserve the area. Thanks to hard work and substantial local donations, the Ashfall Fossil Beds park opened to the public in 1991, staffed by two full-time employees.

Fossils discovered are now left in situ, meaning they remain exposed exactly where they are found, protected by a massive structure called the Hubbard Rhino Barn. Excavations are conducted within the barn at a much slower and steadier pace than those in the ’70s due in large part to the small, rotating number of seasonal employees—mostly college students—who excavate further each summer.

The Rhino Barn protects the fossil bed from the elements.

Photos by Rick E. Otto, University of Nebraska State Museum

A full ecosystem

Almost 50 years of excavation and research have unveiled the story of a catastrophic event and its aftermath, which took place in a Nebraska that nobody would recognize—one where species like rhinoceros, camels, and saber-toothed deer were a common sight.

But to understand that story, we have to set the stage. The area we know today as Ashfall Fossil Beds was actually a waterhole during the Miocene, one frequented by a diversity of animals. We know this because there are fossils of those animals in a layer of sand at the very bottom of the waterhole, a layer that was not impacted by the supervolcanic eruption.

Rick Otto was one of the students who excavated fossils in 1978. He became Ashfall’s superintendent in 1991 and retired in late 2023. “There were animals dying a natural death around the Ashfall waterhole before the volcanic ash storm took place,” Otto told Ars, which explains the fossils found in that sand. After being scavenged, their bodies may have been trampled by some of the megafauna visiting the waterhole, which would have “worked those bones into the sand.”
