

Flour, water, salt, GitHub: The Bread Code is a sourdough baking framework

One year ago, I didn’t know how to bake bread. I just knew how to follow a recipe.

If everything went perfectly, I could turn out something plain but palatable. But should anything change—temperature, timing, flour, Mercury being in Scorpio—I’d turn out a partly poofy pancake. I presented my partly poofy pancakes to people, and they were polite, but those platters were not particularly palatable.

During a group vacation last year, a friend made fresh sourdough loaves every day, and we devoured them. He gladly shared his knowledge, his starter, and his go-to recipe. I took it home, tried it out, and made a naturally leavened, artisanal pancake.

I took my confusion to YouTube, where I found Hendrik Kleinwächter’s “The Bread Code” channel and his video promising a course on “Your First Sourdough Bread.” I watched and learned a lot, but I couldn’t quite translate 30 minutes of intensive couch time to hours of mixing, raising, slicing, and baking. Pancakes, part three.

It felt like there had to be more to this. And there was—a whole GitHub repository more.

The Bread Code gave Kleinwächter a gratifying second career, and it’s given me bread I’m eager to serve people. This week alone, I’m making sourdough Parker House rolls, a rosemary olive loaf for Friendsgiving, and then a za’atar flatbread and standard wheat loaf for actual Thanksgiving. And each of us has learned more about perhaps the most important aspect of coding, bread, teaching, and lots of other things: patience.

Hendrik Kleinwächter on his Bread Code channel, explaining his book.

Resources, not recipes

The Bread Code is centered around a book, The Sourdough Framework. It’s an open source codebase that self-compiles into new LaTeX book editions and is free to read online. It has one real bread loaf recipe, if you can call a 68-page middle-section journey a recipe. It has 17 flowcharts, 15 tables, and dozens of timelines, process illustrations, and photos of sourdough going both well and terribly. Like any cookbook, there’s a bit about Kleinwächter’s history with this food, and some sourdough bread history. Then the reader is dropped straight into “How Sourdough Works,” which is in no way a summary.

“To understand the many enzymatic reactions that take place when flour and water are mixed, we must first understand seeds and their role in the lifecycle of wheat and other grains,” Kleinwächter writes. From there, we follow a seed through hibernation, germination, photosynthesis, and, through humans’ grinding of these seeds, exposure to amylase and protease enzymes.

I had arrived at this book with these specific loaf problems to address. But first, it asks me to consider, “What is wheat?” This sparked vivid memories of Computer Science 114, in which a professor, asked to troubleshoot misbehaving code, would instead tell students to “Think like a compiler,” or “Consider the recursive way to do it.”

And yet, “What is wheat” did help. Having a sense of what was happening inside my starter, and my dough (which is really just a big, slow starter), helped me diagnose what was going right or wrong with my breads. Extra-sticky dough and tightly arrayed holes in the bread meant I had let the bacteria win out over the yeast. I learned when to be rough with the dough to form gluten and when to gently guide it into shape to preserve its gas-filled form.

I could eat a slice of each loaf and get a sense of how things had gone. The inputs, outputs, and errors could be ascertained and analyzed more easily than in my prior stance, which was, roughly, “This starter is cursed and so am I.” Using hydration percentages, measurements relative to protein content, a few tests, and troubleshooting steps, I could move closer to fresh, delicious bread. Framework: accomplished.
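For the curious, here’s roughly what that math looks like: a minimal baker’s-percentage sketch in Python, with hypothetical quantities rather than anything taken from Kleinwächter’s book.

```python
# Baker's percentages: every ingredient is expressed relative to total flour
# weight, which is what makes loaves comparable (and debuggable) across batches.
# The quantities below are hypothetical, not a recipe from The Sourdough Framework.

def bakers_percentages(flour_g: float, water_g: float, salt_g: float, starter_g: float) -> dict:
    """Return hydration, salt, and inoculation as percentages of flour weight."""
    return {
        "hydration_pct": 100 * water_g / flour_g,
        "salt_pct": 100 * salt_g / flour_g,
        "inoculation_pct": 100 * starter_g / flour_g,  # how much starter seeds the dough
    }

print(bakers_percentages(flour_g=500, water_g=375, salt_g=10, starter_g=100))
# {'hydration_pct': 75.0, 'salt_pct': 2.0, 'inoculation_pct': 20.0}
```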

I have found myself very grateful lately that Kleinwächter did not find success with 30-minute YouTube tutorials. Strangely, so has he.

Sometimes weird scoring looks pretty neat. Credit: Kevin Purdy

The slow bread of childhood dreams

“I have had some successful startups; I have also had disastrous startups,” Kleinwächter said in an interview. “I have made some money, then I’ve been poor again. I’ve done so many things.”

Most of those things involve software. Kleinwächter is a German full-stack engineer, and he has founded firms and worked at companies related to blogging, e-commerce, food ordering, travel, and health. He tried to escape the boom-bust startup cycle by starting his own digital agency before one of his products was acquired by hotel booking firm Trivago. After that, he needed a break—and he could afford to take one.

“I went to Naples, worked there in a pizzeria for a week, and just figured out, ‘What do I want to do with my life?’ And I found my passion. My passion is to teach people how to make amazing bread and pizza at home,” Kleinwächter said.

Kleinwächter’s formative bread experiences—weekend loaves baked by his mother, awe-inspiring pizza from Italian ski towns, discovering all the extra ingredients in a supermarket’s version of the dark Schwarzbrot—made him want to bake his own. Like me, he started with recipes, and he wasted a lot of time and flour on attempts that produced both failures and a drive for knowledge. He dug in, learned as much as he could, and once he had his head around the how and why, he worked on a way to guide others along the path.

Bugs and syntax errors in baking

When using recipes, there’s a strong, societally reinforced idea that there is one best, tested, and timed way to arrive at a finished food. That’s why we have America’s Test Kitchen, The Food Lab, and all manner of blogs and videos promoting food “hacks.” I should know; I wrote up a whole bunch of them as a young Lifehacker writer. I’m still a fan of such things, from the standpoint of simply getting food done.

As such, the ultimate “hack” for making bread is to use commercial yeast, i.e., dried “active” or “instant” yeast. A manufacturer has done the work of selecting and isolating yeast at its prime state and preserving it for you. Get your liquids and dough to a yeast-friendly temperature and you’ve removed most of the variables; your success should be repeatable. If you just want bread, you can make the iconic no-knead bread with prepared yeast and very little intervention, and you’ll probably get bread that’s better than you can get at the grocery store.

Baking sourdough—or “naturally leavened,” or with “levain”—means a lot of intervention. You are cultivating and maintaining a small ecosystem of yeast and bacteria, unleashing them onto flour, water, and salt, and stepping in after they’ve produced enough flavor and lift—but before they eat all the stretchy gluten bonds. What that looks like depends on many things: your water, your flours, what you fed your starter, how active it was when you added it, the air in your home, and other variables. Most important is your ability to notice things over long periods of time.

When things go wrong, debugging can be tricky. I was able to personally ask Kleinwächter what was up with my bread, because I was interviewing him for this article. There were many potential answers, including:

  • I should recognize, first off, that I was trying to bake the hardest kind of bread: freestanding wheat-based sourdough
  • You have to watch—and smell—your starter to make sure it has the right balance of yeast to bacteria before you use it
  • Using less starter (lower “inoculation”) would make it easier not to over-ferment
  • Eyeballing my dough rise in a bowl was hard; try measuring a sample in something like an aliquot tube
  • Winter and summer call for very different dough timings, even with modern indoor climate control

But I kept at it. I was particularly susceptible to wanting things to go quicker and demanding to see a huge rise in my dough before baking. This ironically leads to the flattest results, as the bacteria eat all the gluten bonds. When I slowed down, changed just one thing at a time, and looked deeper into my results, I got better.

The Bread Code YouTube page and the ways in which one must cater to algorithms. Credit: The Bread Code

YouTube faces and TikTok sausage

Emailing and trading video responses with Kleinwächter, I got the sense that he, too, has learned to go the slow, steady route with his Bread Code project.

For a while, he was turning out YouTube videos, and he wanted them to work. “I’m very data-driven and very analytical. I always read the video metrics, and I try to optimize my videos,” Kleinwächter said. “Which means I have to use a clickbait title, and I have to use a clickbait-y thumbnail, plus I need to make sure that I catch people in the first 30 seconds of the video.” This, however, is “not good for us as humans because it leads to more and more extreme content.”

Kleinwächter also dabbled in TikTok, making videos in which, leaning into his German heritage, “the idea was to turn everything into a sausage.” The metrics and imperatives on TikTok were similar to those on YouTube but hyperscaled. He could put hours or days into a video, only for 1 percent of his 200,000 YouTube subscribers to see it unless he caught the algorithm wind.

The frustrations inspired him to slow down and focus on his site and his book. With his community’s help, The Bread Code has just finished its second Kickstarter-backed printing run of 2,000 copies. There’s a Discord full of bread heads eager to diagnose and correct each other’s loaves, and the repository gets occasional pull requests from inspired readers. Kleinwächter has seen people go from buying what he calls “Turbo bread” at the store to making their own, and that’s what keeps him going. He’s not gambling on an attention-getting hit, but he’s in better control of how his knowledge and message get out.

“I think homemade bread is something that’s super, super undervalued, and I see a lot of benefits to making it yourself,” Kleinwächter said. “Good bread just contains flour, water, and salt—nothing else.”

A test loaf of rosemary olive sourdough bread. An uneven amount of olive bits ended up on the top and bottom, because there is always more to learn. Credit: Kevin Purdy

You gotta keep doing it—that’s the hard part

I can’t say it has been entirely smooth sailing ever since I self-certified with The Bread Code framework. I know what level of fermentation I’m aiming for, but I sometimes get home from an outing later than planned, arriving at dough that’s trying to escape its bucket. My starter can be very temperamental when my house gets dry and chilly in the winter. And my dough slicing (scoring), being the very last step before baking, can be rushed, resulting in some loaves with weird “ears,” not quite ready for the bakery window.

But that’s all part of it. Your sourdough starter is a collection of organisms that are best suited to what you’ve fed them, developed over time, shaped by their environment. There are some modern hacks that can help make good bread, like using a pH meter. But the big hack is just doing it, learning from it, and getting better at figuring out what’s going on. I’m thankful that folks like Kleinwächter are out there encouraging folks like me to slow down, hack less, and learn more.


Found in the wild: The world’s first unkillable UEFI bootkit for Linux

Over the past decade, a new class of infections has threatened Windows users. By infecting the firmware that runs immediately before the operating system loads, these UEFI bootkits continue to run even when the hard drive is replaced or reformatted. Now the same type of chip-dwelling malware has been found in the wild for backdooring Linux machines.

Researchers at security firm ESET said Wednesday that Bootkitty—the name unknown threat actors gave to their Linux bootkit—was uploaded to VirusTotal earlier this month. Compared to its Windows cousins, Bootkitty is still relatively rudimentary, containing imperfections in key under-the-hood functionality and lacking the means to infect Linux distributions other than Ubuntu. That has led the company’s researchers to suspect the new bootkit is likely a proof-of-concept release. To date, ESET has found no evidence of actual infections in the wild.

The ASCII logo that Bootkitty is capable of rendering. Credit: ESET

Be prepared

Still, Bootkitty suggests threat actors may be actively developing a Linux version of the same sort of unkillable bootkit that previously was found only targeting Windows machines.

“Whether a proof of concept or not, Bootkitty marks an interesting move forward in the UEFI threat landscape, breaking the belief about modern UEFI bootkits being Windows-exclusive threats,” ESET researchers wrote. “Even though the current version from VirusTotal does not, at the moment, represent a real threat to the majority of Linux systems, it emphasizes the necessity of being prepared for potential future threats.”

A rootkit is a piece of malware that runs in the deepest regions of the operating system it infects. It leverages this strategic position to hide information about its presence from the operating system itself. A bootkit, meanwhile, is malware that infects the boot-up process in much the same way. Bootkits for the UEFI—short for Unified Extensible Firmware Interface—lurk in the chip-resident firmware that runs each time a machine boots. These sorts of bootkits can persist indefinitely, providing a stealthy means for backdooring the operating system even before it has fully loaded and enabled security defenses such as antivirus software.

The bar for installing a bootkit is high. An attacker first must gain administrative control of the targeted machine, either through physical access while it’s unlocked or by somehow exploiting a critical vulnerability in the OS. Under those circumstances, attackers already have the ability to install OS-resident malware. Bootkits, however, are much more powerful since they (1) run before the OS does and (2) are, at least practically speaking, undetectable and unremovable.
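For Linux users who want a basic sanity check of their own machines, here is a minimal sketch (an assumption-laden illustration, not anything from ESET’s write-up) that reads the firmware’s SecureBoot variable through efivarfs. It only reports whether Secure Boot is enabled; it is not a bootkit detector.

```python
# Minimal sketch: read the UEFI SecureBoot variable via Linux's efivarfs.
# This only reports whether Secure Boot is enabled; it does not detect bootkits.
from pathlib import Path

# SecureBoot is stored under the standard EFI global-variable GUID.
VAR = Path("/sys/firmware/efi/efivars/SecureBoot-8be4df61-93ca-11d2-aa0d-00e098032b8c")

def secure_boot_enabled():
    if not VAR.exists():
        return None  # legacy BIOS boot, or efivarfs not mounted
    data = VAR.read_bytes()
    # efivarfs files begin with a 4-byte attributes field; the payload follows.
    return data[4] == 1

state = secure_boot_enabled()
print({True: "Secure Boot: enabled",
       False: "Secure Boot: disabled",
       None: "No UEFI SecureBoot variable found"}[state])
```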


FCC approves Starlink plan for cellular phone service, with some limits

Eliminating cellular dead zones

Starlink says it will offer texting service this year, with voice and data services following in 2025. Starlink does not yet have FCC approval to exceed certain emissions limits; the company has said those limits will be detrimental to real-time voice and video communications.

For the operations approved yesterday, Starlink is required to coordinate with other spectrum users and cease transmissions when any harmful interference is detected. “We hope to activate employee beta service in the US soon,” wrote Ben Longmier, SpaceX’s senior director of satellite engineering.

Longmier made a pitch to cellular carriers. “Any telco that signs up with Starlink Direct to Cell can completely eliminate cellular dead zones for their entire country for text and data services. This includes coastal waterways and the ocean areas in between land for island nations,” he wrote.

Starlink launched its first satellites with cellular capabilities in January 2024. “Of the more than 2,600 Gen2 Starlink satellites in low Earth orbit, around 320 are equipped with a direct-to-smartphone payload, enough to enable the texting services SpaceX has said it could launch this year,” SpaceNews wrote yesterday.

Yesterday’s FCC order also lets Starlink operate up to 7,500 second-generation satellites at altitudes between 340 km and 360 km, in addition to the previously approved altitudes between 525 km and 535 km. SpaceX is seeking approval for another 22,488 satellites, but the FCC continued to defer action on that request. The FCC order said:

Authorization to permit SpaceX to operate up to 7,500 Gen2 satellites in lower altitude shells will enable SpaceX to begin providing lower-latency satellite service to support growing demand in rural and remote areas that lack terrestrial wireless service options. This partial grant also strikes the right balance between allowing SpaceX’s operations at lower altitudes to provide low-latency satellite service and permitting the Commission to continue to monitor SpaceX’s constellation and evaluate issues previously raised on the record.
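The latency argument is largely geometry. As a rough, back-of-the-envelope illustration (ignoring processing, routing, and the ground network entirely), the minimum round-trip propagation delay scales with altitude:

```python
# Rough minimum round-trip propagation delay to a satellite directly overhead.
# Ignores processing, inter-satellite routing, and the terrestrial network.
SPEED_OF_LIGHT_KM_PER_MS = 299_792.458 / 1000

def min_rtt_ms(altitude_km: float) -> float:
    return 2 * altitude_km / SPEED_OF_LIGHT_KM_PER_MS  # up and back down

for alt_km in (340, 360, 525, 535):  # altitude shells from the FCC order
    print(f"{alt_km} km shell: ~{min_rtt_ms(alt_km):.2f} ms minimum round trip")
# The 340-360 km shells shave a bit over a millisecond off the physical floor
# compared with the 525-535 km shells.
```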

Coordination with NASA

SpaceX is required to coordinate “with NASA to ensure protection of the International Space Station (ISS), ISS visiting vehicles, and launch windows for NASA science missions,” the FCC said. “SpaceX may only deploy and operate at altitudes below 400 km the total number of satellites for which it has completed physical coordination with NASA under the parties’ Space Act Agreement.”


Google’s plan to keep AI out of search trial remedies isn’t going very well


DOJ: AI is not its own market

Judge: AI will likely play “larger role” in Google search remedies as market shifts.

Google got some disappointing news at a status conference Tuesday, where US District Judge Amit Mehta suggested that Google’s AI products may be restricted as an appropriate remedy following the government’s win in the search monopoly trial.

According to Law360, Mehta said that “the recent emergence of AI products that are intended to mimic the functionality of search engines” is rapidly shifting the search market. Because the judge is now weighing preventive measures to combat Google’s anticompetitive behavior, the judge wants to hear much more about how each side views AI’s role in Google’s search empire during the remedies stage of litigation than he did during the search trial.

“AI and the integration of AI is only going to play a much larger role, it seems to me, in the remedy phase than it did in the liability phase,” Mehta said. “Is that because of the remedies being requested? Perhaps. But is it also potentially because the market that we have all been discussing has shifted?”

To fight the DOJ’s proposed remedies, Google is seemingly dragging its major AI rivals into the trial. Trying to prove that remedies would harm Google’s ability to compete, the tech company is currently trying to pry into Microsoft’s AI deals, including its $13 billion investment in OpenAI, Law360 reported. At least preliminarily, Mehta has agreed that information Google is seeking from rivals has “core relevance” to the remedies litigation, Law360 reported.

The DOJ has asked for a wide range of remedies to stop Google from potentially using AI to entrench its market dominance in search and search text advertising. They include a ban on exclusive agreements with publishers to train on content, which the DOJ fears might allow Google to block AI rivals from licensing data, potentially posing a barrier to entry in both markets. Under the proposed remedies, Google would also face restrictions on investments in or acquisitions of AI products, as well as mergers with AI companies.

Additionally, the DOJ wants Mehta to stop Google from any potential self-preferencing, such as making an AI product mandatory on Android devices Google controls or preventing a rival from distribution on Android devices.

The government seems very concerned that Google may use its ownership of Android to play games in the emerging AI sector. It has further recommended an order preventing Google from discouraging partners from working with rivals, degrading the quality of rivals’ AI products on Android devices, or otherwise “coercing” manufacturers or other Android partners into giving Google’s AI products “better treatment.”

Importantly, if the court orders AI remedies linked to Google’s control of Android, Google could risk a forced sale of Android, since Mehta could grant the DOJ’s request for “contingent structural relief” requiring divestiture of Android if behavioral remedies fail to end the current monopolies.

Finally, the government wants Google to be required to allow publishers to opt out of AI training without impacting their search rankings. (Currently, opting out of AI scraping automatically opts sites out of Google search indexing.)

All of this, the DOJ alleged, is necessary to clear the way for a thriving search market as AI stands to shake up the competitive landscape.

“The promise of new technologies, including advances in artificial intelligence (AI), may present an opportunity for fresh competition,” the DOJ said in a court filing. “But only a comprehensive set of remedies can thaw the ecosystem and finally reverse years of anticompetitive effects.”

At the status conference Tuesday, DOJ attorney David Dahlquist reiterated to Mehta that these remedies are needed so that Google’s illegal conduct in search doesn’t extend to this “new frontier” of search, Law360 reported. Dahlquist also clarified that the DOJ views these kinds of AI products “as new access points for search, rather than a whole new market.”

“We’re very concerned about Google’s conduct being a barrier to entry,” Dahlquist said.

Google could not immediately be reached for comment. But the search giant has maintained that AI is beyond the scope of the search trial.

During the status conference, Google attorney John E. Schmidtlein disputed that AI remedies are relevant. While he agreed that “AI is key to the future of search,” he warned that “extraordinary” proposed remedies would “hobble” Google’s AI innovation, Law360 reported.

Microsoft shields confidential AI deals

Microsoft is predictably protective of its AI deals, arguing in a court filing that its “highly confidential agreements with OpenAI, Perplexity AI, Inflection, and G42 are not relevant to the issues being litigated” in the Google trial.

According to Microsoft, Google is arguing that it needs this information to “shed light” on things like “the extent to which the OpenAI partnership has driven new traffic to Bing and otherwise affected Microsoft’s competitive standing” or what’s required by “terms upon which Bing powers functionality incorporated into Perplexity’s search service.”

These insights, Google seemingly hopes, will convince Mehta that Google’s AI deals and investments are the norm in the AI search sector. But Microsoft is currently blocking access, arguing that “Google has done nothing to explain why” it “needs access to the terms of Microsoft’s highly confidential agreements with other third parties” when Microsoft has already offered to share documents “regarding the distribution and competitive position” of its AI products.

Microsoft also opposes Google’s attempts to review how search click-and-query data is used to train OpenAI’s models. Those requests would be better directed at OpenAI, Microsoft said.

If Microsoft gets its way, Google’s discovery requests will be limited to just Microsoft’s content licensing agreements for Copilot. Microsoft alleged those are the only deals “related to the general search or the general search text advertising markets” at issue in the trial.

On Tuesday, Microsoft attorney Julia Chapman told Mehta that Microsoft had “agreed to provide documents about the data used to train its own AI model and also raised concerns about the competitive sensitivity of Microsoft’s agreements with AI companies,” Law360 reported.

It remains unclear at this time if OpenAI will be forced to give Google the click-and-query data Google seeks. At the status hearing, Mehta ordered OpenAI to share “financial statements, information about the training data for ChatGPT, and assessments of the company’s competitive position,” Law360 reported.

But the DOJ may also be interested in seeing that data. In their proposed final judgment, the government forecasted that “query-based AI solutions” will “provide the most likely long-term path for a new generation of search competitors.”

Because of that prediction, any remedy “must prevent Google from frustrating or circumventing” court-ordered changes “by manipulating the development and deployment of new technologies like query-based AI solutions.” Emerging rivals “will depend on the absence of anticompetitive constraints to evolve into full-fledged competitors and competitive threats,” the DOJ alleged.

Mehta seemingly wants to see the evidence supporting the DOJ’s predictions, which could end up exposing carefully guarded secrets of both Google’s and its biggest rivals’ AI deals.

On Tuesday, the judge noted that integration of AI into search engines had already evolved what search results pages look like. And from his “very layperson’s perspective,” it seems like AI’s integration into search engines will continue moving “very quickly,” as both parties seem to agree.

Whether he buys into the DOJ’s theory that Google could use its existing advantage as the world’s greatest gatherer of search query data to block rivals from keeping pace is still up in the air, but the judge seems moved by the DOJ’s claim that “AI has the ability to affect market dynamics in these industries today as well as tomorrow.”


Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.


After telling Cadillac to pound sand, F1 does 180, grants entry for 2026

The United States will have a second team competing in Formula 1 from 2026, when Cadillac Formula 1 will join the sport as its 11th team. The result is a complete 180 for the sport’s owner, which was highly resistant to the initial bid, first announced at the beginning of 2023.

“As the pinnacle of motorsports, F1 demands boundary-pushing innovation and excellence. It’s an honor for General Motors and Cadillac to join the world’s premier racing series, and we’re committed to competing with passion and integrity to elevate the sport for race fans around the world,” said GM President Mark Reuss. “This is a global stage for us to demonstrate GM’s engineering expertise and technology leadership at an entirely new level.”

Team first, engines later

We will have to wait until 2028 to see that full engineering potential on display. Even with the incoming changes to the technical regulations, developing a new F1 hybrid powertrain, let alone a competitive package, takes far more than a moment. Audi has been working on its F1 powertrain since at least 2023, as has Red Bull, which decided to make its internal combustion engine in-house, like Ferrari or Mercedes, with partner Ford providing the electrification.

GM’s decision to throw Cadillac’s hat into the ring came with the caveat that its powertrain wouldn’t be ready until 2028—two years after it actually wants to enter the sport. That means for 2026 and 2027, Cadillac F1 will use customer engines from another manufacturer, in this case Ferrari. From 2028, we can expect a GM-designed V6 hybrid under Cadillac F1’s engine covers.

As McLaren has demonstrated this year, customer powertrains are no impediment to success, and Alpine (née Renault) is going so far as to give up its own in-house powertrain program in favor of customer engines (and, most likely, a “for sale” sign, as the French automaker looks set to walk away from the sport once again).


NASA awards SpaceX a contract for one of the few things it hasn’t done yet

Notably, the Dragonfly launch was one of the first times United Launch Alliance has been eligible to bid its new Vulcan rocket for a NASA launch contract. NASA officials gave the green light for the Vulcan rocket to compete head-to-head with SpaceX’s Falcon 9 and Falcon Heavy after ULA’s new launcher had a successful debut launch earlier this year. With this competition, SpaceX came out on top.

A half-life of 88 years

NASA’s policy for new space missions is to use solar power whenever possible. For example, Europa Clipper was originally supposed to use a nuclear power generator, but engineers devised a way for the spacecraft to use expansive solar panels to capture enough sunlight to produce electricity, even at Jupiter’s vast distance from the Sun.

But there are some missions where this isn’t feasible. One of these is Dragonfly, which will soar through the soupy nitrogen-methane atmosphere of Titan. Saturn’s largest moon is shrouded in cloud cover, and Titan is nearly 10 times farther from the Sun than Earth, so its surface is comparatively dim.

The Dragonfly mission, seen here in an artist’s concept, is slated to launch no earlier than 2027 on a mission to explore Saturn’s moon Titan. Credit: NASA/JHUAPL/Steve Gribben

Dragonfly will launch with about 10.6 pounds (4.8 kilograms) of plutonium-238 to fuel its power generator. Plutonium-238 has a half-life of 88 years. With no moving parts, RTGs have proven quite reliable, powering spacecraft for many decades. NASA’s twin Voyager probes are approaching 50 years since launch.
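That 88-year half-life translates into a slow, predictable fade in the fuel’s heat output. A quick back-of-the-envelope calculation, covering radioactive decay only (real RTG electrical output also drops as thermocouples degrade):

```python
# Fraction of the original plutonium-238 heat output remaining after t years,
# using the 88-year half-life. This covers radioactive decay only; real RTG
# electrical output falls faster because the thermocouples also degrade.
HALF_LIFE_YEARS = 88

def pu238_fraction_remaining(years: float) -> float:
    return 0.5 ** (years / HALF_LIFE_YEARS)

for t in (6, 20, 50):  # roughly: cruise to Titan, extended ops, Voyager-scale lifetimes
    print(f"After {t:>2} years: {pu238_fraction_remaining(t):.1%} of launch-day heat output")
```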

The Dragonfly rotorcraft will launch cocooned inside a transit module and entry capsule, then descend under parachute through Titan’s atmosphere, which is four times denser than Earth’s. Finally, Dragonfly will detach from its descent module and activate its eight rotors to reach a safe landing.

Once on Titan, Dragonfly is designed to hop from place to place on numerous flights, exploring environments rich in organic molecules, the building blocks of life. This is one of NASA’s most exciting, and daring, robotic missions of all time.

After launching from NASA’s Kennedy Space Center in Florida in July 2028, Dragonfly will take about six years to reach Titan. When NASA selected the Dragonfly mission to begin development in 2019, the agency hoped to launch the mission in 2026. NASA later directed Dragonfly managers to target a launch in 2027, and then 2028, requiring the mission to change from a medium-lift to a heavy-lift rocket.

Dragonfly has also faced rising costs, which NASA blames on the COVID-19 pandemic, supply chain issues, and an in-depth redesign since the mission’s selection in 2019. Collectively, these issues caused Dragonfly’s total budget to grow to $3.35 billion, more than double its initial projected cost.


We’re closer to re-creating the sounds of Parasaurolophus

The duck-billed dinosaur Parasaurolophus is distinctive for its prominent crest, which some scientists have suggested served as a kind of resonating chamber to produce low-frequency sounds. Nobody really knows what Parasaurolophus sounded like, however. Hongjun Lin of New York University is trying to change that by constructing his own model of the dinosaur’s crest and its acoustical characteristics. Lin has not yet reproduced the call of Parasaurolophus, but he talked about his progress thus far at a virtual meeting of the Acoustical Society of America.
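The physics behind the resonating-chamber idea is simple enough to sketch. Treating the crest’s internal passage as a tube closed at one end gives a quarter-wave estimate of its fundamental pitch; to be clear, this is not Lin’s model, and the passage lengths below are illustrative assumptions.

```python
# Quarter-wave resonator estimate: treat the crest's internal passage as a tube
# closed at one end. This is not Lin's model; the lengths are illustrative guesses.
SPEED_OF_SOUND_M_S = 343.0  # in air at roughly room temperature

def fundamental_hz(tube_length_m: float) -> float:
    return SPEED_OF_SOUND_M_S / (4 * tube_length_m)  # f1 = v / (4L)

for length_m in (1.0, 2.0, 3.0):  # hypothetical effective passage lengths
    print(f"{length_m:.0f} m passage -> fundamental near {fundamental_hz(length_m):.0f} Hz")
# Longer internal passages push the fundamental lower, which is why a long,
# looping crest is a plausible source of low-frequency calls.
```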

Lin was inspired in part by the dinosaur sounds featured in the Jurassic Park film franchise, which were a combination of sounds from other animals like baby whales and crocodiles. “I’ve been fascinated by giant animals ever since I was a kid. I’d spend hours reading books, watching movies, and imagining what it would be like if dinosaurs were still around today,” he said during a press briefing. “It wasn’t until college that I realized the sounds we hear in movies and shows—while mesmerizing—are completely fabricated using sounds from modern animals. That’s when I decided to dive deeper and explore what dinosaurs might have actually sounded like.”

A skull and partial skeleton of Parasaurolophus were first discovered in 1920 along the Red Deer River in Alberta, Canada, and another partial skull was discovered the following year in New Mexico. There are now three known species of Parasaurolophus; the name means “near crested lizard.” While no complete skeleton has yet been found, paleontologists have concluded that the adult dinosaur likely stood about 16 feet tall and weighed between 6,000 and 8,000 pounds. Parasaurolophus was an herbivore that could walk on all four legs while foraging for food but may have run on two legs.

It’s that distinctive crest that has most fascinated scientists over the last century, particularly its purpose. Past hypotheses have included its use as a snorkel or as a breathing tube while foraging for food; as an air trap to keep water out of the lungs; or as an air reservoir so the dinosaur could remain underwater for longer periods. Other scientists suggested the crest was designed to help move and support the head or perhaps used as a weapon while combating other Parasaurolophus. All of these, plus a few others, have largely been discredited.


Android will soon instantly log you in to your apps on new devices

If you lose your iPhone or buy an upgrade, you could reasonably expect to be up and running after an hour, presuming you backed up your prior model. Your Apple stuff all comes over, sure, but even most of your third-party apps will still be signed in.

Doing the same swap with an Android device is more akin to starting three-quarters fresh. After one or two Android phones, you learn to bake in an extra hour of rapid-fire logging in to all your apps. Password managers, or just using a Google account as your authentication, are a godsend.

That might change relatively soon, as Google has announced a new Restore Credentials feature, which should do what it says in the name. Android apps can “seamlessly onboard users to their accounts on a new device,” with the restore keys handled by Android’s native backup and restore process. The experience, says Google, is “delightful” and seamless. You can even get the same notifications on the new device as you were receiving on the old.


Qubit that makes most errors obvious now available to customers


Can a small machine that makes error correction easier upend the market?

A graphic representation of the two resonance cavities that can hold photons, along with a channel that lets the photon move between them. Credit: Quantum Circuits

We’re nearing the end of the year, and there are typically a flood of announcements regarding quantum computers around now, in part because some companies want to live up to promised schedules. Most of these involve evolutionary improvements on previous generations of hardware. But this year, we have something new: the first company to market with a new qubit technology.

The technology is called a dual-rail qubit, and it is intended to make the most common form of error trivially easy to detect in hardware, thus making error correction far more efficient. And, while tech giant Amazon has been experimenting with them, a startup called Quantum Circuits is the first to give the public access to dual-rail qubits via a cloud service.

While the tech is interesting on its own, it also provides us with a window into how the field as a whole is thinking about getting error-corrected quantum computing to work.

What’s a dual-rail qubit?

Dual-rail qubits are variants of the hardware used in transmons, the qubits favored by companies like Google and IBM. The basic hardware unit links a loop of superconducting wire to a tiny cavity that allows microwave photons to resonate. This setup allows the presence of microwave photons in the resonator to influence the behavior of the current in the wire and vice versa. In a transmon, microwave photons are used to control the current. But there are other companies that have hardware that does the reverse, controlling the state of the photons by altering the current.

Dual-rail qubits use two of these systems linked together, allowing a photon to move from one resonator to the other. Using the superconducting loops, it’s possible to control the probability that the photon will end up in the left or right resonator. The actual location of the photon will remain unknown until it’s measured, allowing the system as a whole to hold a single bit of quantum information—a qubit.

This has an obvious disadvantage: You have to build twice as much hardware for the same number of qubits. So why bother? Because the vast majority of errors involve the loss of the photon, and that’s easily detected. “It’s about 90 percent or more [of the errors],” said Quantum Circuits’ Andrei Petrenko. “So it’s a huge advantage that we have with photon loss over other errors. And that’s actually what makes the error correction a lot more efficient: The fact that photon losses are by far the dominant error.”

Petrenko said that, without doing a measurement that would disrupt the storage of the qubit, it’s possible to determine if there is an odd number of photons in the hardware. If that isn’t the case, you know an error has occurred—most likely a photon loss (gains of photons are rare but do occur). For simple algorithms, this would be a signal to simply start over.

But it does not eliminate the need for error correction if we want to do more complex computations that can’t make it to completion without encountering an error. There’s still the remaining 10 percent of errors, which are primarily something called a phase flip that is distinct to quantum systems. Bit flips are even more rare in dual-rail setups. Finally, simply knowing that a photon was lost doesn’t tell you everything you need to know to fix the problem; error-correction measurements of other parts of the logical qubit are still needed to fix any problems.
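To make that arithmetic concrete, here is a toy simulation of the rough 90/10 split Petrenko describes; the probabilities are placeholders, not measurements from Quantum Circuits’ hardware.

```python
# Toy error model for a dual-rail qubit: roughly 90 percent of faults are photon
# losses that a parity check flags in hardware, while the rest are phase flips
# the check cannot see. The probabilities are placeholders, not measured rates.
import random

P_ERROR = 0.01             # chance that any fault occurs in one operation (made up)
P_LOSS_GIVEN_ERROR = 0.90  # share of faults that are photon loss

def simulate(ops: int, seed: int = 0) -> dict:
    rng = random.Random(seed)
    counts = {"clean": 0, "flagged_photon_loss": 0, "silent_phase_flip": 0}
    for _ in range(ops):
        if rng.random() >= P_ERROR:
            counts["clean"] += 1
        elif rng.random() < P_LOSS_GIVEN_ERROR:
            counts["flagged_photon_loss"] += 1   # detected in hardware: discard or restart
        else:
            counts["silent_phase_flip"] += 1     # needs full error correction to catch
    return counts

print(simulate(100_000))
# Most faults announce themselves, which is what makes error correction on top
# of this hardware so much cheaper.
```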

The layout of the new machine. Each qubit (gray square) involves a left and right resonance chamber (blue dots) that a photon can move between. Each of the qubits has connections that allow entanglement with its nearest neighbors. Credit: Quantum Circuits

In fact, the initial hardware that’s being made available is too small to even approach useful computations. Instead, Quantum Circuits chose to link eight qubits with nearest-neighbor connections in order to allow it to host a single logical qubit that enables error correction. Put differently: this machine is meant to enable people to learn how to use the unique features of dual-rail qubits to improve error correction.

One consequence of having this distinctive hardware is that the software stack that controls operations needs to take advantage of its error detection capabilities. None of the other hardware on the market can be directly queried to determine whether it has encountered an error. So, Quantum Circuits has had to develop its own software stack to allow users to actually benefit from dual-rail qubits. Petrenko said that the company also chose to provide access to its hardware via its own cloud service because it wanted to connect directly with the early adopters in order to better understand their needs and expectations.

Numbers or noise?

Given that a number of companies have already released multiple revisions of their quantum hardware and have scaled them into hundreds of individual qubits, it may seem a bit strange to see a company enter the market now with a machine that has just a handful of qubits. But amazingly, Quantum Circuits isn’t alone in planning a relatively late entry into the market with hardware that only hosts a few qubits.

Having talked with several of them, there is a logic to what they’re doing. What follows is my attempt to convey that logic in a general form, without focusing on any single company’s case.

Everyone agrees that the future of quantum computation is error correction, which requires linking together multiple hardware qubits into a single unit termed a logical qubit. To get really robust, error-free performance, you have two choices. One is to devote lots of hardware qubits to the logical qubit, so you can handle multiple errors at once. Or you can lower the error rate of the hardware, so that you can get a logical qubit with equivalent performance while using fewer hardware qubits. (The two options aren’t mutually exclusive, and everyone will need to do a bit of both.)

The two options pose very different challenges. Improving the hardware error rate means diving into the physics of individual qubits and the hardware that controls them. In other words, getting lasers that have fewer of the inevitable fluctuations in frequency and energy. Or figuring out how to manufacture loops of superconducting wire with fewer defects or handle stray charges on the surface of electronics. These are relatively hard problems.

By contrast, scaling qubit count largely involves being able to consistently do something you already know how to do. So, if you already know how to make good superconducting wire, you simply need to make a few thousand instances of that wire instead of a few dozen. The electronics that trap an atom can be designed so that producing them by the thousands is straightforward. These are mostly engineering problems, and generally of similar complexity to problems we’ve already solved to make the electronics revolution happen.

In other words, within limits, scaling is a much easier problem to solve than errors. It’s still going to be extremely difficult to get the millions of hardware qubits we’d need to error correct complex algorithms on today’s hardware. But if we can get the error rate down a bit, we can use smaller logical qubits and might only need 10,000 hardware qubits, which will be more approachable.
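One way to see that tradeoff is the standard surface-code scaling heuristic, in which the logical error rate falls off as roughly (p/p_th) raised to the (d+1)/2 power for code distance d. The sketch below uses textbook-style assumed constants, not figures from any company mentioned here.

```python
# Surface-code-style heuristic: logical error ~ 0.1 * (p / P_TH) ** ((d + 1) / 2),
# with ~2 * d**2 physical qubits per logical qubit. The threshold, prefactor, and
# target are textbook-style assumptions, not numbers from any company above.
P_TH = 1e-2                  # assumed error-correction threshold
TARGET_LOGICAL_ERROR = 1e-12

def distance_needed(p_phys: float) -> int:
    d = 3
    while 0.1 * (p_phys / P_TH) ** ((d + 1) / 2) > TARGET_LOGICAL_ERROR:
        d += 2  # surface-code distances are odd
    return d

for p_phys in (5e-3, 1e-3, 1e-4):
    d = distance_needed(p_phys)
    print(f"physical error {p_phys:.0e}: distance {d}, ~{2 * d * d} hardware qubits per logical qubit")
# Cutting the physical error rate by an order of magnitude or two shrinks the
# per-logical-qubit overhead dramatically, which is the bet these startups are making.
```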

Errors first

And there’s evidence that even the early entries in quantum computing have reasoned the same way. Google has been working iterations of the same chip design since its 2019 quantum supremacy announcement, focusing on understanding the errors that occur on improved versions of that chip. IBM made hitting the 1,000 qubit mark a major goal but has since been focused on reducing the error rate in smaller processors. Someone at a quantum computing startup once told us it would be trivial to trap more atoms in its hardware and boost the qubit count, but there wasn’t much point in doing so given the error rates of the qubits on the then-current generation machine.

The new companies entering this market now are making the argument that they have a technology that will either radically reduce the error rate or make handling the errors that do occur much easier. Quantum Circuits clearly falls into the latter category, as dual-rail qubits are entirely about making the most common form of error trivial to detect. The former category includes companies like Oxford Ionics, which has indicated it can perform single-qubit gates with a fidelity of over 99.9991 percent. Or Alice & Bob, which stores qubits in the behavior of multiple photons in a single resonance cavity, making them very robust to the loss of individual photons.

These companies are betting that they have distinct technology that will let them handle error rate issues more effectively than established players. That will lower the total scaling they need to do, and scaling will be an easier problem overall—and one that they may already have the pieces in place to handle. Quantum Circuits’ Petrenko, for example, told Ars, “I think that we’re at the point where we’ve gone through a number of iterations of this qubit architecture where we’ve de-risked a number of the engineering roadblocks.” And Oxford Ionics told us that if they could make the electronics they use to trap ions in their hardware once, it would be easy to mass manufacture them.

None of this should imply that these companies will have it easy compared to a startup that already has experience with both reducing errors and scaling, or a giant like Google or IBM that has the resources to do both. But it does explain why, even at this stage in quantum computing’s development, we’re still seeing startups enter the field.


John is Ars Technica’s science editor. He has a Bachelor of Arts in Biochemistry from Columbia University, and a Ph.D. in Molecular and Cell Biology from the University of California, Berkeley. When physically separated from his keyboard, he tends to seek out a bicycle, or a scenic location for communing with his hiking boots.


Automatic braking systems save lives. Now they’ll need to work at 62 mph.

Otherwise, drivers will get mad. “The mainstream manufacturers have to be a little careful because they don’t want to create customer dissatisfaction by making the system too twitchy,” says Brannon, at AAA. Tesla drivers, for example, have proven very tolerant of “beta testing” and quirks. Your average driver, maybe less so.

Based on its own research, IIHS has pushed automakers to install AEB systems able to operate at faster speeds on their cars. Kidd says IIHS research suggests there have been no systemic, industry-wide safety issues with automatic emergency braking. Fewer and fewer drivers seem to be turning off their AEB systems out of annoyance. (The new rules make it so drivers can’t turn them off.) But US regulators have investigated a handful of automakers, including General Motors and Honda, for automatic emergency braking problems that have reportedly injured more than 100 people, though the automakers have reportedly fixed them.

New complexities

Getting cars to fast-brake at even higher speeds will require a series of tech advances, experts say. AEB works by bringing in data from sensors. That information is then turned over to automakers’ custom-tuned classification systems, which are trained to recognize certain situations and road users—that’s a stopped car in the middle of the road up ahead or there’s a person walking across the road up there—and intervene.

So to get AEB to work in higher-speed situations, the tech will have to “see” further down the road. Most of today’s new cars come loaded up with sensors, including cameras and radar, which can collect vital data. But the auto industry trade group argues that the Feds have underestimated the amount of new hardware—including, possibly, more expensive lidar units—that will have to be added to cars.
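Some rough kinematics show why. With an assumed half-second of sensing-and-actuation delay and hard braking at about 0.8 g (illustrative numbers, not anything from the federal rule or a specific automaker), stopping distance grows roughly with the square of speed:

```python
# Rough stopping-distance math for AEB. The latency and deceleration are
# illustrative assumptions, not values from the federal rule or any automaker.
MPH_TO_M_PER_S = 0.44704
SYSTEM_LATENCY_S = 0.5   # assumed sense-classify-actuate delay
DECEL_M_PER_S2 = 7.8     # assumed hard braking, roughly 0.8 g on dry pavement

def stopping_distance_m(speed_mph: float) -> float:
    v = speed_mph * MPH_TO_M_PER_S
    return v * SYSTEM_LATENCY_S + v * v / (2 * DECEL_M_PER_S2)

for mph in (25, 45, 62):
    print(f"{mph} mph: ~{stopping_distance_m(mph):.0f} m to a full stop")
# Braking distance scales with speed squared, so a 62 mph stop requires spotting
# hazards far earlier than today's low-speed scenarios demand.
```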

Brake-makers will have to tinker with components to allow quicker stops, which will require the pressurized fluid that moves through a brake’s hydraulic lines to go even faster. Allowing cars to detect hazards at further distances could require different types of hardware, including sometimes-expensive sensors. “Some vehicles might just need a software update, and some might not have the right sensor suite,” says Bhavana Chakraborty, an engineering director at Bosch, an automotive supplier that builds safety systems. Those without the right hardware will need updates “across the board,” she says, to get to the levels of safety demanded by the federal government.


Apple TV+ spent $20B on original content. If only people actually watched.

For example, Apple TV+ is embracing bundles, which is thought to help prevent subscribers from canceling streaming subscriptions. People can currently get Apple TV+ from a Comcast streaming bundle.

And as of last month people can subscribe to and view Apple TV+ through Amazon Prime Video. As my colleague Samuel Axon explained in October, this contradicts Apple’s long-standing approach to streaming “because Apple has long held ambitions of doing exactly what Amazon is doing here: establishing itself as the central library, viewing, search, and payment hub for a variety of subscription offerings.” But without support from Netflix, “Apple’s attempt to make the TV app a universal hub of content has been continually stymied,” Axon noted.

Something has got to give

With the broader streaming industry dealing with high production costs, disappointed subscribers, and growing competition, Apple, like many stakeholders, is looking for new approaches to entertainment. For Apple, that also reportedly includes fewer theatrical releases.

It may also one day mean joining what some streaming subscribers see as the dark side of streaming: advertisements. Apple TV+ currently remains ad-free, but there are suspicions that this could change, given Apple’s reported recent meeting with the United Kingdom’s TV ratings body about ad tracking and its hiring of ad executives.

Apple’s ad-free platform and comparatively low subscription prices are some of the biggest draws for Apple TV+ subscribers, however, which would make changes to either one controversial.

But after five years on the market and a reported $20 billion in spending, Apple can’t be happy with 0.3 percent of available streaming viewership. Awards and prestige help put Apple TV+ on the map, but Apple needs more subscribers and eyeballs on its expensive content to have a truly successful streaming business.


Cable companies and Trump’s FCC chair agree: Data caps are good for you

Many Internet users filed comments asking the FCC to ban data caps. A coalition of consumer advocacy groups filed comments saying that “data caps are another profit-driving tool for ISPs at the expense of consumers and the public interest.”

“Data caps have a negative impact on all consumers but the effects are felt most acutely in low-income households,” stated comments filed by Public Knowledge, the Open Technology Institute at New America, the Benton Institute for Broadband & Society, and the National Consumer Law Center.

Consumer groups: Caps don’t manage congestion

The consumer groups said the COVID-19 pandemic “made it more apparent how data caps are artificially imposed restrictions that negatively impact consumers, discriminate against the use of certain high-data services, and are not necessary to address network congestion, which is generally not present on home broadband networks.”

“Unlike speed tiers, data caps do not effectively manage network congestion or peak usage times, because they do not influence real-time network load,” the groups also said. “Instead, they enable further price discrimination by pushing consumers toward more expensive plans with higher or unlimited data allowances. They are price discrimination dressed up as network management.”

Jessica Rosenworcel, who has been FCC chairwoman since 2021, argued last month that consumer complaints show the FCC inquiry is necessary. “The mental toll of constantly thinking about how much you use a service that is essential for modern life is real as is the frustration of so many consumers who tell us they believe these caps are costly and unfair,” Rosenworcel said.

ISPs lifting caps during the pandemic “suggest[s] that our networks have the capacity to meet consumer demand without these restrictions,” she said, adding that “some providers do not have them at all” and “others lifted them in network merger conditions.”

Cable companies and Trump’s FCC chair agree: Data caps are good for you Read More »