Cheerios effect inspires novel robot design

There’s a common popular science demonstration involving “soap boats,” in which liquid soap poured onto the surface of water creates a propulsive flow driven by gradients in surface tension. But it doesn’t last very long, since the soapy surfactants rapidly saturate the water surface, eliminating the surface tension gradient that drives the motion. Using ethanol to create similar “cocktail boats” can significantly extend the effect because the alcohol evaporates rather than saturating the water.

That simple classroom demonstration could also be used to propel tiny robotic devices across liquid surfaces to carry out various environmental or industrial tasks, according to a preprint posted to the physics arXiv. The authors also exploited the so-called “Cheerios effect” as a means of self-assembly to create clusters of tiny ethanol-powered robots.

As previously reported, those who love their Cheerios for breakfast are well acquainted with how those last few tasty little “O”s tend to clump together in the bowl: either drifting to the center or to the outer edges. The “Cheerios effect” is found throughout nature, such as in grains of pollen (or, alternatively, mosquito eggs or beetles) floating on top of a pond; small coins floating in a bowl of water; or fire ants clumping together to form life-saving rafts during floods. A 2005 paper in the American Journal of Physics outlined the underlying physics, identifying the culprit as a combination of buoyancy, surface tension, and the so-called “meniscus effect.”

It all adds up to a type of capillary action. Basically, the mass of the Cheerios is insufficient to break the milk’s surface tension. But it’s enough to put a tiny dent in the surface of the milk in the bowl, such that if two Cheerios are sufficiently close, the curved surface in the liquid (meniscus) will cause them to naturally drift toward each other. The “dents” merge and the “O”s clump together. Add another Cheerio into the mix, and it, too, will follow the curvature in the milk to drift toward its fellow “O”s.
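To put a rough number on “sufficiently close” (a back-of-the-envelope estimate from standard capillarity, not a figure from the studies discussed here): the reach of a floating object’s meniscus is set by the liquid’s capillary length,

$$
\ell_c = \sqrt{\frac{\gamma}{\rho g}} \approx \sqrt{\frac{0.072\ \mathrm{N/m}}{(1000\ \mathrm{kg/m^3})(9.8\ \mathrm{m/s^2})}} \approx 2.7\ \mathrm{mm}
$$

for water at room temperature (milk is in the same ballpark). The attraction between two floating objects falls off roughly exponentially once their separation exceeds a few capillary lengths, which is why the drift only kicks in when the “O”s are already near neighbors or a bowl wall.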

Physicists made the first direct measurements of the various forces at work in the phenomenon in 2019, and they found one extra factor underlying the Cheerios effect: the disks tilted toward each other as they drifted closer in the water. The tilt made the disks push harder against the water’s surface, producing a pushback from the liquid, and that pushback increased the attraction between the two disks.

People will share misinformation that sparks “moral outrage”


People can tell it’s not true, but if they’re outraged by it, they’ll share anyway.

Rob Bauer, the chair of a NATO military committee, reportedly said, “It is more competent not to wait, but to hit launchers in Russia in case Russia attacks us. We must strike first.” These comments, supposedly made in 2024, were later interpreted as suggesting NATO should attempt a preemptive strike against Russia, an idea that lots of people found outrageously dangerous.

But lots of people missed one key fact about the quote: Bauer never said it. It was made up. Despite that, the fabricated statement racked up nearly 250,000 views on X and was mindlessly spread further by the likes of Alex Jones.

Why do stories like this get so many views and shares? “The vast majority of misinformation studies assume people want to be accurate, but certain things distract them,” says William J. Brady, a researcher at Northwestern University. “Maybe it’s the social media environment. Maybe they’re not understanding the news, or the sources are confusing them. But what we found is that when content evokes outrage, people are consistently sharing it without even clicking into the article.” Brady co-authored a study on how misinformation exploits outrage to spread online. When we get outraged, the study suggests, we simply care way less if what’s got us outraged is even real.

Tracking the outrage

The rapid spread of misinformation on social media has generally been explained by something you might call an error theory—the idea that people share misinformation by mistake. Based on that, most solutions to the misinformation problem have relied on prompting users to focus on accuracy and think carefully about whether they really want to share stories from dubious sources. Those prompts, however, haven’t worked very well. To get to the root of the problem, Brady’s team analyzed data that tracked over 1 million links on Facebook and nearly 45,000 posts on Twitter from periods ranging from 2017 to 2021.

Parsing through the Twitter data, the team used a machine-learning model to predict which posts would cause outrage. “It was trained on 26,000 tweets posted around 2018 and 2019. We got raters from across the political spectrum, we taught them what we meant by outrage, and got them to label the data we later used to train our model,” Brady says.

The purpose of the model was to predict whether a message was an expression of moral outrage, an emotional state defined in the study as “a mixture of anger and disgust triggered by perceived moral transgressions.” After training, the AI was effective. “It performed as good as humans,” Brady claims. Facebook data was a bit trickier because the team did not have access to comments; all they had to work with were reactions, so the team chose the “angry” reaction as a proxy for outrage. Once the data was sorted into outrageous and non-outrageous categories, Brady and his colleagues went on to determine whether the content was trustworthy news or misinformation.
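The study’s actual model isn’t published with the paper, but the general recipe it describes—supervised text classification trained on human-labeled posts—can be sketched in a few lines. Everything below (the file name, the column names, and the TF-IDF-plus-logistic-regression pipeline) is an illustrative assumption, not the authors’ architecture:

```python
# Minimal sketch of an outrage classifier trained on labeled tweets.
# Hypothetical data file and columns; the study's real model and
# feature set are not public, so treat this as the general pattern.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

df = pd.read_csv("labeled_tweets.csv")  # columns: "text", "is_outrage" (0/1)

X_train, X_test, y_train, y_test = train_test_split(
    df["text"], df["is_outrage"],
    test_size=0.2, stratify=df["is_outrage"], random_state=42,
)

# TF-IDF features plus logistic regression: a standard baseline
# for text classification tasks like this one.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=5),
    LogisticRegression(max_iter=1000),
)
model.fit(X_train, y_train)

# Evaluate against held-out human labels, analogous to the study's
# comparison of model output with its trained raters.
print(classification_report(y_test, model.predict(X_test)))
```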

“We took what is now the most widely used approach in the science of misinformation, which is a domain classification approach,” Brady says. The process boiled down to compiling a list of domains with very high and very low trustworthiness based on work done by fact-checking organizations. This way, for example, The Chicago Sun-Times was classified as trustworthy; Breitbart, not so much. “One of the issues there is that you could have a source that produces misinformation which one time produced a true story. We accepted that. We went with statistics and general rules,” Brady acknowledged. His team confirmed that sources classified in the study as misinformation produced news that was fact-checked as false six to eight times more often than reliable domains, which Brady’s team thought was good enough to work with.

Finally, the researchers started analyzing the data to answer questions like whether misinformation sources evoke more outrage, whether outrageous news was shared more often than non-outrageous news, and finally, what reasons people had for sharing outrageous content. And that’s when the idealized picture of honest, truthful citizens who shared misinformation just because they were too distracted to recognize it started to crack.

Going with the flow

The Facebook and Twitter data analyzed by Brady’s team revealed that misinformation evoked more outrage than trustworthy news. At the same time, people were way more likely to share outrageous content, regardless of whether it was misinformation or not. Putting those two trends together led the team to conclude outrage primarily boosted the spread of fake news since reliable sources usually produced less outrageous content.

“What we know about human psychology is that our attention is drawn to things rooted in deep biases shaped by evolutionary history,” Brady says. Those things are emotional content, surprising content, and especially, content that is related to the domain of morality. “Moral outrage is expressed in response to perceived violations of moral norms. This is our way of signaling to others that the violation has occurred and that we should punish the violators. This is done to establish cooperation in the group,” Brady explains.

This is why outrageous content has an advantage in the social media attention economy. It stands out, and standing out is a precursor to sharing. But there are other reasons we share outrageous content. “It serves very particular social functions,” Brady says. “It’s a cheap way to signal group affiliation or commitment.”

Cheap, however, didn’t mean completely free. The team found that the penalty for sharing misinformation, outrageous or not, was loss of reputation—spewing nonsense doesn’t make you look good, after all. The question was whether people really shared fake news because they failed to identify it as such or if they just considered signaling their affiliation was more important.

Flawed human nature

Brady’s team designed two behavioral experiments in which 1,475 people were presented with a selection of fact-checked news stories curated to include both outrageous and non-outrageous content, drawn from both reliable sources and misinformation. In both experiments, the participants were asked to rate how outrageous the headlines were.

The second task was different, though. In the first experiment, people were simply asked to rate how likely they were to share a headline, while in the second they were asked to determine if the headline was true or not.

It turned out that most people could discern between true and fake news. Yet they were willing to share outrageous news regardless of whether it was true or not—a result that was in line with previous findings from Facebook and Twitter data. Many participants were perfectly OK with sharing outrageous headlines, even though they were fully aware those headlines were misinformation.

Brady pointed to an example from the recent US presidential campaign, when a reporter pressed J.D. Vance about false claims regarding immigrants eating pets. “When the reporter pushed him, he implied that yes, it was fabrication, but it was outrageous and spoke to the issues his constituents were mad about,” Brady says. These experiments show that this kind of dishonesty is not exclusive to politicians running for office—people do it on social media all the time.

The urge to signal a moral stance quite often takes precedence over truth, but misinformation is not exclusively due to flaws in human nature. “One thing this study was not focused on was the impact of social media algorithms,” Brady notes. Those algorithms usually boost content that generates engagement, and we tend to engage more with outrageous content. This, in turn, incentivizes people to make their content more outrageous to get this algorithmic boost.

Science, 2024. DOI: 10.1126/science.adl2829

Jacek Krywko is a freelance science and technology writer who covers space exploration, artificial intelligence research, computer science, and all sorts of engineering wizardry.

Company claims 1,000 percent price hike drove it from VMware to open source rival

Companies have been discussing migrating off of VMware since Broadcom’s takeover a year ago led to higher costs and other controversial changes. Now we have an inside look at one of the larger customers that recently made the move.

According to a report from The Register today, Beeks Group, a cloud operator headquartered in the United Kingdom, has moved most of its 20,000-plus virtual machines (VMs) off VMware and to OpenNebula, an open source cloud and edge computing platform. Beeks Group sells virtual private servers and bare metal servers to financial service providers. It still has some VMware VMs, but “the majority” of its machines are currently on OpenNebula, The Register reported.

Beeks’ head of production management, Matthew Cretney, said that one of the reasons for Beeks’ migration was a VMware bill for “10 times the sum it previously paid for software licenses,” per The Register.

According to Beeks, OpenNebula has enabled the company to dedicate more of its 3,000-strong bare metal server fleet to client workloads instead of VM management, as it had to with VMware. With OpenNebula purportedly requiring less management overhead, Beeks is reporting a 200 percent increase in VM efficiency because it can now run more VMs on each server.

Beeks also pointed to customers viewing VMware as non-essential, as well as a decline in VMware support services and innovation, as drivers for its migration.

Broadcom didn’t respond to Ars Technica’s request for comment.

Broadcom loses VMware customers

Broadcom will likely continue seeing some of VMware’s older customers decrease or abandon reliance on VMware offerings. But Broadcom has emphasized the financial success it has seen (PDF) from its VMware acquisition, suggesting that it will continue with its strategy even at the risk of losing some business.

Vintage digicams aren’t just a fad. They’re an artistic statement.


In the age of AI images, some photographers are embracing the quirky flaws of vintage digital cameras.

Spanish director Isabel Coixet films with a digicam on the red carpet ahead of the premiere of the film “The International” on the opening night of the 59th Berlinale Film Festival in Berlin in 2009. Credit: JOHN MACDOUGALL/AFP via Getty Images

Today’s young adults grew up in a time when their childhoods were documented with smartphone cameras instead of dedicated digital or film cameras. It’s not surprising that, perhaps as a reaction to the ubiquity of the phone, some young creative photographers are leaving their handsets in their pockets in favor of compact point-and-shoot digital cameras—the very type that camera manufacturers are actively discontinuing.

Much of the buzz among this creative class has centered on premium, chic models like the Fujifilm X100 and Ricoh GR, or for the self-anointed “digicam girlies” on TikTok, zoom point-and-shoots like the Canon PowerShot G7 X and Sony RX100 models, which can be great for selfies.

But other shutterbugs are reaching back into the past 20 years or more to add a vintage “Y2K aesthetic” to their work. The MySpace look is strong with a lot of photographers shooting with authentic early-2000s “digicams,” aiming their cameras—flashes a-blazing—at their friends and capturing washed-out, low-resolution, grainy photos that look a whole lot like 2003.

“It’s so wild to me cause I’m an elder millennial,” says Ali O’Keefe, who runs the photography channel Two Months One Camera on YouTube. “My childhood is captured on film … but for [young people], theirs were probably all captured on, like, Canon SD1000s,” she says, referencing a popular mid-aughts point-and-shoot.

It’s not just the retro sensibility they’re after, but also a bit of cool cred. Everyone from Ayo Edebiri to Kendall Jenner is helping fuel digicam fever by publicly taking snaps with a vintage pocket camera.

The rise of the vintage digicam marks at least the second major nostalgia boom in the photography space. More than 15 years ago, a film resurgence brought thousands of cameras from the 1970s and ’80s out of closets and into handbags and backpacks. Companies like Impossible Project and Film Ferrania started up production of Polaroid-compatible and 35-mm film, respectively, firing up manufacturing equipment that otherwise would have been headed to the scrap heap. Traditional film companies like Kodak and Ilford have seen sales skyrocket. Unfortunately, the price of film stock also increased significantly, with film processing also getting more costly. (Getting a roll developed and digitally scanned now typically costs between $15 and $20.)

For those seeking to experiment with their photography, there’s an appeal to using a cheap, old digital model they can shoot with until it stops working. The results are often imperfect, but since the camera is digital, a photographer can mess around and get instant gratification. And for everyone in the vintage digital movement, the fact that the images from these old digicams are worse than those from a smartphone is a feature, not a bug.

What’s a digicam?

One of the biggest points of contention among enthusiasts is the definition of “digicam.” For some, any old digital camera falls under the banner, while other photographers have limited the term’s scope to a specific vintage or type. Sofia Lee, photographer and co-founder of the online community digicam.love, has narrowed her definition over time.

“There’s a separation between what I define as a tool that I will be using in my artistic practice versus what the community at large would consider to be culturally acceptable, like at a meetup,” Lee stated. “I started off looking at any digital camera I could get my hands on. But increasingly I’m focused more on the early 2000s. And actually, I actually keep getting earlier and earlier … I would say from 2000 to 2003 or 2004 maybe.”

Lee has found that she’s best served by funky old point-and-shoot cameras, and she doesn’t use old digital single-lens reflex cameras (DSLRs), which can deliver higher-quality images comparable to today’s equipment. Lee says DSLR images are “too clean, too crisp, too nice” for her work. “When I’m picking a camera, I’m looking for a certain kind of noise, a certain kind of character to them that can’t be reproduced through filters or editing, or some other process,” Lee says. Her all-time favorite model is a forgotten camera from 2001, the Kyocera Finecam S3. A contemporary review gave the model a failing grade, citing its reliance on the then-uncommon SD memory card format, along with its propensity to turn out soft photos lacking in detail.

“It’s easier to say what isn’t a digicam, like DSLRs or cameras with interchangeable lenses,” says Zuzanna Neupauer, a digicam user and member of digicam.love. But the definition gets even narrower from there. “I personally won’t use any new models, and I restrict myself to digicams made before 2010,” Neupauer says.

Not everyone is as partisan. Popular creators Ali O’Keefe and James Warner both cover interchangeable-lens cameras from the 2000s extensively on their YouTube channels, focusing on vintage digital equipment and relishing devices with quirky designs or those that represent evolutionary dead ends. Everything from Sigma’s boxy cameras with exotic sensors to Olympus’ weird early DSLRs based on a short-lived lens system gets attention in their videos. It’s clear that although many vintage enthusiasts prefer the simple, compact nature of a point-and-shoot camera, the overall digicam trend has increased interest in digital imaging’s many forms.

Digital archeology

The digital photography revolution that occurred around the turn of the century saw a Cambrian explosion of camera types and designs. Sony experimented with swiveling two-handers that could pass for science fiction zap guns and sold cameras that wrote JPEGs to floppy disks and CDs. Minolta created modular cameras that could be decoupled, the optics tethered to the LCD body with a cord, like photographic nunchaku. “There are a lot of brands that are much less well known,” says Lee. “And in the early 2000s in particular, it was really like the Wild West.”

Today’s enthusiasts spelunking into the digital past are encountering challenges related to the passage of time, with some brands no longer offering firmware updates, drivers, or PDF copies of manuals for these old models. In many cases, product news and reviews sites are the only reminder that some cameras ever existed. But many of those sites have fallen off the internet entirely.

“Steve’s Digicams went offline,” says O’Keefe, referring to the popular camera news website that disappeared after its founder, Steve Sanders, died in 2017. “It was tragic because it had so much information.”

“Our interests naturally align with archaeology,” says Sofia Lee. “A lot of us were around when the cameras were made. But there were a number of events in the history of digicams where an entire line of cameras just massively died off. That’s something that we are constantly confronted with.”

Hocus focus

YouTubers like Warner and O’Keefe helped raise interest in cameras built around charge-coupled device (CCD) technology, an older type of imaging sensor that fell out of use around 2010. CCD-based cameras have developed a cult following, and certain models have retained their value surprisingly well for their age. Fans liken the results of CCD captures to shooting film without the associated hassle or cost. While the digicam faithful have shown that older cameras can yield pleasing results, there’s no guaranteed “CCD magic” sprinkled on those photos.

“[I] think I’ve maybe unfortunately been one of the ones to make it sound like CCD sensors in and of themselves are making the colors different,” says Warner, who makes classic digital camera videos on his channel Snappiness.

“CCDs differ from [newer] CMOS sensors in the layout of their electronics but at heart they’re both made up of photosensitive squares of silicon behind a series of color filters from which color information about the scene can be derived,” says Richard Butler, managing editor at DPReview. (Disclosure: I worked at DPReview as a part-time editor in 2022 and 2023.) DPReview, in its 25th year, is a valuable library of information about old digital cameras, and an asset to vintage digital obsessives.

“I find it hard to think of CCD images as filmlike, but it’s fair to say that the images of cameras from that time may have had a distinct aesthetic,” Butler says. “As soon as you have an aesthetic with which an era was captured, there’s a nostalgia about that look. It’s fair to say that early digital cameras inadvertently defined the appearance of contemporary photos.”

There’s one area where old CCD sensors can show a difference: They capture a narrower range of light and dark information than newer sensor types, so the resulting images can have less detail in the shadows and highlights. A careful photographer can use that to get contrasty, vibrant images with a different, yet still digital, vibe. Digicam photographer Jermo Swaab says he prefers “contrasty scenes and crushed blacks … I yearn for images that look like a memory or retro-futuristic dream.”

Modern photographs, by default, are super sharp and artificially vibrant, with high dynamic range that makes the image pop off the screen. To get the most out of a tiny sensor and lens, smartphones put shots through a computationally intense pipeline of automated editing, quickly combining multiple captures to extract every fine detail possible and eradicate pesky noise. Digital cameras, by contrast, shoot a single image at a time by default. Especially with older, lower-resolution digital cameras, this can give images a noisier, dreamier appearance that digicam fans love.
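To make that difference concrete, here’s a hedged sketch of the kind of multi-frame merge a phone performs automatically, using OpenCV’s exposure-fusion implementation; real smartphone pipelines add burst alignment, denoising, and much more, and the file names here are placeholders:

```python
# Sketch: merging bracketed exposures the way computational-photography
# pipelines do, versus the single frame an old digicam hands you.
# Assumes OpenCV (pip install opencv-python); file names are placeholders.
import cv2

# Three frames of the same scene at different exposures.
frames = [cv2.imread(name) for name in ("under.jpg", "normal.jpg", "over.jpg")]

# Mertens exposure fusion weights each pixel by contrast, saturation,
# and well-exposedness, then blends the frames into one image.
fused = cv2.createMergeMertens().process(frames)  # float image in [0, 1]

cv2.imwrite("fused.jpg", (fused * 255).clip(0, 255).astype("uint8"))
```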

“If you take a picture with your smartphone, it’s automatically HDR. And we’re just used to that today but that’s not at all how cameras have worked in the past,” Warner says. Ali O’Keefe agrees, saying that “especially as we lean more and more into AI where everything is super polished to the point of hyperreal, digicams are crappy, and the artifacts and the noise and the lens imperfections give you something that is not replicable.”

Lee also is chasing unique, noisy photos from compact cameras with small sensors: “I actually always shoot at max ISO, which is the opposite of how I think people shot their cameras back in the day. I’m curious about finding the undesirable aspects of it and [getting] aesthetic inspiration from the undesirable aspects of a camera.”

Her favorite Kyocera camera is known for its high-quality build and noisy pics. She describes it as “all metal, like a briefcase,” of the sort that Arnold Schwarzenegger carries in Total Recall. “These cameras are considered legendary in the experimental scene,” she says of the Kyocera. “The unique thing about the Finecam S3 is that it produces a diagonal noise pattern.”

A time to buy, a time to sell

The gold rush for vintage digital gear has, unsurprisingly, led to rising prices on the resale market. What was once a niche for oddballs and collectors has become a potential goldmine, driven by all that social media hype.

“The joke is that when someone makes a video about a camera, the price jumps,” says Warner. “I’ve actually tracked that using eBay’s Terapeak sale-monitoring tool, where you can see the history of up to two years of sales for a certain search query. There’s definitely strong correlation between a [YouTube] video’s release and the price of that item going up on eBay in certain situations.”

“It is kind of amazing how hard it is to find things now,” laments O’Keefe. “I used to be able to buy [Panasonic] LX3s, one of my favorite point and shoots of all time, a dime a dozen. Now they’re like 200 bucks if you can find a working one.”

O’Keefe says she frequently interacts with social media users who went online looking for their dream camera only to have gotten scammed. “A person who messaged me this morning was just devastated,” she says. “Scams are rampant now because they’ve picked up on this market being sort of a zeitgeist thing.” She recommends sticking with sellers on platforms that have clear protections in place for dealing with scams and fraud, like eBay. “I have never had an issue getting refunded when the item didn’t work.”

Even when dealing with a trustworthy seller, vintage digital camera collecting is not for the faint of heart. “If I’m interested in a camera, I make sure that the batteries are still made because some are no longer in production,” says O’Keefe. She warns that even if a used camera comes with its original batteries, those cells will most likely not hold a charge.

When there are no new batteries to be had, Sofia Lee and her cohort have resuscitated vintage cameras using modern tech: “With our Kyoceras, one of the biggest issues is the batteries are no longer in production and they all die really quickly. What we ended up doing is using 5V DC cables that connect them to USB, then we shoot them tethered to a power bank. So if you see someone shooting with a Kyocera, they’re almost always holding the power bank and a digicam in their other hand.”

And then there’s the question of where to store all those JPEGs. “A lot of people don’t think about memory card format, so that can get tricky,” cautions Warner. Many vintage cameras use the CompactFlash format, and those cards are still widely supported. But just as many digicams use deprecated storage formats like Olympus’s xD or Sony’s Memory Stick. “They don’t make those cards anymore,” Warner says. “Some of them have adapters you can use but some [cameras] don’t work with the adapters.”

Even if the batteries and memory cards get sorted out, Sofia Lee underscores that every piece of vintage equipment has an expiration date. “There is this looming threat, when it comes to digicams—this is a finite resource.” Like with any other vintage tech, over time, capacitors go bad, gears break, sensors corrode, and, in some circumstances, rubber grips devulcanize back into a sticky goo.

Lee’s beloved Kyoceras are one such victim of the ravages of time. “I’ve had 15 copies pass through my hands. Around 11 of them were dead on arrival, and three died within a year. That means I have one left right now. It’s basically a special occasions-only camera, because I just never know when it’s going to die.”

These photographers have learned that it’s sometimes better to move on from a potential ticking time bomb, especially if the device is still in demand. O’Keefe points to the Epson R-D1 as an example. This digital rangefinder from printer-maker Epson, with gauges on the top made by Epson’s watchmaking arm Seiko, was originally sold as a Leica alternative, but now it fetches Leica-like premium prices. “I actually sold mine a year and a half ago,” she says. “I loved it, it was beautiful. But there’s a point for me, where I can see that this thing is certainly going to die, probably in the next five years. So I did sell that one, but it is such an awesome experience to shoot. Cause what other digital camera has a lever that actually winds the shutter?”

#NoBadCameras

For a group of people with a recent influx of newbies, the digicam community seems to be adjusting well. Sofia Lee says the growing popularity of digicams is an opportunity to meet new collaborators in a field where it used to be hard to connect with like-minded folks. “I love that there are more people interested in this, because when I was first getting into it I was considered totally crazy,” she says.

Despite the definition of digicam morphing to include a wider array of cameras, Lee seems to be accepting of all comers. “I’m rather permissive in allowing people to explore what they consider is right,” says Lee. While not every camera is “right” for every photographer, many in the community agree on one thing: Resurrecting used equipment is a win for the planet, and a way to resist the constant upgrade churn of consumer technology.

“It’s interesting to look at what is considered obsolete,” Lee says. “From a carbon standpoint, the biggest footprint is at the moment of manufacture, which means that every piece of technology has this unfulfilled potential.” O’Keefe agrees: “I love it from an environmental perspective. Do we really need to drive waste [by releasing] a new camera every few months?”

For James Warner, part of the appeal is using lower-cost equipment that more people can afford. And with that lower cost of entry comes easier access to the larger creator community. “With some clubs you’re not invited if you don’t have the nice stuff,” he says. “But they feel welcome and like they can participate in photography on a budget.”

O’Keefe has even coined the hashtag #NoBadCameras. She believes all digicams have unique characteristics, and that if a curious photographer just takes the time to get to know the device, it can deliver good results. “Don’t be precious about it,” she says. “Just pick something up, shoot it, and have fun.”

This story originally appeared on wired.com.

Wired.com is your essential daily guide to what’s next, delivering the most original and complete take you’ll find anywhere on innovation’s impact on technology, science, business and culture.

Flour, water, salt, GitHub: The Bread Code is a sourdough baking framework

One year ago, I didn’t know how to bake bread. I just knew how to follow a recipe.

If everything went perfectly, I could turn out something plain but palatable. But should anything change—temperature, timing, flour, Mercury being in Scorpio—I’d turn out a partly poofy pancake. I presented my partly poofy pancakes to people, and they were polite, but those platters were not particularly palatable.

During a group vacation last year, a friend made fresh sourdough loaves every day, and we devoured them. He gladly shared his knowledge, his starter, and his go-to recipe. I took it home, tried it out, and made a naturally leavened, artisanal pancake.

I took my confusion to YouTube, where I found Hendrik Kleinwächter’s “The Bread Code” channel and his video promising a course on “Your First Sourdough Bread.” I watched and learned a lot, but I couldn’t quite translate 30 minutes of intensive couch time to hours of mixing, raising, slicing, and baking. Pancakes, part three.

It felt like there had to be more to this. And there was—a whole GitHub repository more.

The Bread Code gave Kleinwächter a gratifying second career, and it’s given me bread I’m eager to serve people. This week alone, I’m making sourdough Parker House rolls, a rosemary olive loaf for Friendsgiving, and then a za’atar flatbread and standard wheat loaf for actual Thanksgiving. And each of us has learned more about perhaps the most important aspect of coding, bread, teaching, and lots of other things: patience.

Hendrik Kleinwächter on his Bread Code channel, explaining his book.

Resources, not recipes

The Bread Code is centered around a book, The Sourdough Framework. It’s an open source LaTeX codebase that automatically compiles into new book editions and is free to read online. It has one real bread loaf recipe, if you can call a 68-page middle-section journey a recipe. It has 17 flowcharts, 15 tables, and dozens of timelines, process illustrations, and photos of sourdough going both well and terribly. Like any cookbook, there’s a bit about Kleinwächter’s history with this food, and some sourdough bread history. Then the reader is dropped straight into “How Sourdough Works,” which is in no way a summary.

“To understand the many enzymatic reactions that take place when flour and water are mixed, we must first understand seeds and their role in the lifecycle of wheat and other grains,” Kleinwächter writes. From there, we follow a seed through hibernation, germination, photosynthesis, and, through humans’ grinding of these seeds, exposure to amylase and protease enzymes.

I had arrived at this book with these specific loaf problems to address. But first, it asks me to consider, “What is wheat?” This sparked vivid memories of Computer Science 114, in which a professor, asked to troubleshoot misbehaving code, would instead tell students to “Think like a compiler,” or “Consider the recursive way to do it.”

And yet, “What is wheat” did help. Having a sense of what was happening inside my starter, and my dough (which is really just a big, slow starter), helped me diagnose what was going right or wrong with my breads. Extra-sticky dough and tightly arrayed holes in the bread meant I had let the bacteria win out over the yeast. I learned when to be rough with the dough to form gluten and when to gently guide it into shape to preserve its gas-filled form.

I could eat a slice of each loaf and get a sense of how things had gone. The inputs, outputs, and errors could be ascertained and analyzed more easily than in my prior stance, which was, roughly, “This starter is cursed and so am I.” Using hydration percentages, measurements relative to protein content, a few tests, and troubleshooting steps, I could move closer to fresh, delicious bread. Framework: accomplished.
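Baker’s math is simple enough to write down as code. Here’s a minimal sketch of the baker’s-percentage convention the framework leans on—my own illustration, not code from the book—where every ingredient is expressed relative to total flour weight so a formula scales cleanly:

```python
# Baker's percentages: flour is always 100%, and every other ingredient
# is expressed as a fraction of total flour weight. The example values
# (75% hydration, 20% starter, 2% salt) are illustrative, not the book's.
def sourdough_formula(flour_g, hydration=0.75, starter=0.20, salt=0.02):
    """Return ingredient weights in grams for a given flour weight."""
    return {
        "flour": flour_g,
        "water": flour_g * hydration,
        "starter": flour_g * starter,
        "salt": flour_g * salt,
    }

print(sourdough_formula(500))
# {'flour': 500, 'water': 375.0, 'starter': 100.0, 'salt': 10.0}
# Strictly speaking, the starter's own flour and water shift the true
# hydration slightly; careful calculators account for that.
```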

I have found myself very grateful lately that Kleinwächter did not find success with 30-minute YouTube tutorials. Strangely, so has he.

Sometimes weird scoring looks pretty neat. Credit: Kevin Purdy

The slow bread of childhood dreams

“I have had some successful startups; I have also had disastrous startups,” Kleinwächter said in an interview. “I have made some money, then I’ve been poor again. I’ve done so many things.”

Most of those things involve software. Kleinwächter is a German full-stack engineer, and he has founded firms and worked at companies related to blogging, e-commerce, food ordering, travel, and health. He tried to escape the boom-bust startup cycle by starting his own digital agency before one of his products was acquired by hotel booking firm Trivago. After that, he needed a break—and he could afford to take one.

“I went to Naples, worked there in a pizzeria for a week, and just figured out, ‘What do I want to do with my life?’ And I found my passion. My passion is to teach people how to make amazing bread and pizza at home,” Kleinwächter said.

Kleinwächter’s formative bread experiences—weekend loaves baked by his mother, awe-inspiring pizza from Italian ski towns, discovering all the extra ingredients in a supermarket’s version of the dark Schwarzbrot—made him want to bake his own. Like me, he started with recipes, and he wasted a lot of time and flour on loaves that produced both failures and a drive for knowledge. He dug in, learned as much as he could, and once he had his head around the how and why, he worked on a way to guide others along the path.

Bugs and syntax errors in baking

When using recipes, there’s a strong, societally reinforced idea that there is one best, tested, and timed way to arrive at a finished food. That’s why we have America’s Test Kitchen, The Food Lab, and all manner of blogs and videos promoting food “hacks.” I should know; I wrote up a whole bunch of them as a young Lifehacker writer. I’m still a fan of such things, from the standpoint of simply getting food done.

As such, the ultimate “hack” for making bread is to use commercial yeast, i.e., dried “active” or “instant” yeast. A manufacturer has done the work of selecting and isolating yeast at its prime state and preserving it for you. Get your liquids and dough to a yeast-friendly temperature and you’ve removed most of the variables; your success should be repeatable. If you just want bread, you can make the iconic no-knead bread with prepared yeast and very little intervention, and you’ll probably get bread that’s better than you can get at the grocery store.

Baking sourdough—or “naturally leavened,” or with “levain”—means a lot of intervention. You are cultivating and maintaining a small ecosystem of yeast and bacteria, unleashing them onto flour, water, and salt, and stepping in after they’ve produced enough flavor and lift—but before they eat all the stretchy gluten bonds. What that looks like depends on many things: your water, your flours, what you fed your starter, how active it was when you added it, the air in your home, and other variables. Most important is your ability to notice things over long periods of time.

When things go wrong, debugging can be tricky. I was able to personally ask Kleinwächter what was up with my bread, because I was interviewing him for this article. There were many potential answers, including:

  • I should recognize, first off, that I was trying to bake the hardest kind of bread: freestanding wheat-based sourdough
  • You have to watch—and smell—your starter to make sure it has the right mix of yeast to bacteria before you use it
  • Using less starter (lower “inoculation”) would make it easier not to over-ferment
  • Eyeballing my dough rise in a bowl was hard; try measuring a sample in something like an aliquot tube
  • Winter and summer are very different dough timings, even with modern indoor climate control

But I kept with it. I was particularly susceptible to wanting things to go quicker and demanding to see a huge rise in my dough before baking. This ironically leads to the flattest results, as the bacteria eat all the gluten bonds. When I slowed down, changed just one thing at a time, and looked deeper into my results, I got better.

The Bread Code YouTube page and the ways in which one must cater to algorithms.

Credit: The Bread Code

YouTube faces and TikTok sausage

Emailing and trading video responses with Kleinwächter, I got the sense that he, too, has learned to go the slow, steady route with his Bread Code project.

For a while, he was turning out YouTube videos, and he wanted them to work. “I’m very data-driven and very analytical. I always read the video metrics, and I try to optimize my videos,” Kleinwächter said. “Which means I have to use a clickbait title, and I have to use a clickbait-y thumbnail, plus I need to make sure that I catch people in the first 30 seconds of the video.” This, however, is “not good for us as humans because it leads to more and more extreme content.”

Kleinwächter also dabbled in TikTok, making videos in which, leaning into his German heritage, “the idea was to turn everything into a sausage.” The metrics and imperatives on TikTok were similar to those on YouTube but hyperscaled. He could put hours or days into a video, only for 1 percent of his 200,000 YouTube subscribers to see it unless he caught the algorithm wind.

The frustrations inspired him to slow down and focus on his site and his book. With his community’s help, The Bread Code has just finished its second Kickstarter-backed printing run of 2,000 copies. There’s a Discord full of bread heads eager to diagnose and correct each other’s loaves, and the book’s repository sees occasional pull requests from inspired readers. Kleinwächter has seen people go from buying what he calls “Turbo bread” at the store to making their own, and that’s what keeps him going. He’s not gambling on an attention-getting hit, but he’s in better control of how his knowledge and message get out.

“I think homemade bread is something that’s super, super undervalued, and I see a lot of benefits to making it yourself,” Kleinwächter said. “Good bread just contains flour, water, and salt—nothing else.”

A test loaf of rosemary olive sourdough bread. An uneven amount of olive bits ended up on the top and bottom, because there is always more to learn.

Credit: Kevin Purdy

You gotta keep doing it—that’s the hard part

I can’t say it has been entirely smooth sailing ever since I self-certified with The Bread Code framework. I know what level of fermentation I’m aiming for, but I sometimes get home from an outing later than planned, arriving at dough that’s trying to escape its bucket. My starter can be very temperamental when my house gets dry and chilly in the winter. And my dough slicing (scoring), being the very last step before baking, can be rushed, resulting in some loaves with weird “ears,” not quite ready for the bakery window.

But that’s all part of it. Your sourdough starter is a collection of organisms that are best suited to what you’ve fed them, developed over time, shaped by their environment. There are some modern hacks that can help make good bread, like using a pH meter. But the big hack is just doing it, learning from it, and getting better at figuring out what’s going on. I’m thankful that folks like Kleinwächter are out there encouraging folks like me to slow down, hack less, and learn more.

Found in the wild: The world’s first unkillable UEFI bootkit for Linux

Over the past decade, a new class of infections has threatened Windows users. By infecting the firmware that runs immediately before the operating system loads, these UEFI bootkits continue to run even when the hard drive is replaced or reformatted. Now the same type of chip-dwelling malware has been found in the wild for backdooring Linux machines.

Researchers at security firm ESET said Wednesday that Bootkitty—the name unknown threat actors gave to their Linux bootkit—was uploaded to VirusTotal earlier this month. Compared to its Windows cousins, Bootkitty is still relatively rudimentary, containing imperfections in key under-the-hood functionality and lacking the means to infect Linux distributions other than Ubuntu. That has led the company’s researchers to suspect the new bootkit is likely a proof-of-concept release. To date, ESET has found no evidence of actual infections in the wild.

The ASCII logo that Bootkitty is capable of rendering. Credit: ESET

Be prepared

Still, Bootkitty suggests threat actors may be actively developing a Linux version of the same sort of unkillable bootkit that previously was found only targeting Windows machines.

“Whether a proof of concept or not, Bootkitty marks an interesting move forward in the UEFI threat landscape, breaking the belief about modern UEFI bootkits being Windows-exclusive threats,” ESET researchers wrote. “Even though the current version from VirusTotal does not, at the moment, represent a real threat to the majority of Linux systems, it emphasizes the necessity of being prepared for potential future threats.”

A rootkit is a piece of malware that runs in the deepest regions of the operating system it infects. It leverages this strategic position to hide information about its presence from the operating system itself. A bootkit, meanwhile, is malware that infects the boot-up process in much the same way. Bootkits for the UEFI—short for Unified Extensible Firmware Interface—lurk in the chip-resident firmware that runs each time a machine boots. These sorts of bootkits can persist indefinitely, providing a stealthy means for backdooring the operating system even before it has fully loaded and enabled security defenses such as antivirus software.

The bar for installing a bootkit is high. An attacker first must gain administrative control of the targeted machine, either through physical access while it’s unlocked or by exploiting a critical vulnerability in the OS. Under those circumstances, attackers already have the ability to install OS-resident malware. Bootkits, however, are much more powerful since they (1) run before the OS does and (2) are, at least practically speaking, undetectable and unremovable.
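One basic check that’s relevant here is whether UEFI Secure Boot is enabled, since unsigned or self-signed boot components shouldn’t load on a machine that enforces it. Here’s a minimal sketch for Linux, reading the standard SecureBoot EFI variable through the kernel’s efivars interface (the usual CLI equivalent is mokutil --sb-state):

```python
# Sketch: report whether UEFI Secure Boot is enabled on a Linux system
# by reading the SecureBoot EFI variable via efivarfs. The GUID is the
# standard EFI global-variable GUID; no third-party tools required.
from pathlib import Path

SECURE_BOOT_VAR = Path(
    "/sys/firmware/efi/efivars/"
    "SecureBoot-8be4df61-93ca-11d2-aa0d-00e098032b8c"
)

def secure_boot_enabled():
    """Return True/False for Secure Boot state, or None on non-UEFI boots."""
    try:
        data = SECURE_BOOT_VAR.read_bytes()
    except FileNotFoundError:
        return None  # legacy BIOS boot, or efivarfs not mounted
    # The first 4 bytes are the variable's attributes; the 5th byte
    # holds the value (1 = enabled, 0 = disabled).
    return data[4] == 1

state = secure_boot_enabled()
print({True: "Secure Boot: enabled",
       False: "Secure Boot: disabled",
       None: "Not a UEFI boot (or efivarfs unavailable)"}[state])
```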

FCC approves Starlink plan for cellular phone service, with some limits

Eliminating cellular dead zones

Starlink says it will offer texting service this year, with voice and data services coming in 2025. Starlink does not yet have FCC approval to exceed certain emissions limits—limits the company has said will be detrimental to real-time voice and video communications.

For the operations approved yesterday, Starlink is required to coordinate with other spectrum users and cease transmissions when any harmful interference is detected. “We hope to activate employee beta service in the US soon,” wrote Ben Longmier, SpaceX’s senior director of satellite engineering.

Longmier made a pitch to cellular carriers. “Any telco that signs up with Starlink Direct to Cell can completely eliminate cellular dead zones for their entire country for text and data services. This includes coastal waterways and the ocean areas in between land for island nations,” he wrote.

Starlink launched its first satellites with cellular capabilities in January 2024. “Of the more than 2,600 Gen2 Starlink satellites in low Earth orbit, around 320 are equipped with a direct-to-smartphone payload, enough to enable the texting services SpaceX has said it could launch this year,” SpaceNews wrote yesterday.

Yesterday’s FCC order also lets Starlink operate up to 7,500 second-generation satellites at altitudes between 340 km and 360 km, in addition to the previously approved altitudes between 525 km and 535 km. SpaceX is seeking approval for another 22,488 satellites, but the FCC continued to defer action on that request. The FCC order said:

Authorization to permit SpaceX to operate up to 7,500 Gen2 satellites in lower altitude shells will enable SpaceX to begin providing lower-latency satellite service to support growing demand in rural and remote areas that lack terrestrial wireless service options. This partial grant also strikes the right balance between allowing SpaceX’s operations at lower altitudes to provide low-latency satellite service and permitting the Commission to continue to monitor SpaceX’s constellation and evaluate issues previously raised on the record.

Coordination with NASA

SpaceX is required to coordinate “with NASA to ensure protection of the International Space Station (ISS), ISS visiting vehicles, and launch windows for NASA science missions,” the FCC said. “SpaceX may only deploy and operate at altitudes below 400 km the total number of satellites for which it has completed physical coordination with NASA under the parties’ Space Act Agreement.”

Google’s plan to keep AI out of search trial remedies isn’t going very well


DOJ: AI is not its own market

Judge: AI will likely play “larger role” in Google search remedies as market shifts.

Google got some disappointing news at a status conference Tuesday, where US District Judge Amit Mehta suggested that Google’s AI products may be restricted as an appropriate remedy following the government’s win in the search monopoly trial.

According to Law360, Mehta said that “the recent emergence of AI products that are intended to mimic the functionality of search engines” is rapidly shifting the search market. Because the judge is now weighing preventive measures to combat Google’s anticompetitive behavior, the judge wants to hear much more about how each side views AI’s role in Google’s search empire during the remedies stage of litigation than he did during the search trial.

“AI and the integration of AI is only going to play a much larger role, it seems to me, in the remedy phase than it did in the liability phase,” Mehta said. “Is that because of the remedies being requested? Perhaps. But is it also potentially because the market that we have all been discussing has shifted?”

To fight the DOJ’s proposed remedies, Google is seemingly dragging its major AI rivals into the trial. Seeking to prove that remedies would harm its ability to compete, the tech company is currently trying to pry into Microsoft’s AI deals, including its $13 billion investment in OpenAI, Law360 reported. At least preliminarily, Mehta has agreed that the information Google is seeking from rivals has “core relevance” to the remedies litigation.

The DOJ has asked for a wide range of remedies to stop Google from potentially using AI to entrench its market dominance in search and search text advertising. They include a ban on exclusive agreements with publishers to train on content, which the DOJ fears might allow Google to block AI rivals from licensing data, potentially posing a barrier to entry in both markets. Under the proposed remedies, Google would also face restrictions on investments in or acquisitions of AI products, as well as mergers with AI companies.

Additionally, the DOJ wants Mehta to stop Google from any potential self-preferencing, such as making an AI product mandatory on Android devices Google controls or preventing a rival from distribution on Android devices.

The government seems very concerned that Google may use its ownership of Android to play games in the emerging AI sector. It has further recommended an order preventing Google from discouraging partners from working with rivals, degrading the quality of rivals’ AI products on Android devices, or otherwise “coercing” manufacturers or other Android partners into giving Google’s AI products “better treatment.”

Importantly, if the court orders AI remedies linked to Google’s control of Android, Google could risk a forced sale of Android if Mehta grants the DOJ’s request for “contingent structural relief” requiring divestiture of Android if behavioral remedies don’t destroy the current monopolies.

Finally, the government wants Google to be required to allow publishers to opt out of AI training without impacting their search rankings. (Currently, opting out of AI scraping automatically opts sites out of Google search indexing.)

All of this, the DOJ alleged, is necessary to clear the way for a thriving search market as AI stands to shake up the competitive landscape.

“The promise of new technologies, including advances in artificial intelligence (AI), may present an opportunity for fresh competition,” the DOJ said in a court filing. “But only a comprehensive set of remedies can thaw the ecosystem and finally reverse years of anticompetitive effects.”

At the status conference Tuesday, DOJ attorney David Dahlquist reiterated to Mehta that these remedies are needed so that Google’s illegal conduct in search doesn’t extend to this “new frontier” of search, Law360 reported. Dahlquist also clarified that the DOJ views these kinds of AI products “as new access points for search, rather than a whole new market.”

“We’re very concerned about Google’s conduct being a barrier to entry,” Dahlquist said.

Google could not immediately be reached for comment. But the search giant has maintained that AI is beyond the scope of the search trial.

During the status conference, Google attorney John E. Schmidtlein disputed that AI remedies are relevant. While he agreed that “AI is key to the future of search,” he warned that “extraordinary” proposed remedies would “hobble” Google’s AI innovation, Law360 reported.

Microsoft shields confidential AI deals

Microsoft is predictably protective of its AI deals, arguing in a court filing that its “highly confidential agreements with OpenAI, Perplexity AI, Inflection, and G42 are not relevant to the issues being litigated” in the Google trial.

According to Microsoft, Google is arguing that it needs this information to “shed light” on things like “the extent to which the OpenAI partnership has driven new traffic to Bing and otherwise affected Microsoft’s competitive standing” or what’s required by “terms upon which Bing powers functionality incorporated into Perplexity’s search service.”

These insights, Google seemingly hopes, will convince Mehta that Google’s AI deals and investments are the norm in the AI search sector. But Microsoft is currently blocking access, arguing that “Google has done nothing to explain why” it “needs access to the terms of Microsoft’s highly confidential agreements with other third parties” when Microsoft has already offered to share documents “regarding the distribution and competitive position” of its AI products.

Microsoft also opposes Google’s attempts to review how search click-and-query data is used to train OpenAI’s models. Those requests would be better directed at OpenAI, Microsoft said.

If Microsoft gets its way, Google’s discovery requests will be limited to just Microsoft’s content licensing agreements for Copilot. Microsoft alleged those are the only deals “related to the general search or the general search text advertising markets” at issue in the trial.

On Tuesday, Microsoft attorney Julia Chapman told Mehta that Microsoft had “agreed to provide documents about the data used to train its own AI model and also raised concerns about the competitive sensitivity of Microsoft’s agreements with AI companies,” Law360 reported.

It remains unclear at this time if OpenAI will be forced to give Google the click-and-query data Google seeks. At the status hearing, Mehta ordered OpenAI to share “financial statements, information about the training data for ChatGPT, and assessments of the company’s competitive position,” Law360 reported.

But the DOJ may also be interested in seeing that data. In their proposed final judgment, the government forecasted that “query-based AI solutions” will “provide the most likely long-term path for a new generation of search competitors.”

Because of that prediction, any remedy “must prevent Google from frustrating or circumventing” court-ordered changes “by manipulating the development and deployment of new technologies like query-based AI solutions.” Emerging rivals “will depend on the absence of anticompetitive constraints to evolve into full-fledged competitors and competitive threats,” the DOJ alleged.

Mehta seemingly wants to see the evidence supporting the DOJ’s predictions, which could end up exposing carefully guarded secrets of both Google’s and its biggest rivals’ AI deals.

On Tuesday, the judge noted that integration of AI into search engines had already evolved what search results pages look like. And from his “very layperson’s perspective,” it seems like AI’s integration into search engines will continue moving “very quickly,” as both parties seem to agree.

Whether he buys into the DOJ’s theory that Google could use its existing advantage as the world’s greatest gatherer of search query data to block rivals from keeping pace is still up in the air, but the judge seems moved by the DOJ’s claim that “AI has the ability to affect market dynamics in these industries today as well as tomorrow.”

Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.

After telling Cadillac to pound sand, F1 does 180, grants entry for 2026

The United States will have a second team competing in Formula 1 from 2026, when Cadillac Formula 1 will join the sport as its 11th team. The result is a complete 180 for the sport’s owner, which was highly resistant to the initial bid, first announced at the beginning of 2023.

“As the pinnacle of motorsports, F1 demands boundary-pushing innovation and excellence. It’s an honor for General Motors and Cadillac to join the world’s premier racing series, and we’re committed to competing with passion and integrity to elevate the sport for race fans around the world,” said GM President Mark Reuss. “This is a global stage for us to demonstrate GM’s engineering expertise and technology leadership at an entirely new level.”

Team first, engines later

We will have to wait until 2028 to see that full engineering potential on display. Even with the incoming changes to the technical regulations, developing a new F1 hybrid powertrain is not the work of a moment, let alone a competitive package. Audi has been working on its F1 powertrain since at least 2023, as has Red Bull, which decided to build its internal combustion engine in-house, like Ferrari or Mercedes, with partner Ford providing the electrification.

GM’s decision to throw Cadillac’s hat into the ring came with the caveat that its powertrain wouldn’t be ready until 2028—two years after it actually wants to enter the sport. That means for 2026 and 2027, Cadillac F1 will use customer engines from another manufacturer, in this case Ferrari. From 2028, we can expect a GM-designed V6 hybrid under Cadillac F1’s engine covers.

As McLaren has demonstrated this year, customer powertrains are no impediment to success, and Alpine (née Renault) is going so far as to give up its own in-house powertrain program in favor of customer engines (and most likely a “for sale” sign, as the French automaker looks set to walk away from the sport once again).

After telling Cadillac to pound sand, F1 does 180, grants entry for 2026 Read More »

nasa-awards-spacex-a-contract-for-one-of-the-few-things-it-hasn’t-done-yet

NASA awards SpaceX a contract for one of the few things it hasn’t done yet

Notably, the Dragonfly launch was one of the first times United Launch Alliance has been eligible to bid its new Vulcan rocket for a NASA launch contract. NASA officials gave the green light for the Vulcan rocket to compete head-to-head with SpaceX’s Falcon 9 and Falcon Heavy after ULA’s new launcher had a successful debut launch earlier this year. With this competition, SpaceX came out on top.

A half-life of 88 years

NASA’s policy for new space missions is to use solar power whenever possible. For example, Europa Clipper was originally supposed to use a nuclear power generator, but engineers devised a way for the spacecraft to use expansive solar panels to capture enough sunlight to produce electricity, even at Jupiter’s vast distance from the Sun.

But there are some missions where this isn’t feasible. One of these is Dragonfly, which will soar through the soupy nitrogen-methane atmosphere of Titan. Saturn’s largest moon is shrouded in cloud cover, and Titan is nearly 10 times farther from the Sun than Earth, so its surface is comparatively dim.
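For a rough sense of how dim: sunlight falls off with the square of distance from the Sun, so at Titan’s roughly 9.5 AU (a round figure assumed here for illustration), the top of the atmosphere receives only about 1 percent of the light Earth does, before the haze absorbs still more. A quick back-of-the-envelope sketch:

```kotlin
// Back-of-the-envelope: how much sunlight reaches Titan versus Earth?
// Assumes Titan orbits at roughly 9.5 AU; intensity follows the inverse-square law.
fun main() {
    val titanDistanceAu = 9.5        // assumed round figure for Saturn/Titan
    val solarConstant = 1361.0       // W/m^2 at Earth's distance (1 AU)

    val fraction = 1.0 / (titanDistanceAu * titanDistanceAu)
    println("Sunlight at Titan: %.1f%% of Earth's".format(fraction * 100))
    println("≈ %.0f W/m^2 above the haze".format(solarConstant * fraction))
}
```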

The Dragonfly mission, seen here in an artist’s concept, is slated to launch no earlier than 2028 on a mission to explore Saturn’s moon Titan. Credit: NASA/JHUAPL/Steve Gribben

Dragonfly will launch with about 10.6 pounds (4.8 kilograms) of plutonium-238 to fuel its power generator. Plutonium-238 has a half-life of 88 years. With no moving parts, radioisotope thermoelectric generators (RTGs) have proven quite reliable, powering spacecraft for many decades. NASA’s twin Voyager probes are approaching 50 years since launch.
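That 88-year half-life translates directly into how the fuel’s heat output fades over time. Here is a minimal sketch of the decay math; the percentages are fuel decay only, not Dragonfly’s actual power budget, which also depends on how the generator’s thermocouples age:

```kotlin
import kotlin.math.pow

// Pu-238 heat output decays exponentially: P(t) = P0 * 0.5^(t / 88).
// Fuel decay only; a real RTG's electrical output also falls as thermocouples degrade.
fun remainingFraction(years: Double, halfLifeYears: Double = 88.0) =
    0.5.pow(years / halfLifeYears)

fun main() {
    for (years in listOf(0, 6, 25, 50)) {
        val pct = remainingFraction(years.toDouble()) * 100
        println("After $years years: %.1f%% of initial heat".format(pct))
    }
}
```

By that math, after the roughly six-year cruise to Titan, decay alone still leaves about 95 percent of the fuel’s initial heat output.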

The Dragonfly rotorcraft will launch cocooned inside a transit module and entry capsule, then descend under parachute through Titan’s atmosphere, which is four times denser than Earth’s. Finally, Dragonfly will detach from its descent module and activate its eight rotors to reach a safe landing.

Once on Titan, Dragonfly is designed to hop from place to place on numerous flights, exploring environments rich in organic molecules, the building blocks of life. This is one of NASA’s most exciting, and daring, robotic missions of all time.

After launching from NASA’s Kennedy Space Center in Florida in July 2028, it will take Dragonfly about six years to reach Titan. When NASA selected the Dragonfly mission to begin development in 2019, the agency hoped to launch the mission in 2026. NASA later directed Dragonfly managers to target a launch in 2027, and then 2028, requiring the mission to change from a medium-lift to a heavy-lift rocket.

Dragonfly has also faced rising costs, which NASA blames on the COVID-19 pandemic, supply chain issues, and an in-depth redesign since the mission’s selection in 2019. Collectively, these issues caused Dragonfly’s total budget to grow to $3.35 billion, more than double its initial projected cost.

NASA awards SpaceX a contract for one of the few things it hasn’t done yet Read More »

we’re-closer-to-re-creating-the-sounds-of-parasaurolophus

We’re closer to re-creating the sounds of Parasaurolophus

The duck-billed dinosaur Parasaurolophus is distinctive for its prominent crest, which some scientists have suggested served as a kind of resonating chamber to produce low-frequency sounds. Nobody really knows what Parasaurolophus sounded like, however. Hongjun Lin of New York University is trying to change that by constructing his own model of the dinosaur’s crest and its acoustical characteristics. Lin has not yet reproduced the call of Parasaurolophus, but he talked about his progress thus far at a virtual meeting of the Acoustical Society of America.
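One way to see why a long crest suggests low frequencies: if the crest’s internal airway behaves like a tube closed at one end, its fundamental resonance is f = v/(4L). The sketch below is just that textbook approximation, not Lin’s actual model, and the airway lengths are hypothetical:

```kotlin
// Textbook approximation: fundamental resonance of a tube closed at one end.
// f = v / (4 * L), with v the speed of sound in air (~343 m/s at 20 °C).
fun fundamentalHz(airwayLengthM: Double, speedOfSound: Double = 343.0) =
    speedOfSound / (4 * airwayLengthM)

fun main() {
    // Hypothetical airway lengths; real crests varied by species and age.
    for (length in listOf(1.0, 2.0, 3.0)) {
        println("%.0f m airway -> fundamental ≈ %.0f Hz".format(length, fundamentalHz(length)))
    }
}
```

Longer tubes give lower fundamentals, which is why a multi-meter airway points toward deep, low-frequency calls.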

Lin was inspired in part by the dinosaur sounds featured in the Jurassic Park film franchise, which were a combination of sounds from other animals like baby whales and crocodiles. “I’ve been fascinated by giant animals ever since I was a kid. I’d spend hours reading books, watching movies, and imagining what it would be like if dinosaurs were still around today,” he said during a press briefing. “It wasn’t until college that I realized the sounds we hear in movies and shows—while mesmerizing—are completely fabricated using sounds from modern animals. That’s when I decided to dive deeper and explore what dinosaurs might have actually sounded like.”

A skull and partial skeleton of Parasaurolophus were first discovered in 1920 along the Red Deer River in Alberta, Canada, and another partial skull was discovered the following year in New Mexico. There are now three known species of Parasaurolophus; the name means “near crested lizard.” While no complete skeleton has yet been found, paleontologists have concluded that the adult dinosaur likely stood about 16 feet tall and weighed between 6,000 and 8,000 pounds. Parasaurolophus was an herbivore that could walk on all four legs while foraging for food but may have run on two legs.

It’s that distinctive crest that has most fascinated scientists over the last century, particularly its purpose. Past hypotheses have included its use as a snorkel or as a breathing tube while foraging for food; as an air trap to keep water out of the lungs; or as an air reservoir so the dinosaur could remain underwater for longer periods. Other scientists suggested the crest was designed to help move and support the head or perhaps used as a weapon while combating other Parasaurolophus. All of these, plus a few others, have largely been discredited.

We’re closer to re-creating the sounds of Parasaurolophus Read More »

android-will-soon-instantly-log-you-in-to-your-apps-on-new-devices

Android will soon instantly log you in to your apps on new devices

If you lose your iPhone or buy an upgrade, you could reasonably expect to be up and running after an hour, presuming you backed up your prior model. Your Apple stuff all comes over, sure, but most of your third-party apps will still be signed in.

Doing the same swap with an Android device is more akin to starting three-quarters fresh. After one or two Android phones, you learn to budget an extra hour for rapid-fire logins to all your apps. Password managers, or simply using your Google account for authentication, are a godsend.

That might change relatively soon, as Google has announced a new Restore Credentials feature, which should do what its name says. Android apps can “seamlessly onboard users to their accounts on a new device,” with the restore keys handled by Android’s native backup and restore process. Google describes the experience as “delightful,” and you can even get the same notifications on the new device that you were receiving on the old.
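In developer terms, the flow looks roughly like the sketch below: the app stores a restore key through Credential Manager after a successful sign-in, then fetches it on the new device’s first launch. The class names follow Google’s Restore Credentials announcement for the androidx.credentials library, but treat the exact constructors and JSON payloads as assumptions until the API ships:

```kotlin
import android.content.Context
import androidx.credentials.CredentialManager
import androidx.credentials.CreateRestoreCredentialRequest
import androidx.credentials.GetCredentialRequest
import androidx.credentials.GetRestoreCredentialOption

// Sketch based on Google's Restore Credentials announcement; exact API
// shapes (constructors, request JSON) are assumptions, not final signatures.
class RestoreCredentialHelper(private val context: Context) {
    private val credentialManager = CredentialManager.create(context)

    // Call after a successful sign-in on the old device: the restore key is
    // then carried over by Android's native backup-and-restore process.
    suspend fun storeRestoreKey(requestJson: String) {
        credentialManager.createCredential(
            context,
            CreateRestoreCredentialRequest(requestJson)
        )
    }

    // Call on first launch on the new device: if a restore key came over with
    // the backup, the app can sign the user in without any typing.
    suspend fun fetchRestoreKey(requestJson: String) =
        credentialManager.getCredential(
            context,
            GetCredentialRequest(listOf(GetRestoreCredentialOption(requestJson)))
        )
}
```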

Android will soon instantly log you in to your apps on new devices Read More »