
A forensic artist has given a 500-year-old Inca “ice maiden” a face

On the fourth day of Christmas —

Dubbed “Juanita,” the young woman was likely killed during a sacrificial ritual.

The final approximation of the Incan girl dubbed “Juanita,” wearing clothing similar to what she was wearing when she died.

Dagmara Socha

There’s rarely time to write about every cool science-y story that comes our way. So this year, we’re once again running a special Twelve Days of Christmas series of posts, highlighting one science story that fell through the cracks in 2023, each day from December 25 through January 5. Today: Swedish forensic artist Oscar Nilsson combined CT scans of frozen mummified remains with skull measurements and DNA analysis to reconstruct the face of a 500-year-old Inca girl.

In 1995, archaeologists discovered the frozen, mummified remains of a young Inca girl high in the mountains of Peru, thought to have died as part of a sacrificial ritual known as Capacocha (or Qhapaq hucha). In late October, we learned how she most likely looked in life, thanks to a detailed reconstruction by Swedish forensic artist Oscar Nilsson. A plaster bust of the reconstruction was unveiled at a ceremony at the Andean Sanctuaries Museum of the Catholic University of Santa Maria in Arequipa, Peru, where the girl’s remains (now called Juanita) have been on near-continuous display since her discovery.

“I thought I’d never know what her face looked like when she was alive,” archaeologist Johan Reinhard told the BBC. Reinhard had found the remains with Peruvian mountaineer Miguel Zárate at an altitude of 21,000 feet (6,400 meters) during an expedition to Ampato, one of the highest volcanoes in the Andes. “Now 28 years later, this has become a reality thanks to Oscar Nilsson’s reconstruction.”

According to Reinhard, Spanish chroniclers made reference to the Inca practice of making offerings to the gods: not just statues, fine textiles, and ceramics, but also occasionally human sacrifices at ceremonial shrines (huacas) built high on mountain summits. It’s thought that human sacrifices of young girls and boys were a means of appeasing the Inca gods (Apus) during periods of irregular weather patterns, particularly drought. Drought was common in the wake of a volcanic eruption.

During those periods, the ground on summits would unfreeze sufficiently for the Incas to build their sites and bury their offerings. The altitude is one reason why various Inca mummified remains have been found in remarkable states of preservation.

Earlier discoveries included the remains of an Inca boy found by looters in the 1950s, as well as the frozen body of a young man in 1964 and that of a young boy in 1985. Then Reinhard and Zárate made their Ampato ascent in September 1995. They were stunned to spot a mummy bundle on the ice just below the summit and realized they were looking at the frozen face of a young girl. The body was surrounded by offerings for the Inca gods, including llama bones, small carved figurines, and bits of pottery. Juanita was wrapped in a colorful burial tapestry and wearing a feathered cap and alpaca shawl, all almost perfectly preserved. Reinhard and Zárate subsequently found two more ice mummies (a young boy and girl) the following month, and yet another female mummy in December 1997.

Reconstructing the face of the Incan “ice maiden” took nearly 400 hours.

Oscar Nilsson

It was a bit of a struggle to get Juanita’s body down from the summit because it was so heavy, the result of its flesh being so thoroughly frozen. That’s also what makes it such an exciting archaeological find. The remains of a meal of vegetables were in her well-preserved stomach, although DNA analysis of her hair showed that she also ate a fair amount of animal protein. That, and the high quality of her garments, suggested she came from a noble family, possibly from the city of Cusco.

There were also traces of coca and alcohol, likely administered before Juanita’s death—a common Inca practice when sacrificing children. A CT scan of her skull revealed that Juanita had died from a sharp blow to the head, similar to the type of injury made by a baseball bat, causing a massive hemorrhage. This, too, was a common Inca sacrificial custom.

Nilsson was able to draw upon those earlier analyses for his reconstruction, since he needed to know things like her age, gender, weight, and ethnicity. He started with the CT scan of Juanita’s skull and used the data to 3D print a plastic replica of her head. He used wooden pegs on the bust to mark out the various measurements and added clay to mold the defining details of her face, drawing on clues from her nose, eye sockets, and teeth. The DNA indicated the likely color of her skin. “In Juanita’s case, I wanted her to look both scared and proud, and with a high sense of presence at the same time,” Nilsson told Live Science. “I then cast the face in silicone [using] real human hair [that I] inserted hair by hair.”



40% of US electricity is now emissions-free

Decarbonizing, but slowly —

Good news as natural gas, coal, and solar see the biggest changes.

Image of electric power lines with a power plant cooling tower in the background.

Just before the holiday break, the US Energy Information Administration released data on the country’s electrical generation. Because of delays in reporting, the monthly data runs through October, so it doesn’t provide a complete picture of the changes we’ve seen in 2023. But some of the trends now seem locked in for the year: wind and solar are likely to be in a dead heat with coal, and all carbon-emissions-free sources combined will account for roughly 40 percent of US electricity production.

Tracking trends

Having data through October necessarily provides an incomplete picture of 2023. There are several factors that can cause the later months of the year to differ from the earlier ones. Some forms of generation are seasonal—notably solar, which has its highest production over the summer months. Weather can also play a role, as unusually high demand for heating in the winter months could potentially require that older fossil fuel plants be brought online. It also influences production from hydroelectric plants, creating lots of year-to-year variation.

Finally, everything’s taking place against a backdrop of booming construction of solar and natural gas. So, it’s entirely possible that we will have built enough new solar over the course of the year to offset the seasonal decline at the end of the year.

Let’s look at the year-to-date data to get a sense of the trends and where things stand. We’ll then check the monthly data for October to see if any of those trends show indications of reversing.

The most important takeaway is that energy use is largely flat. Overall electricity production year-to-date is down by just over one percent from 2022, though demand was higher this October compared to last year. This is in keeping with a general trend of flat-to-declining electricity use as greater efficiency is offsetting factors like population growth and expanding electrification.

That’s important because it means that any newly added capacity will displace the use of existing facilities. And, at the moment, that displacement is happening to coal.

Can’t hide the decline

At this point last year, coal had produced nearly 20 percent of the electricity in the US. This year, it’s down to 16.2 percent, and only accounts for 15.5 percent of October’s production. Wind and solar combined are presently at 16 percent of year-to-date production, meaning they’re likely to be in a dead heat with coal this year and easily surpass it next year.

Year-to-date, wind is largely unchanged since 2022, accounting for about 10 percent of total generation, and it’s up to over 11 percent in the October data, so that’s unlikely to change much by the end of the year. Solar has seen a significant change, going from five to six percent of the total electricity production (this figure includes both utility-scale generation and the EIA’s estimate of residential production). And it’s largely unchanged in October alone, suggesting that new construction is offsetting some of the seasonal decline.

Coal is being squeezed out by natural gas, with an assist from renewables.

Eric Bangeman/Ars Technica

Hydroelectric production has dropped by about six percent since last year, causing it to slip from 6.1 percent to 5.8 percent of the total production. Depending on the next couple of months, that may allow solar to pass hydro on the list of renewables.

Combined, the three major renewables account for about 22 percent of year-to-date electricity generation, up about 0.5 percentage points since last year. They’re up by even more in the October data, placing them well ahead of both nuclear and coal.

Nuclear itself is largely unchanged, allowing it to pass coal thanks to the latter’s decline. Its output has been boosted by a new 1.1-gigawatt reactor that came online this year (a second at the same site, Vogtle in Georgia, is set to start commercial production at any moment). But that’s likely to be the end of new nuclear capacity for this decade; the challenge will be keeping existing plants open despite their age and high costs.

If we combine nuclear and renewables under the umbrella of carbon-free generation, then that’s up by nearly 1 percentage point since 2022 and is likely to surpass 40 percent for the first time.
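The arithmetic behind that 40 percent figure is simple enough to sketch in a few lines of Python. The generation numbers below are invented placeholders to show how category shares combine; they are not the EIA's actual data:

```python
# Hypothetical year-to-date generation by source, in terawatt-hours.
# These values are illustrative only -- not the EIA's actual figures.
generation_twh = {
    "natural_gas": 1470, "coal": 550, "nuclear": 640,
    "wind": 350, "solar": 200, "hydro": 200, "other": 60,
}

total = sum(generation_twh.values())
carbon_free = ("nuclear", "wind", "solar", "hydro")
share = sum(generation_twh[s] for s in carbon_free) / total

print(f"Carbon-free share: {share:.1%}")  # ~40.1% with these made-up inputs
```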

The only thing that’s keeping carbon-free power from growing faster is natural gas, which is the fastest-growing source of generation at the moment, going from 40 percent of the year-to-date total in 2022 to 43.3 percent this year. (It’s actually slightly below that level in the October data.) The explosive growth of natural gas in the US has been a big environmental win, since it creates the least particulate pollution of all the fossil fuels, as well as the lowest carbon emissions per unit of electricity. But its use is going to need to start dropping soon if the US is to meet its climate goals, so it will be critical to see whether its growth flatlines over the next few years.

Outside of natural gas, however, all the trends in US generation are good, especially considering that the rise of renewable production would have seemed like an impossibility a decade ago. Unfortunately, the pace is currently too slow for the US to have a net-zero electric grid by the end of the decade.



Injection of “smart insulin” regulates blood glucose levels for one week

Sugary treat treatment —

Tests in animals show the material works like the body’s own system.

Smart insulin has the potential to make injections far less frequent.

People with type 1 diabetes have to inject themselves multiple times a day with manufactured insulin to maintain healthy levels of the hormone, as their bodies do not naturally produce enough. The injections also have to be timed in response to eating and exercise, as any consumption or use of glucose has to be managed.

Research into glucose-responsive insulin, or “smart” insulin, hopes to improve the quality of life for people with type 1 diabetes by developing a form of insulin that needs to be injected less frequently, while providing control of blood-glucose levels over a longer period of time.

A team at Zhejiang University, China, has recently released a study documenting an improved smart insulin system in animal models—the current work doesn’t involve any human testing. Their insulin was able to regulate blood-glucose levels for a week in diabetic mice and minipigs after a single subcutaneous injection.

“Theoretically, [smart insulin is] incredibly important going forward,” said Steve Bain, clinical director of the Diabetes Research Unit at Swansea University, who was not involved in the study. “It would be a game changer.”

Polymer cage

The new smart insulin is based on a form of insulin modified with gluconic acid, which forms a complex with a polymer through chemical bonds and strong electrostatic attraction. When insulin is trapped in the polymer, its signaling function is blocked, allowing a week’s worth of insulin to be given via a single injection without a risk of overdose.

Crucial to the “glucose responsive” nature of this system is the fact that the chemical structures of glucose and gluconic acid are extremely similar, meaning the two molecules bind in very similar ways. When glucose meets the insulin-polymer complex, it can displace some of the bound insulin and form its own chemical bonds to the polymer. Glucose binding also disrupts the electrostatic attraction and further promotes insulin release.

By preferentially binding to the polymer, the glucose is able to trigger the release of insulin. And the extent of this insulin release depends on how much glucose is present: between meals, when the blood-glucose level is fairly low, only a small amount of insulin is released. This is known as basal insulin and is needed for baseline regulation of blood sugar.

But after a meal, when blood-glucose spikes, much more insulin is released. The body can now regulate the extra sugar properly, preventing abnormally high levels of glucose—known as hyperglycemia. Long-term effects of hyperglycemia in humans include nerve damage to the hands and feet and permanent damage to eyesight.

This system mimics the body’s natural process, in which insulin is also released in response to glucose.
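For intuition only, here is a toy dose-response sketch in Python of the kind of glucose-dependent release the paper describes. The half-maximal glucose level and Hill exponent are invented for illustration; they are not the kinetics reported in the study:

```python
def insulin_release_fraction(glucose_mM, k_glucose=10.0, hill=2.0):
    """Toy competitive-release curve: fraction of polymer-bound insulin
    displaced as glucose rises. k_glucose (half-maximal glucose level)
    and the Hill exponent are made up for illustration."""
    return glucose_mM**hill / (k_glucose**hill + glucose_mM**hill)

# Fasting (~5 mM) vs. post-meal (~15 mM) blood glucose:
print(f"basal release:     {insulin_release_fraction(5.0):.0%}")   # ~20%
print(f"post-meal release: {insulin_release_fraction(15.0):.0%}")  # ~69%
```

The qualitative behavior is the point: a low baseline release between meals and a steep rise when glucose spikes, mirroring the basal-versus-bolus pattern described above.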

Better regulation than standard insulin

The new smart insulin was tested in five mice and three minipigs—minipigs are often used as an animal model that’s more physiologically similar to humans. One of the three minipigs received a slightly lower dose of smart insulin, and the other two received a higher dose. The lower-dose pig showed the best response: its blood-glucose levels were tightly controlled and returned to a healthy value after meals.

During treatment, the other two pigs had glucose levels that were still above the range seen in healthy animals, although they were greatly reduced compared to pre-injection levels. The regulation of blood-glucose was also tighter compared to daily insulin injections.

It should be noted, though, that the minipig with the best response also had the lowest blood-glucose levels before treatment, which may explain why it seemed to work so well in this animal.

Crucially, these effects were all long lasting—better regulation could be seen a week after treatment. And injecting the animals with the smart insulin didn’t result in a significant immune response, which can be a common pitfall when introducing biomaterials to animals or humans.

Don’t sugarcoat it

The study is not without its limitations. Although long-term glucose regulation was seen in the mice and minipigs examined, only a few animals were involved in the study—five mice and three minipigs. And of course, there’s always the risk that the results of animal studies don’t completely track over to clinical trials in humans. “We have to accept that these are animal studies, and so going across to humans is always a bit of an issue,” said Bain.

Although more research is required before this smart insulin system can be tested in humans, this work is a promising step forward in the field.

Nature Biomedical Engineering, 2023. DOI: 10.1038/s41551-023-01138-7

Ivan Paul is a freelance writer based in the UK, finishing his PhD in cancer research. He is on X @ivan_paul_.



Getting to the bottom of how red flour beetles absorb water through their butts

On the third day of Christmas —

A unique group of cells pumps water into the kidneys to help harvest moisture from the air.

Who doesn’t thrill to the sight of a microscopic cross-section of a beetle’s rectum? You’re welcome.

Kenneth Veland Halberg

There’s rarely time to write about every cool science-y story that comes our way. So this year, we’re once again running a special Twelve Days of Christmas series of posts, highlighting one science story that fell through the cracks in 2023, each day from December 25 through January 5. Today: red flour beetles can use their butts to suck water from the air, helping them survive in extremely dry environments. Scientists are homing in on the molecular mechanisms behind this unique ability.

The humble red flour beetle (Tribolium castaneum) is a common pantry pest feeding on stored grains, flour, cereals, pasta, biscuits, beans, and nuts. It’s a remarkably hardy creature, capable of surviving in harsh arid environments due to its unique ability to extract fluid not just from grains and other food sources, but also from the air. It does this by opening its rectum when the humidity of the atmosphere is relatively high, absorbing moisture through that opening and converting it into fluid that is then used to hydrate the rest of the body.

Scientists have known about this ability for more than a century, but biologists are finally starting to get to the bottom (ahem) of the underlying molecular mechanisms, according to a March paper published in the Proceedings of the National Academy of Sciences. This will inform future research on how to interrupt this hydration process to better keep red flour beetle populations in check, since they are highly resistant to pesticides. They can also withstand even higher levels of radiation than the cockroach.

There are about 400,000 known species of beetle roaming the planet, although scientists believe there could be well over a million. Each year, as much as 20 percent of the world’s grain stores are contaminated by red flour beetles, grain weevils, Colorado potato beetles, and confused flour beetles, particularly in developing countries. Red flour beetles in particular are a popular model organism for scientific research on development and functional genomics. The entire genome was sequenced in 2008, and the beetle shares between 10,000 and 15,000 genes with the fruit fly (Drosophila), another workhorse of genetics research. But the beetle’s development cycle more closely resembles that of other insects than the fruit fly’s does.

Food security in developing nations is particularly affected by animal species like the red flour beetle, which has specialized in surviving in extremely dry environments, granaries included, for thousands of years.

Kenneth Halberg

The rectums of most mammals and insects absorb any remaining nutrients and water from the body’s waste products prior to defecation. But the red flour beetle’s rectum is a model of ultra-efficiency in that regard. The beetle can generate extremely high salt concentrations in its kidneys, enabling it to extract all the water from its own feces and recycle that moisture back into its body.

“A beetle can go through an entire life cycle without drinking liquid water,” said co-author Kenneth Veland Halberg, a biologist at the University of Copenhagen. “This is because of their modified rectum and closely applied kidneys, which together make a multi-organ system that is highly specialized in extracting water from the food that they eat and from the air around them. In fact, it happens so effectively that the stool samples we have examined were completely dry and without any trace of water.” The entire rectal structure is encased in a perinephric membrane.

Halberg et al. took scanning electron microscopy images of the beetle’s rectal structure. They also took tissue samples and extracted RNA from lab-grown red flour beetles, then used a new resource called BeetleAtlas for their gene expression analysis, hunting for any relevant genes.

One particular gene was expressed sixty times more in the rectum than any other. Halberg and his team eventually homed in on a group of secondary cells between the beetle’s kidneys and circulatory system called leptophragmata. This finding supports prior studies that suggested these cells might be relevant, since they are the only cells that interrupt the perinephric membrane, thereby enabling critical transport of potassium chloride. Translation: the cells pump salts into the kidneys to better harvest moisture from the beetle’s feces or from the air.
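Why pumping salt helps can be seen in a simplified textbook osmosis picture (not the paper's full model): a salty fluid equilibrates with air at a relative humidity equal to its water activity, so the rectal complex can pull water vapor out of the air whenever

\[ \mathrm{RH} > a_w \approx x_{\mathrm{water}}, \]

where \(a_w\) is the water activity of the rectal fluid and \(x_{\mathrm{water}}\) is its mole fraction of water (the Raoult's law approximation). The more potassium chloride the leptophragmata pump in, the lower \(a_w\) drops, and the drier the air from which the beetle can still harvest moisture.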

Model of the beetle’s insides and how it extracts water from the air.

Kenneth Halberg

The next step is to build on these new insights to figure out how to interrupt the beetle’s unique hydration process at the molecular level, perhaps by designing molecules that can do so. Those molecules could then be incorporated into more eco-friendly pesticides that target the red flour beetle and similar pests while not harming more beneficial insects like bees.

“Now we understand exactly which genes, cells and molecules are at play in the beetle when it absorbs water in its rectum. This means that we suddenly have a grip on how to disrupt these very efficient processes by, for example, developing insecticides that target this function and in doing so, kill the beetle,” said Halberg. “There is twenty times as much insect biomass on Earth than that of humans. They play key roles in most food webs and have a huge impact on virtually all ecosystems and on human health. So, we need to understand them better.”

PNAS, 2023. DOI: 10.1073/pnas.2217084120 (About DOIs).



Researchers argue back and forth about whether we’ve spotted an exomoon

That’s no moon! —

Years after Kepler shut down, people are arguing over whether it spotted exomoons.

Image of two planets orbiting together around a distant star.

In 2017, the astronomy world was abuzz at the announcement that exoplanet Kepler-1625b potentially had its own moon—an exomoon. This was the first hint anyone had seen of an exomoon, and was followed five years later by another candidate around the planet Kepler-1708b.

More than five thousand exoplanets have been discovered so far, and we don’t know for certain whether any of them have moons, which is what made these announcements so exciting. Exomoons provide more potentially habitable areas in which we can search for extraterrestrial life, and the study of moons can be a valuable window into the formation of the host planet.

But there has been much debate about these exomoon candidates, with multiple groups combing through the data obtained from the Kepler and Hubble space telescopes.

The most recent paper on the topic, published by astronomers in Germany, has come to the conclusion that the exomoon candidates around Kepler-1625b and Kepler-1708b are unlikely. Previous work has also cast doubt on the exomoon candidate around Kepler-1625b.

This is not a clear-cut case, though. David Kipping, the leader of the group that made both original discoveries and an assistant professor of astronomy at Columbia University, disagrees with the new analysis. He and his group are in the process of preparing a manuscript that responds to the latest publication.

A needle in a haystack

The most common method of detecting exoplanets is the transit method. This technique measures the brightness of a star, and looks for a small dip in brightness that corresponds to a planet transiting in front of the star.

Stellar photometry can be extended to look for exomoons, an approach pioneered by Kipping. As well as the main dip caused by the planet, if a moon is orbiting the planet you should be able to see an additional, smaller dip caused by the moon also shielding some of the star’s light.

An example of what a transit detection of an exomoon might look like.

As moons are smaller, they generate a smaller signal, making them more challenging to spot. But what makes this particular case even more challenging is that the host stars Kepler-1625 and Kepler-1708 aren’t that bright. This makes the light dip even fainter—in fact, these systems have to have large moons to be within the threshold of what the Kepler space telescope can detect.
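As a standard back-of-the-envelope relation (not specific to these systems), the fractional dip in starlight scales with the square of the size ratio:

\[ \delta \approx \left(\frac{R_{\mathrm{moon}}}{R_\star}\right)^2 \]

An Earth-sized moon crossing a Sun-like star would block only about \((6{,}371/696{,}000)^2 \approx 0.008\) percent of the light, which is why only unusually large moon candidates clear Kepler's detection threshold.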

Models, models, models

Until scientists get more data from the James Webb Space Telescope, or until future missions such as ESA’s PLATO launch, it’s all down to what they can do with the existing numbers.

“The aspects here that are relevant are how the data itself is processed, what physics you put in when you’re modelling that data, and then what possible false positive signals might be out there that could reproduce the sort of signal that you’re looking for,” Eamonn Kerins, senior lecturer in astronomy at the University of Manchester who was not involved with the study, told Ars. “I think this whole debate centers around those questions essentially,” he added.

One key phenomenon that needs accurate modelling is known as the stellar limb darkening effect. Stars, including our Sun, appear dimmer at their edge than at the centre due to effects of the stellar atmosphere. As this affects the apparent brightness of the star, it’s clearly important to understand in the context of searching for exomoons by measuring a star’s brightness.

“We have models for this, but we don’t really know exactly how a specific star behaves in terms of this stellar limb darkening effect,” said René Heller, lead author of the study and astrophysicist at the Max Planck Institute for Solar System Research, in an interview for Ars. How specific stars behave can be deduced, but this isn’t always trivial. By including improved models for stellar limb darkening, the authors found that they can explain signals previously attributed to an exomoon.
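One widely used parameterization (among several) is the quadratic limb darkening law,

\[ \frac{I(\mu)}{I(1)} = 1 - u_1(1-\mu) - u_2(1-\mu)^2, \qquad \mu = \cos\theta, \]

where \(\theta\) is the angle between the line of sight and the local stellar surface normal, and \(u_1\) and \(u_2\) are coefficients specific to each star. Part of the disagreement comes down to how those coefficients are chosen or fit.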

Data processing is also paramount, especially a type of processing known as detrending. This takes into account long-term variability in the brightness data that is caused by random stellar variation and instrument variability, among other things. The new research shows that the statistical outcome, moon or no moon, is extremely dependent on how you carry out this detrending.
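To make the idea concrete, here is a minimal Python sketch of polynomial detrending on synthetic data. The trend shape, noise level, and transit window are all invented for illustration; neither team's actual pipeline is this simple:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 50, 2000)            # time in days (synthetic)
trend = 1.0 + 3e-4 * t - 5e-6 * t**2    # slow instrumental/stellar drift
flux = trend + rng.normal(0, 2e-4, t.size)
flux[(t > 24.9) & (t < 25.1)] -= 1e-3   # a shallow transit-like dip

# Fit a low-order polynomial to out-of-transit data and divide it out.
# The polynomial order and the definition of "out of transit" are
# exactly the judgment calls the detrending debate is about.
mask = ~((t > 24.8) & (t < 25.2))
coeffs = np.polyfit(t[mask], flux[mask], deg=2)
detrended = flux / np.polyval(coeffs, t)
```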

What’s more, the authors say that the data obtained from the Hubble telescope, which is primarily where the claim for the moon around Kepler-1625b comes from, can’t be properly detrended and thus shouldn’t be relied on for exomoon searches.

Two sides

Until more data is obtained, this is likely to remain an ongoing scientific discussion with no definitive conclusion.

Kerins points out that Kipping and his team have been very measured in their announcements. “They’re very, very careful to not claim it as a cast-iron detection. They’ve done comprehensive testing of the data they’ve been given, and really I think the difference here is all about what physics you put in, how you process the data, and ultimately the fact that the Kepler data set is really on the edge of finding exomoons.”

Heller, though, remains unconvinced. “My impression is that in the Kepler data, we and also other teams have done what’s currently possible and there’s no compelling object that really sticks out.”

Moons far outnumber planets in our own Solar System—two hundred and ninety to eight to date—so it’s reasonable to assume that we will come across exomoons as we continue exploring the skies. “It would be quite extraordinary, I think, if we continue to go over the next few years and not find an exomoon,” said Kerins. “I think it can only be a matter of time.”

Nature Astronomy, 2023. DOI: 10.1038/s41550-023-02148-w

Ivan Paul is a freelance writer based in the UK, finishing his PhD in cancer research. He is on Twitter @ivan_paul_.



Watch sand defy gravity and flow uphill thanks to “negative friction”

On the second day of Christmas —

Applying magnetic forces to single iron oxide-coated particles spurs strange collective motion.

There’s rarely time to write about every cool science-y story that comes our way. So this year, we’re once again running a special Twelve Days of Christmas series of posts, highlighting one science story that fell through the cracks in 2023, each day from December 25 through January 5. Today: how applying magnetic forces to individual “micro-roller” particles spurs collective motion, producing some pretty counter-intuitive results.

Engineering researchers at Lehigh University have discovered that sometimes sand can actually flow uphill.

Lehigh University

We intuitively understand that the sand pouring through an hourglass, for example, forms a neat roughly pyramid-shaped pile at the bottom, in which the grains near the surface flow over an underlying base of stationary particles. Avalanches and sand dunes exhibit similar dynamics. But scientists at Lehigh University in Pennsylvania have discovered that applying a magnetic torque can actually cause sand-like particles to collectively flow uphill in seeming defiance of gravity, according to a September paper published in the journal Nature Communications.

Sand is pretty fascinating stuff from a physics standpoint. It’s an example of a granular material, since it acts both like a liquid and a solid. Dry sand collected in a bucket pours like a fluid, yet it can support the weight of a rock placed on top of it, like a solid, even though the rock is technically denser than the sand. So sand defies all those tidy equations describing various phases of matter, and the transition from flowing “liquid” to a rigid “solid” happens quite rapidly. It’s as if the grains act as individuals in the fluid form, but are capable of suddenly banding together when solidarity is needed, achieving a weird kind of “strength in numbers” effect.

Nor can physicists precisely predict an avalanche. That’s partly because of the sheer number of grains of sand in even a small pile, each of which will interact with several of its immediate neighboring grains simultaneously—and those neighbors shift from one moment to the next. Not even a supercomputer can track the movements of individual grains over time, so the physics of flow in granular media remains a vital area of research.

But grains of sand that collectively flow uphill? That is simply bizarre behavior. Lehigh University engineer James Gilchrist manages the Laboratory for Particle Mixing and Self-Organization and stumbled upon this odd phenomenon while experimenting with “micro-rollers”: polymer particles coated in iron oxide (a process called micro-encapsulation). He was rotating a magnet under a vial of micro-rollers one day and noticed they started to pile uphill. Naturally, he and his colleagues had to investigate further.

For their experiments, Gilchrist et al. attached neodymium magnets to a motorized wheel at 90-degree intervals, alternating the outward facing poles. The apparatus also included a sample holder and a USB microscope in a fixed position. The micro-rollers were prepared by suspending them in a glass vial containing ethanol and using a magnet to separate them from dust or any uncoated particles. Once the micro-rollers were clean, they were dried, suspended in fresh ethanol, and loaded onto the sample holder. A vibrating motor agitated the samples to produce flattened granular beds, and the motorized wheel was set in motion to apply magnetic torque. A gaussmeter measured the magnetic field strength relative to orientation.

Uphill granular flow of microrobotic microrollers. Credit: Lehigh University.

The results: each micro-roller began to rotate in response to the magnetic torque, creating pairs that briefly formed and then split, and increasing the magnetic force increased the particle cohesion. This in turn gave the micro-rollers more traction and enabled them to move more quickly, working in concert to counterintuitively flow uphill. In the absence of that magnetic torque, the micro-rollers flowed downhill normally. The torque-induced action was so unexpected that the researchers coined a new term to describe it: a “negative angle of repose” caused by a negative coefficient of friction.

“Up until now, no one would have used these terms,” said Gilchrist. “They didn’t exist. But to understand how these grains are flowing uphill, we calculated what the stresses are that cause them to move in that direction. If you have a negative angle of repose, then you must have cohesion to give a negative coefficient of friction. These granular flow equations were never derived to consider these things, but after calculating it, what came out is an apparent coefficient of friction that is negative.”
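One way to unpack that statement is through a simplified reading of the standard Mohr-Coulomb description of granular shear strength:

\[ \tau = c + \mu\,\sigma, \qquad \tan\theta_r \approx \mu_{\mathrm{apparent}}, \]

where \(\tau\) is the shear stress at failure, \(\sigma\) the normal stress, \(c\) the cohesion, and \(\mu\) the friction coefficient. For ordinary dry sand, \(c \approx 0\) and the angle of repose \(\theta_r\) is positive. If magnetically induced cohesion lets the pile sustain a slope that tilts the "wrong" way, so that \(\theta_r < 0\), then the apparent \(\mu\) that falls out of the same flow equations must be negative.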

It’s an intriguing proof of principle that could one day lead to new ways to control how substances mix or separate, as well as potential swarming microrobotics applications. The scientists have already started building tiny staircases with laser cutters and videotaping the micro-rollers climbing up and down them. One micro-roller can’t overcome the height of each step, but many working collectively can do so, per Gilchrist.

Nature Communications, 2023. DOI: 10.1038/s41467-023-41327-1 (About DOIs).

Listing image by Lehigh University



Science lives here: take a virtual tour of the Royal Institution in London

a special kind of place —

No fewer than 14 Nobel laureates have conducted ground-breaking research at the Institution.

The Royal Institution was founded in 1799 and is still located in the same historic building at 21 Albemarle Street in London.

If you’re a fan of science, and especially science history, no trip to London is complete without visiting the Royal Institution, browsing the extensive collection of artifacts housed in the Faraday Museum and perhaps taking in an evening lecture by one of the many esteemed scientists routinely featured—including the hugely popular annual Christmas lectures. (The lecture theater may have been overhauled to meet the needs of the 21st century but walking inside still feels a bit like stepping back through time.) So what better time than the Christmas season to offer a virtual tour of some of the highlights contained within the historic walls of 21 Albemarle Street?

The Royal Institution was founded in 1799 by a group of leading British scientists. This is where Thomas Young explored the wave theory of light (at a time when the question of whether light was a particle or wave was hotly debated); John Tyndall conducted experiments in radiant heat; Lord Rayleigh discovered argon; James Dewar liquefied hydrogen and invented the forerunner of the thermos; and father-and-son duo William Henry and William Lawrence Bragg invented x-ray crystallography.

No fewer than 14 Nobel laureates have conducted ground-breaking research at the Institution over the ensuing centuries, but the 19th-century physicist Michael Faraday is a major focus. In fact, there is a full-sized replica of Faraday’s magnetic laboratory—where he made so many of his seminal discoveries—in the original basement room where he worked, complete with an old dumbwaiter from when the room was used as a servants’ hall. Its arrangement is based on an 1850s painting by one of Faraday’s friends, and the room is filled with objects used by Faraday over the course of his scientific career.

The son of an English blacksmith, Faraday was apprenticed to a bookbinder at 14, a choice of profession that enabled him to read voraciously, particularly about the natural sciences. In 1813, a friend gave Faraday a ticket to hear the eminent scientist Humphry Davy lecture on electrochemistry at the Royal Institution. He was so taken by the presentation that he asked Davy to hire him. Davy initially declined, but shortly afterwards sacked his assistant for brawling, and hired Faraday to replace him. Faraday helped discover two new compounds of chlorine and carbon in those early days, learned how to make his own glass, and also invented an early version of the Bunsen burner, among other accomplishments.

  • Painting of the Royal Institution circa 1838, by Thomas Hosmer Shepherd.

    Public domain

  • Michael Faraday giving one of his famous Christmas lectures.

    Royal Institution

  • A Friday Evening Discourse at the Royal Institution; Sir James Dewar on Liquid Hydrogen, by Henry Jamyn Brooks, 1904

    Public domain

  • The Lecture Theatre as it looks today

  • Faraday’s magnetic laboratory in the basement of the Royal Institution

    Royal Institution

  • A page from one of Faraday’s notebooks

    Royal Institution

Faraday was particularly interested in the new science of electromagnetism, first discovered in 1820 by Hans Christian Ørsted. In 1821, Faraday discovered electromagnetic rotation—which converts electricity into mechanical motion via a magnet—and used that underlying principle to build the first electric motor. The Royal Institution’s collection includes the only surviving electric motor that Faraday built: a wire hanging down into a glass vessel with a bar magnet at the bottom. Faraday would fill the glass with mercury (an excellent conductor), then connect his apparatus to a battery, which sent electricity through the wire in turn. This created a magnetic field around the wire, and that field’s interaction with the magnet at the bottom of the glass vessel would cause the wire to rotate in a clockwise direction.
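In modern notation (which Faraday himself did not have), the driving physics is the magnetic force on a current-carrying element of wire:

\[ d\vec{F} = I\, d\vec{l} \times \vec{B} \]

Because the bar magnet's field is not parallel to the current in the hanging wire, this force points around the magnet's axis, continuously dragging the wire along its circular path.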

Ten years later, Faraday succeeded in showing that a jiggling magnet could induce an electrical current in a wire. Known as the principle of the dynamo, or electromagnetic induction, it became the basis of electric generators, which convert the energy of a changing magnetic field into an electrical current. One of Faraday’s induction rings is on display, composed of coils of wire wound on opposite sides of the ring, insulated with cotton. Passing electricity through one would briefly induce a current in the other. Also on display is one of Faraday’s generators: a bar magnet and a simple cotton-insulated tube wound with a coil of wire.
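In modern terms this is Faraday's law of induction,

\[ \mathcal{E} = -\frac{d\Phi_B}{dt}, \]

which says the induced voltage equals the rate of change of magnetic flux through the circuit; this is why the magnet has to keep moving for any current to flow.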

In yet another experiment, Faraday placed a piece of heavy leaded glass on a magnet’s poles to see how light would be affected by a magnet. He passed light through the glass and when he turned on the electromagnet, he found that the polarization of the light had rotated slightly. This is called the magneto-optical effect (or Faraday effect), demonstrating that magnetism is related not just to electricity, but also to light. The Royal Institution has a Faraday magneto-optical apparatus with which he “at last succeeded in… magnetizing a ray of light.” In 1845, Faraday discovered diamagnetism, a property of certain materials that give them a weak repulsion from a magnetic field.
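The size of that rotation is commonly summarized by the empirical relation

\[ \beta = V\,B\,d, \]

where \(\beta\) is the polarization rotation angle, \(B\) the field component along the light's path, \(d\) the path length through the material, and \(V\) the material's Verdet constant. Dense leaded glass has a relatively large Verdet constant, which is likely part of why Faraday's experiment succeeded.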

  • Equipment used by Faraday to make glass

  • Drawing of Faraday’s electromagnetic rotation experiment.

    Public domain

  • Faraday motor (electric magnetic rotation apparatus), 1822

    Royal Institution

  • Faraday’s dynamo (generator), October 1831

    Royal Institution

  • Faraday’s induction ring

    Royal Institution

  • Faraday’s magneto-optical apparatus

    Royal Institution

  • One of Faraday’s iron filings (1851) showing magnetic lines of force

    Royal Institution

  • Faraday’s original gold colloids are still active well over a century later

  • Shining a laser light through a gold colloid mixture produces the Faraday-Tyndall Effect.

    Royal Institution

Faraday concluded from all those experiments that magnetism was the center of an elaborate system of invisible curved tentacles (electric lines of force) that spread throughout space like the roots of trees branching through the earth. He was able to demonstrate these lines of force by coating sheets of paper with wax and placing them on top of bar magnets. When he sprinkled powdery iron filings on the surface, those iron filings were attracted to the magnets, revealing the lines of force. And by gently heating the waxed paper, he found that the iron filings would set on the page, preserving them.

In the 1850s, Faraday’s interests turned to the properties of light and matter. He made his own gold slides and shone light through them to observe the interactions. But commercial gold leaf, typically made by hammering the metal into thin sheets, was still much too thick for his purposes. So Faraday had to make his own via chemical means, which involved washing gold films. The resulting faint red fluid intrigued Faraday, and he kept samples in bottles, shining light through the fluids and noting an intriguing “cone effect” (now known as the Faraday-Tyndall Effect)—the result of particles of gold suspended in the fluid that were much too small to see.

One might consider Faraday an early nanoscientist, since these are now known as metallic nanoparticles. The Institution’s current state-of-the-art nanotechnology lab is appropriately located right across from Faraday’s laboratory in the basement. And even though Faraday’s gold colloids are well over a century old, they remain optically active. There’s no way to figure out why this might be the case without opening the bottles, but the bottles are too valuable as artifacts to justify doing that.

Plenty of other scientific luminaries have their work commemorated in the Royal Institution’s collection, including that of Faraday’s mentor, Humphry Davy, who discovered the chemical elements barium, strontium, sodium, potassium, calcium and magnesium. Early in the 19th century, there were several explosions in northern England’s coal mines caused by the lamps used by the miners accidentally igniting pockets of flammable gas. Davy was asked to come up with a safer lighting alternative.

  • Schematic for the Davy lamp

    Public domain

  • Humphry Davy’s miner’s lamp (left) displayed alongside his rival George Stephenson’s lamps

    Royal Institution

  • Schematic for John Tyndall’s radiant heat apparatus

    Royal Institution

  • Tyndall’s radiant heat tube

    Royal Institution

  • Tyndall’s blue sky tube, 1869

    Royal Institution

  • Title page of Tyndall’s Heat: A Mode of Motion

    Paul Wilkinson/Royal Institution

After experimenting with several prototypes, Davy finally settled on a simple design in 1815 consisting of a “chimney” made of wire gauze to enclose the flame. The gauze absorbed heat to prevent igniting flammable gas but still let through sufficient light. The invention significantly reduced fatalities among coal miners. Davy had a rival, however, in a mining engineer named George Stephenson, who independently developed his own design that was remarkably similar to Davy’s. Samples of both are displayed in the Institution’s lower ground floor “Light Corridor.” Davy’s lamp would ultimately triumph, while Stephenson later invented the first steam-powered railroad locomotive.

Atmospheric physicist John Tyndall was a good friend of Faraday and shared the latter’s gift for public lecture demonstrations. His experiments on radiation and the heat-absorptive power of gases were undertaken with an eye toward developing a better understanding of the physics of molecules. Among the Tyndall artifacts housed in the Royal Institution is his radiant heat tube, part of an elaborate experimental apparatus he used to measure the extent to which infrared radiation was absorbed and emitted by various gases filling its central tube. By this means he concluded that water vapor absorbs more radiant heat than atmospheric gases, and hence that vapor is crucial for moderating Earth’s climate via a natural “greenhouse effect.”

The collection also includes Tyndall’s “blue sky apparatus,” which the scientist used to explain why the sky is blue during the day and takes on red hues at sunset—namely, particles in the Earth’s atmosphere scatter sunlight and blue light is scattered more strongly than red light. (It’s the same Faraday-Tyndall effect observed when shining light through Faraday’s gold colloids.)
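Tyndall's observation is now explained by Rayleigh scattering, whose strength rises steeply as wavelength shrinks:

\[ I_{\mathrm{scattered}} \propto \frac{1}{\lambda^4} \]

Blue light near 450 nm is therefore scattered roughly \((700/450)^4 \approx 6\) times more strongly than red light near 700 nm, enough to color the daytime sky blue and to redden sunsets, when the light's long path through the atmosphere has had most of the blue scattered out of it.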

  • James Dewar in the Royal Institution, circa 1900

    Public domain

  • A Dewar flask

    Royal Institution

  • The x-ray spectrometer developed by William Henry Bragg.

    Royal Institution

  • Bragg’s rock salt model

On Christmas Day, 1892, James Dewar exhibited his newly invented Dewar flask at the Royal Institution for the first time; he used it for his cryogenic experiments on liquefying gases. Back in 1872, Dewar and Peter Tait had built a vacuum-insulated vessel to keep things warm, and Dewar adapted that design for his flask, designed to keep things cold—specifically, cold enough to maintain the extremely low temperatures at which gases transitioned into liquid form. Dewar failed to patent his invention, however; the patent eventually went to the Thermos company in 1904, which rebranded the product to keep liquids hot as well as cold.

As for William Henry Bragg, he studied alpha, beta, and gamma rays early in his career and hypothesized that both gamma rays and x-rays had particle-like properties. This was bolstered by Max von Laue’s Nobel Prize-winning discovery that crystals could diffract x-rays. Bragg and his son, William Lawrence—then a student at Trinity College, Cambridge—began conducting their own experiments. Bragg père invented a special “ionization spectrometer,” in which a crystal could be rotated to precise angles so that the different scattering patterns of x-rays could be measured. The pair used the instrument to determine the structure of crystals and molecules, winning the 1915 Nobel Prize in Physics for their efforts. That spectrometer, the prototype of today’s x-ray diffractometers, is still housed in the Royal Institution, as well as their model of the atomic structure of rock salt.
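The working principle of that spectrometer is captured by Bragg's law,

\[ n\lambda = 2d\sin\theta, \]

which says x-rays of wavelength \(\lambda\) reflecting off crystal planes spaced a distance \(d\) apart interfere constructively only at angles \(\theta\) where the extra path length is a whole number \(n\) of wavelengths. Measuring those angles reveals the plane spacings, and from them the crystal structure.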



People can tell what you want to know when you shake wrapped Christmas gifts

On the first day of Christmas —

We can tell if it’s about how many objects are inside, or the shape of those objects.

Shake, shake, shake: this adorable young child would love to guess what he’s getting for Christmas this year.

Johns Hopkins University

There’s rarely time to write about every cool science-y story that comes our way. So this year, we’re once again running a special Twelve Days of Christmas series of posts, highlighting one science story that fell through the cracks in 2023, each day from December 25 through January 5. Today: New research shows it’s incredibly easy for people watching others shake boxes to tell what they’re up to.

Christmas Day is a time for opening presents and finally ending the suspense of what one is receiving this year, but chances are some of us may have already guessed what’s under the wrapping—perhaps by strategically shaking the boxes for clues about their contents. According to a November paper published in the Proceedings of the National Academy of Sciences, if someone happened to see you shaking a wrapped gift, they would be able to tell from those motions what you were trying to learn by doing so.

“There are few things more delightful than seeing a child’s eyes light up as they pick up a present and wonder what might be inside,” said co-author Chaz Firestone of Johns Hopkins University, who studies how vision and thought interact. “What our work shows is that your mind is able to track the information they are seeking. Just as they might be able to tell what’s inside the box by shaking it around, you can tell what they are trying to figure out when they shake it.” Christmas presents are “the perfect real-life example of our experiment.”

According to Firestone et al., there is a large scientific literature devoted to studying how people represent and interpret basic actions like walking, reaching, lifting, eating, chasing, or following. It’s a vital ability that helps us anticipate the behavior of others. These are all examples of pragmatic actions with a specific aim, whether it be retrieving an object or moving from one place to the next. Other kinds of actions might be communication-oriented, such as waving, pointing, or assuming an aggressive (or friendly) posture.

The JHU study focused on so-called “epistemic” actions, in which one is seeking information: dipping a toe into the bathtub to see how hot it is, for example, testing a door to see if it is locked, or shaking a wrapped box to glean information about what might be inside—like a child trying to guess whether a wrapped Christmas present contains Lego blocks or a teddy bear. “Epistemic actions pervade our lives, and recognizing them does, too,” the authors wrote, citing the ability to tell that a “meandering” campus visitor needs directions, or that someone rifling through shallow drawers is probably looking for keys or similar small objects.

People watched other people shake wrapped boxes for science.

For the first experiment, 16 players were asked to shake opaque boxes. In the first round, they tried to guess the number of objects inside the box (in this case, whether there were five or 15 US nickels). In the second, they tried to guess the shape of a geometric solid inside the box (either a sphere or a cube). All the players scored perfectly in both rounds—an expected outcome, given the simplicity of the task. The videos of those rounds were then placed online, and 100 different study participants (“observers”) were asked to watch two videos of the same player and determine which video was from the first “guess the number” round and which was from the second “guess the shape” round. Almost all the observers guessed correctly.

This was intriguing evidence that the observers could indeed infer the goal of the shaking (what the game players were trying to learn) simply by interpreting their motions. But the researchers wondered to what extent the success of the observers relied on the game players’ success at guessing either the number or shape of objects. So they tweaked the box-shaking game to produce more player error. This time, the videotaped players were asked to determine first whether the box held 9, 12, or 16 nickels, and second, whether the box contained a sphere, cylinder, or cube. Only four out of 18 players guessed correctly. But the success rate of 100 new observers who watched the videos remained the same.

Firestone et al. ran three more variations on the basic experiment to refine their results. With each iteration, most of the players performed shaking motions that were different depending on whether the round involved numbers or shapes, and most of the observers (500 in total) successfully inferred what the players were trying to learn by watching those shaking motions. “When you think about all the mental calculations someone must make to understand what someone else is trying to learn, it’s a remarkably complicated process,” said Firestone. “But our findings show it’s something people do easily.”

PNAS, 2023. DOI: 10.1073/pnas.2303162120 (About DOIs).



Corvids seem to handle temporary memories the way we do

Working on memory —

Birds show evidence that they lump temporary memories into categories.

A jackdaw tries to remember what color it was thinking of.

Humans tend to think that we are the most intelligent life-forms on Earth, followed closely by our near relatives such as chimps and gorillas. But there are some areas of cognition in which Homo sapiens and other primates are not unmatched. What other animal’s brain could possibly operate at a human’s level, at least when it comes to one function? Birds—again.

This is far from the first time that bird species such as corvids and parrots have shown that they can think like us in certain ways. Jackdaws are clever corvids that belong to the same family as crows and ravens. After putting a pair of them to the test, an international team of researchers saw that the birds’ working memory operates the same way as that of humans and higher primates. All of these species use what’s termed “attractor dynamics,” where they organize information into specific categories.

Unfortunately for them, that means they also make the same mistakes we do. “Jackdaws (Corvus monedula) have similar behavioral biases as humans; memories are less precise and more biased as memory demands increase,” the researchers said in a study recently published in Communications Biology.

Remembering not to forget

Working memory is where we hang on to items for a brief period of time—like a postal code looked up in one browser tab and typed into a second. It can hold everything from numbers and words to images and concepts. But these memories deteriorate quickly, and the capacity is limited—the more things we try to remember, the less likely the brain is going to remember them all correctly.

Attractor dynamics give the brain an assist with working memory by taking sensory input, such as color, and categorizing it. The highly specific red shade “Fire Lily” might fade from working memory quickly, and fewer specifics will stick around as time passes, yet it will still be remembered as “red.” You lose specifics first, but hang on to the general idea longer.

Aside from time, the other thing that kills working memory is distractions. Less noise—meaning distracting factors inside and outside the brain—will make it easier to distinguish Fire Lily among the other reds. If a hypothetical customer were browsing paint swatches for Sandstone (a taupe) and London Fog (a gray) in addition to Fire Lily, remembering each color accurately would become even more difficult because of the increased demands on working memory.

Bias can also blur working memory and cause the brain to remember some red hues more accurately than others, especially if the brain compartmentalizes them all under “red.” This can happen when a particular customer has a certain idea of the color red that leans warmer or cooler than Fire Lily. If they view red as leaning slightly warmer than Fire Lily, they might believe a different, warmer red is Fire Lily.
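As a toy illustration of how attractor dynamics can produce exactly this bias, here is a minimal Python sketch. The attractor positions, pull strength, noise level, and step count are all invented for illustration and are not fit to the study's data:

```python
import numpy as np

rng = np.random.default_rng(1)
# Category centers on a simplified hue wheel, in degrees (illustrative).
attractors = np.array([0.0, 120.0, 240.0])

def recall(true_hue, steps=50, pull=0.05, noise=2.0):
    """Toy working-memory trace: each time step adds noise and nudges
    the remembered hue toward the nearest category center."""
    hue = float(true_hue)
    for _ in range(steps):
        nearest = attractors[np.argmin(np.abs(attractors - hue))]
        hue += pull * (nearest - hue) + rng.normal(0.0, noise)
    return hue

# A hue near, but not at, the "red" attractor (0 degrees) tends to
# drift toward it, reproducing the warmer/cooler bias described above.
print(recall(15.0))
```

The remembered value stays roughly categorizable as "red" even as its precise hue drifts, which is the trade-off attractor dynamics buys: less precision, more robustness.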

In living color

To find out if corvids process stimuli using short-term memory with attractor dynamics, the researchers subjected two jackdaws to a variety of tests that involved remembering colors. Each bird had to peck on a white button to begin the test. They were then shown a color—the target color—before being shown a chart of 64 colors. The jackdaws had to look at that chart and peck the color they had previously been shown. A correct answer would get them their favorite treat, while responses that were close but not completely accurate would get them other treats.

While the birds performed well with just one color, their accuracy went down as the researchers challenged them to remember more target colors from the chart at once. They were more likely to pick colors that were close to, but not exactly, the target colors they had been shown—likely because there was a greater load on their short-term memory.

This is what we’d see if a customer had to remember not only Fire Lily, but Sandstone and London Fog. The only difference is that we humans would be able to read the color names, and the jackdaws only found out they were wrong when they didn’t get their favorite treat.

“Despite vastly different visual systems and brain organizations, corvids and primates show similar attractor dynamics, which can mitigate noise in visual working memory representations,” the researchers said in the same study.

How and why birds evolved attractor dynamics still needs to be understood. Because avian eyesight differs from human eyesight, there could have been differences in color perception that the research team was unable to account for. However, it seems that the same mechanisms for working memory that evolved in humans and other primates also evolved separately in corvids. “Birdbrain” should be taken as a compliment.

Communications Biology, 2023. DOI: 10.1038/s42003-023-05442-5

Corvids seem to handle temporary memories the way we do Read More »

people-exaggerate-the-consequences-of-saying-no-to-invites

People exaggerate the consequences of saying no to invites

Just say no —

People are more understanding of the reasons for rejections than most of us think.

A green envelope with a white card within it.

Enlarge / The invitation might be nice, but you can feel free to say no.

The holidays can be a time of parties, events, dinners, outings, get-togethers, impromptu meetups—and stress. Is it really an obligation to say yes to every single invite? Is not showing up to Aunt Tillie’s annual ugly sweater party this once going to mean a permanent ban? Turning down some of those invitations waiting impatiently for an RSVP can feel like a risk.

But wait! Turning down an invite won’t necessarily have the harsh consequences that are often feared (especially this time of year). A group of researchers led by psychologist and assistant professor Julian Givi of West Virginia University put test subjects through a series of experiments to see if a host’s reaction to a declined invitation would really be as awful as the invitee feared. In the experiments, those who declined invitations were not guilted or blacklisted by the inviters. It turns out that hosts were not as upset as invitees thought they would be when someone couldn’t make it.

“Invitees have exaggerated concerns about how much the decline will anger the inviter, signal that the invitee does not care about the inviter, make the inviter unlikely to offer another invitation in the future, and so forth,” the researchers said in a study published by the American Psychological Association.

You’re invited…now what?

Why are we so nervous that declining invitations will annihilate our social lives? Appearing as if we don’t care about the host is one obvious reason. The research team also points to an additional explanation: we mentally exaggerate how much the inviter focuses on the rejection and underestimate how much they consider what might be going on in our heads and in our lives. This makes us believe there is no way the inviter will be understanding about any excuse.

All this anxiety means we often end up reluctantly dragging ourselves to a holiday movie or dinner or that infamous ugly sweater party, and saying yes to every single invite, even if it eventually leads to holiday burnout.

To determine if our fears are justified, the psychologists who ran the study focused on three things. The first was declining invitations for fun social activities, such as ice skating in the park. The second focus was how much invitees exaggerated the expected consequences of declining. Finally, the third focus was on how invitees also exaggerated how much hosts were affected by the rejection itself, as opposed to the reasons the invitee gave for turning down the invite.

The show (or party, or whatever) must go on

There were five total experiments that assessed whether someone declining an invitation felt more anxious about it than they should have. In these experiments, invitees were the subjects who had to turn down an invitation, while hosts were the subjects who were tasked with reacting to a declined invitation.

The first experiment had subjects imagining that a hypothetical friend invited them to a museum exhibit, but they turned the invitation down. The invitee then had to describe the possible negative consequences of saying no. Other subjects in this experiment were told to imagine being the one who invited the friend who turned them down, and then report how they would feel.

Most of those imagining they were the invitees overestimated what the reaction of the host would be.

Invitees predicted that a rejected host would experience anger and disappointment and would assume the invitee didn’t care enough about the host. Long term, they also expected that their relationship with the host would be damaged. They weren’t especially concerned about not being invited to future events or that hosts would retaliate by turning them down if they issued invites.

The four remaining experiments slightly altered the circumstances and measured these same potential consequences, obtaining similar results. The second experiment used hosts and invitees who were couples in real life, and who gave each other actual invitations and rejections instead of just imagining them. Invitees again overestimated how negative the hosts’ reactions would be. In the third experiment, outside observers were asked to read a summary of the invitation and rejection, then predict hosts’ reactions. The observers again thought the inviters would react much more negatively than they actually did.

In the fourth experiment, stakes were higher because subjects were told to imagine the invitation and rejection scenario involving a real friend, albeit one who was not present for the experiment. Invitees had to predict how negative their friend’s reaction would be to their response and also their friend’s opinion on why they might have declined. Those doing the inviting had to describe their reactions to a rejection and predict their friend’s expectations about how they would react. Invitees tended to predict more negative reactions than hosts did.

Finally, the fifth experiment also had subjects working individually, this time putting themselves in the place of both the host and invitee. They had to read and respond to an invitation rejection scenario from the perspective of both roles, with the order they handled host and invitee randomized. Those who took the host role first realized that hosts usually empathize with the reasons someone is not able to attend, making them unlikely to predict highly negative reactions to a declined invitation when they were asked later.

Overestimation

Despite their differences, these experiments all point in a similar direction. “Consistent with our theorizing, invitees tended to overestimate the negative ramifications of the invitation decline,” the researchers said in the same study.

Evidently, Aunt Tillie will not be gravely disappointed if her favorite niece or nephew cannot make it to her ugly sweater party this year—some events just happen to be scheduled at especially inconvenient times. This study, however, didn’t test the ramifications of declining invites for more significant but less frequent events, such as weddings and baby showers. Based on the results for smaller events, it’s likely that the thought of turning down such an invite will produce even more anxiety. The key question is whether hosts will be less understanding for big events.

Givi and his team still note that accepting invitations can have positive effects. Human beings benefit from being around other people, and isolation can be detrimental. Still, we need to remember that too much of a good thing can be too much—everyone needs time to recharge. Even with the heavy feeling of obligation that comes with being invited somewhere, turning down one or two invites will probably not start a holiday apocalypse—unless your aunt is an exception.

Journal of Personality and Social Psychology, 2023. DOI: 10.1037/pspi0000443

People exaggerate the consequences of saying no to invites Read More »

banks-use-your-deposits-to-loan-money-to-fossil-fuel,-emissions-heavy-firms

Banks use your deposits to loan money to fossil-fuel, emissions-heavy firms

Money for something —

Your $1,000 in the bank creates emissions equal to a flight from NYC to Seattle.

A hand inserting a bank card into a cash machine.

When you drop money in the bank, it looks like it’s just sitting there, ready for you to withdraw. In reality, your institution makes money on your money by lending it elsewhere, including to the fossil fuel companies driving climate change, as well as emissions-heavy industries like manufacturing.

So just by leaving money in a bank account, you’re unwittingly contributing to worsening catastrophes around the world. According to a new analysis, for every $1,000 the average American keeps in savings, they indirectly create annual emissions equivalent to flying from New York to Seattle. “We don’t really take a look at how the banks are using the money we keep in our checking account on a daily basis, where that money is really circulating,” says Jonathan Foley, executive director of Project Drawdown, which published the analysis. “But when we look under the hood, we see that there’s a lot of fossil fuels.”

By switching to a climate-conscious bank, you could reduce those emissions by about 75 percent, the study found. In fact, if you moved $8,000—the median balance for US customers—the reduction in your indirect emissions would be twice the direct emissions you’d avoid by switching to a vegetarian diet.

Put another way: You as an individual have a carbon footprint—by driving a car, eating meat, running a gas furnace instead of a heat pump—but your money also has a carbon footprint. Banking, then, is an underappreciated yet powerful avenue for climate action on a mass scale. “Not just voting every four years, or not just skipping the hamburger, but also where my money sits, that’s really important,” says Foley.

Just as you can borrow money from a bank, so too do fossil fuel companies and the companies that support that industry—think of the firms building pipelines and other infrastructure. “Even if it’s not building new pipelines, for a fossil fuel company to be doing just its regular operations—whether that’s maintaining the network of gas stations that it owns, or maintaining existing pipelines, or paying its employees—it’s going to need funding for that,” says Paddy McCully, senior analyst at Reclaim Finance, an NGO focused on climate action.

A fossil fuel company’s need for those loans varies from year to year, given the fluctuating prices of those fuels. That’s where you, the consumer, come in. “The money that an individual puts into their bank account makes it possible for the bank to then lend money to fossil fuel companies,” says Richard Brooks, climate finance director at Stand.earth, an environmental and climate justice advocacy group. “If you look at the top 10 banks in North America, each of them lends out between $20 billion and $40 billion to fossil fuel companies every year.”

The new report finds that on average, 11 of the largest US banks lend 19.4 percent of their portfolios to carbon-intensive industries. (The American Bankers Association did not immediately respond to a request to comment for this story.) To be very clear: Oil, gas, and coal companies wouldn’t be able to keep producing these fuels—when humanity needs to be reducing carbon emissions dramatically and rapidly—without these loans. New fossil fuel projects aren’t simply fleeting endeavors, but will operate for years, locking in a certain amount of emissions going forward.
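For a sense of how a per-depositor “financed emissions” estimate like the one above can be assembled, here is a back-of-envelope sketch. The 19.4 percent portfolio share comes from the report; the emissions intensity of lending is an assumed, purely illustrative number, not a figure from the analysis.

```python
# Back-of-envelope estimate of the emissions financed by a deposit.
deposit_usd = 1_000
carbon_intensive_share = 0.194     # 19.4% of portfolios, per the report
tco2e_per_dollar_lent = 0.0015     # assumed intensity of carbon-intensive lending

financed_emissions = deposit_usd * carbon_intensive_share * tco2e_per_dollar_lent
print(f"{financed_emissions:.2f} tCO2e per year")  # 0.29 with these inputs

# For scale: a one-way New York-Seattle economy seat is very roughly
# 0.3 tCO2e per passenger, the comparison the analysis draws.
```

With these made-up inputs the result happens to land near the report’s flight comparison; the actual analysis is presumably built on much more detailed lending and emissions data.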

At the same time, Brooks says, big banks are under-financing the green economy. As a civilization, we’re investing in the wrong kind of energy if we want to avoid the ever-worsening effects of climate change. Yes, 2022 was the first year that climate finance surpassed the trillion-dollar mark. “However, the alarming aspect is that climate finance must increase by at least fivefold annually, as swiftly as possible, to mitigate the worst impacts of climate change,” says Valerio Micale, senior manager of the Climate Policy Initiative. “An even more critical consideration is that this cost, which would accumulate to $266 trillion until 2050, pales in comparison to the costs of inaction, estimated at over $2,000 trillion over the same period.”

Smaller banks, at least, are less likely to be providing money for the fossil fuel industry. A credit union operates more locally, so it’s much less likely to be fronting money for, say, a new oil pipeline. “Big fossil fuel companies go to the big banks for their financing,” says Brooks. “They’re looking for loans in the realm of hundreds of millions of dollars, sometimes multibillion-dollar loans, and a credit union wouldn’t be able to provide that.”

This makes banking a uniquely powerful lever to pull when it comes to climate action, Foley says. Compared to switching to vegetarianism or veganism to avoid the extensive carbon emissions associated with animal agriculture, money is easy to move. “If large numbers of people start to tell their financial institutions that they don’t really want to participate in investing in fossil fuels, that slowly kind of drains capital away from what’s available for fossil fuels,” says Foley.

While the new report didn’t go so far as to exhaustively analyze the lending habits of the thousands of banks in the US, Foley says there’s a growing number that deliberately don’t invest in fossil fuels. If you’re not sure about what your bank is investing in, you can always ask. “I think when people hear we need to move capital out of fossil fuels into climate solutions, they probably think only Warren Buffett can do that,” says Foley. “That’s not entirely true. We can all do a little bit of that.”

This story originally appeared on wired.com.

Banks use your deposits to loan money to fossil-fuel, emissions-heavy firms Read More »

rocket-report:-vulcan-stacked-for-launch;-starship-aces-test-ahead-of-third-flight

Rocket Report: Vulcan stacked for launch; Starship aces test ahead of third flight

Electron returned to flight successfully this week.

Enlarge / Electron returned to flight successfully this week.

Rocket Lab

Welcome to Edition 6.24 of the Rocket Report! This will be the final edition of this newsletter until January 4—hey, space enthusiasts need a holiday break too! And given all that’s expected to happen in 2024 in the world of launch, a bit of a recharge seems like a smart move. Stephen and I wish everyone happy holidays and a healthy and prosperous new year. Until then!

As always, we welcome reader submissions, and if you don’t want to miss an issue, please subscribe using the box below (the form will not appear on AMP-enabled versions of the site). Each report will include information on small-, medium-, and heavy-lift rockets as well as a quick look ahead at the next three launches on the calendar.

Ranking the top 10 US launch companies of 2023. Oops, we did it again and published a list of the most accomplished US commercial launch companies. It’s no surprise that SpaceX is atop the list, but what comes after is more intriguing, including a new company in second position. I hope the list sparks debate, discussion, and appreciation for the challenge of operating a successful rocket company.

This is a really hard business … The article closes with this message, which I think is a fitting way to end the calendar year and kick off the holiday season: “As ever, I remain in awe of all the talented engineers and business people out there trying to make a go of it in the launch industry. This is a difficult and demanding business, replete with problems. I salute your hard work and hope for your success.”

New Shepard finally flies again. With redesigned engine components, Blue Origin’s New Shepard rocket took off from West Texas and flew to the edge of space on Tuesday with a package of scientific research and technology demonstration experiments, Ars reports. This was the first flight of Blue Origin’s New Shepard rocket since September 12, 2022, when an engine failure destroyed the booster and triggered an in-flight abort for the vehicle’s pressurized capsule during an uncrewed flight.

Does “soon” really mean soon? … It took 15 months for Blue Origin to return to flight with New Shepard, but Tuesday’s successful launch puts the company on a path to resuming human missions. So when will Blue Origin start flying people again? “Following a thorough review of today’s mission, we look forward to flying our next crewed flight soon,” said Erika Wagner, a longtime Blue Origin manager who co-hosted the company’s webcast of Tuesday’s flight. (submitted by EllPeaTea and Ken the Bin)


Electron successfully returns to flight. Rocket Lab launched a Japanese radar imaging satellite on the first flight of its Electron rocket since a failure nearly three months ago, Space News reports. The Electron lifted off from the company’s Launch Complex 1 in New Zealand at 11:05 pm ET on December 14 and afterward deployed its payload, the QPS-SAR-5 or Tsukuyomi-1 satellite, for the Japanese company iQPS.

A record number of launches this year … The launch was the first for Electron since a September 19 failure during a launch of another radar-imaging satellite for Capella Space. On that mission, the first stage performed as expected, but the second stage’s engine appeared to shut down immediately after ignition, preventing it from reaching orbit. The launch was the 10th flight of the Electron this year, including one launch of a suborbital version of Electron called HASTE. (submitted by Ken the Bin)

Shetland approved for UK launches. SaxaVord Spaceport on the small island of Unst has been given approval from the Civil Aviation Authority to begin orbital launches in 2024, the BBC reports. It will be the first fully licensed spaceport in Western Europe able to launch vertically into orbit. The license permits up to 30 launches a year, carrying satellites and other payloads into space.

Launches this summer? … The site, the first spaceport in Scotland, has several launch operators around the world currently developing rockets to fly from it. German rocket firm HyImpulse is anticipated to attempt suborbital launches as early as this August. Full orbital launches are expected to take place at SaxaVord from 2025. Cornwall Spaceport was the UK’s first licensed spaceport; however, its rockets are launched horizontally, carried by an aircraft. (submitted by gizmo23 and Ken the Bin)

Rocket Report: Vulcan stacked for launch; Starship aces test ahead of third flight Read More »