There’s rarely time to write about every cool science-y story that comes our way. So this year, we’re once again running a special Twelve Days of Christmas series of posts, highlighting one science story that fell through the cracks in 2023, each day from December 25 through January 5. Today: Using markerless motion capture technology to determine what makes the best free throw shooters in basketball.
Basketball season is in full swing, and in a close game, the team that makes the highest percentage of free throws can often eke out the win. A better understanding of the precise biomechanics of the best free-throw shooters could translate into critical player-performance improvement. Researchers at the University of Kansas in Lawrence used markerless motion-capture technology to do just that, reporting their findings in an August paper published in the journal Frontiers in Sports and Active Living.
“We’re very interested in analyzing basketball shooting mechanics and what performance parameters differentiate proficient from nonproficient shooters,” said co-author Dimitrije Cabarkapa, director of the Jayhawk Athletic Performance Laboratory at the University of Kansas. “High-speed video analysis is one way that we can do that, but innovative technological tools such as markerless motion capture systems can allow us to dig even deeper into that. In my opinion, the future of sports science is founded on using noninvasive and time-efficient testing methodologies.”
Scientists are sports fans like everyone else, so it’s not surprising that a fair amount of prior research has gone into various aspects of basketball. For instance, there has been considerable debate on whether the “hot hand” phenomenon in basketball is a fallacy or not—that is, when players make more shots in a row than statistics suggest they should. A 1985 study proclaimed it a fallacy, but more recent mathematical analysis (including a 2015 study examining the finer points of the law of small numbers) from other researchers has provided some vindication that such streaks might indeed be a real thing, although it might only apply to certain players.
Some 20 years ago, Larry Silverberg and Chia Tran of North Carolina State University developed a method to computationally simulate the trajectories of millions of basketballs and used it to examine the mathematics of the free throw. Per their work, in a perfect free throw, the basketball has a 3 hertz backspin as it leaves the player’s fingertips, the launch angle is about 52 degrees, and the launch speed is fairly slow, ensuring the greatest probability of making the basket. Of those variables, launch speed is the most difficult for players to control. The aim point also matters: Players should aim at the back of the rim, which is more forgiving than the front.
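The role of that roughly 52-degree launch angle can be illustrated with basic projectile kinematics. The sketch below is a simplification that ignores drag and the Magnus effect from backspin, and the release height and free-throw distance are assumed values for illustration, not figures from Silverberg and Tran's study. It solves for the launch speed needed to put the ball's center through the hoop at a given angle:

```python
import math

def launch_speed(angle_deg, dx=4.19, dy=3.05 - 2.1, g=9.81):
    """Speed (m/s) needed for a point-mass ball to reach the hoop center.

    dx: horizontal distance from release point to hoop center (m, assumed)
    dy: hoop height minus release height (m, assumed)

    Derived by eliminating time from the projectile equations:
    dy = dx*tan(th) - g*dx^2 / (2*v^2*cos^2(th))
    """
    th = math.radians(angle_deg)
    denom = 2 * math.cos(th) ** 2 * (dx * math.tan(th) - dy)
    return math.sqrt(g * dx ** 2 / denom)

for ang in (46, 52, 58):
    print(f"{ang} deg -> {launch_speed(ang):.2f} m/s")
```

With these assumptions, the required speed sits in a shallow minimum near 52 degrees (roughly 7 m/s), which is one way to see why a slow launch and an angle in the low 50s go together.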
There was also a 2021 study by Malaysian scientists that analyzed the optimal angle of a basketball free throw, based on data gleaned from 30 NBA players. They concluded that a player’s height is inversely proportional to the initial velocity and optimal throwing angle, and that the latter is directly proportional to the time taken for a ball to reach its maximum height.
Cabarkapa’s lab has been studying basketball players’ performance for several years now, including how eating breakfast (or not) impacts shooting performance, and what happens to muscles when players overtrain. They published a series of studies in 2022 assessing the effectiveness of the most common coaching cues, like “bend your knees,” “tuck your elbow in,” or “release the ball as high as possible.” For one study, Cabarkapa et al. analyzed high-definition video of free-throw shooters for kinematic differences between players who excel at free throws and those who don’t. The results pointed to greater flexion in hip, knee, and ankle joints resulting in lower elbow placement when shooting.
Yet they found no kinematic differences between shots that proficient players made and those they missed, so the team conducted a follow-up study employing a 3D motion-capture system. This confirmed that greater knee and elbow flexion and lower elbow placement were critical factors. There was only one significant difference between made and missed free-throw shots: positioning the forearm almost parallel with an imaginary lateral axis.
There’s rarely time to write about every cool science-y story that comes our way. So this year, we’re once again running a special Twelve Days of Christmas series of posts, highlighting one science story that fell through the cracks in 2023, each day from December 25 through January 5. Today: Archaeologists relied on chemical clues and techniques like FTIR spectroscopy and archaeomagnetic analysis to reconstruct the burning of Jerusalem by Babylonian forces around 586 BCE.
Archaeologists have uncovered new evidence in support of Biblical accounts of the siege and burning of the city of Jerusalem by the Babylonians around 586 BCE, according to a September paper published in the Journal of Archaeological Science.
The Hebrew Bible contains the only account of this momentous event, which included the destruction of Solomon’s Temple. “The Babylonian chronicles from these years were not preserved,” co-author Nitsan Shalom of Tel Aviv University in Israel told New Scientist. According to the biblical account, “There was a violent and complete destruction, the whole city was burned and it stayed completely empty, like the descriptions you see in [the Book of] Lamentations about the city deserted and in complete misery.”
Judah was a vassal kingdom of Babylon during the late 7th century BCE, under the rule of Nebuchadnezzar II. This did not sit well with Judah’s king, Jehoiakim, who revolted against the Babylonian king in 601 BCE despite being warned not to do so by the prophet Jeremiah. He stopped paying the required tribute and sided with Egypt when Nebuchadnezzar tried (and failed) to invade that country. Jehoiakim died and his son Jeconiah succeeded him when Nebuchadnezzar’s forces besieged Jerusalem in 597 BCE. The city was pillaged, and Jeconiah surrendered and was deported to Babylon for his trouble, along with a substantial portion of Judah’s population. (The Book of Kings puts the number at 10,000.) His uncle Zedekiah became king of Judah.
Zedekiah also chafed under Babylonian rule and revolted in turn, refusing to pay the required tribute and seeking alliance with the Egyptian pharaoh Hophra. This resulted in a brutal 30-month siege by Nebuchadnezzar’s forces against Judah and its capital, Jerusalem. Eventually the Babylonians prevailed again, breaking through the city walls to conquer Jerusalem. Zedekiah was forced to watch as his sons were killed and was then blinded, bound, and taken to Babylon as a prisoner. This time Nebuchadnezzar was less merciful and ordered his troops to completely destroy Jerusalem and pull down the wall around 586 BCE.
There is archaeological evidence to support the account of the city being destroyed by fire, along with nearby villages and towns on the western border. Three residential structures were excavated between 1978 and 1982 and found to contain burned wooden beams dating to around 586 BCE. Archaeologists also found ash and burned wooden beams from the same time period when they excavated several structures at the Giv’ati Parking Lot archaeological site, close to the assumed location of Solomon’s Temple. Samples taken from a plaster floor showed exposure to high temperatures of at least 600 degrees Celsius.
However, it wasn’t possible to determine from that evidence whether the fires were intentional or accidental, or where the fire started if it was indeed intentional. For this latest research, Shalom and her colleagues focused on the two-story Building 100 at the Giv’ati Parking Lot site. They used Fourier transform infrared (FTIR) spectroscopy—which measures the absorption of infrared light to determine to what degree a sample had been heated—and archaeomagnetic analysis, which determines whether samples containing magnetic minerals were sufficiently heated to reorient those compounds to a new magnetic north.
The analysis revealed varying degrees of exposure to high-temperature fire in three rooms (designated A, B, and C) on the bottom level of Building 100, with Room C showing the most obvious evidence. This might have been a sign that Room C was the ignition point, but there was no fire path; the burning of Room C appeared to be isolated. Combined with an earlier 2020 study on segments of the second level of the building, the authors concluded that several fires were lit in the building and the fires burned strongest in the upper floors, except for that “intense local fire” in Room C on the first level.
“When a structure burns, heat rises and is concentrated below the ceiling,” the authors wrote. “The walls and roof are therefore heated to higher temperatures than the floor.” The presence of charred beams on the floors suggests this was indeed the case: most of the heat rose to the ceiling, burning the beams until they collapsed to the floors, which otherwise were subjected to radiant heat. But the extent of the debris was likely not caused just by that collapse, suggesting that the Babylonians deliberately went back in and knocked down any remaining walls.
Furthermore, “They targeted the more important, the more famous buildings in the city,” Shalom told New Scientist, rather than destroying everything indiscriminately. “2600 years later, we’re still mourning the temple.”
While they found no evidence of additional fuels that might have served as accelerants, “we may assume the fire was intentionally ignited due to its widespread presence in all rooms and both stories of the building,” Shalom et al. concluded. “The finds within the rooms indicate there was enough flammable material (vegetal and wooden items and construction material) to make additional fuel unnecessary. The widespread presence of charred remains suggests a deliberate destruction by fire…. [T]he spread of the fire and the rapid collapse of the building indicate that the destroyers invested great efforts to completely demolish the building and take it out of use.”
There’s rarely time to write about every cool science-y story that comes our way. So this year, we’re once again running a special Twelve Days of Christmas series of posts, highlighting one science story that fell through the cracks in 2023, each day from December 25 through January 5. Today: Pirates! Specifically, an interview with historian Rebecca Simon on the real-life buccaneer bylaws that shaped every aspect of a pirate’s life.
One of the many amusing scenes in the 2003 film Pirates of the Caribbean: The Curse of the Black Pearl depicts Elizabeth Swann (Keira Knightley) invoking the concept of “parley” in the pirate code to negotiate a cease of hostilities with pirate captain Hector Barbossa (Geoffrey Rush). “The code is more what you’d call guidelines than actual rules,” he informs her. Rebecca Simon, a historian at Santa Monica College, delves into the real, historical set of rules and bylaws that shaped every aspect of a pirate’s life with her latest book, The Pirates’ Code: Laws and Life Aboard Ship.
Simon is the author of such books as Why We Love Pirates: The Hunt for Captain Kidd and How He Changed Piracy Forever and Pirate Queens: The Lives of Anne Bonny and Mary Read. Her PhD thesis research focused on pirate trials and punishment. She had been reading a book about Captain Kidd and the war against the pirates, and was curious as to why he had been executed in an East London neighborhood called Wapping, at Execution Dock on the Thames. People were usually hanged at Tyburn in modern-day West London at Marble Arch. “Why was Captain Kidd taken to a different place? What was special about that?” Simon told Ars. “Nothing had been written much about it at all, especially in connection to piracy. So I began researching how pirate trials and executions were done in London. I consider myself to be a legal historian of crime and punishment through the lens of piracy.”
Ars sat down with Simon to learn more.
Ars Technica: How did the idea of a pirates’ code come about?
Rebecca Simon: Two of the pirates that I mention in the book—Ned Low and Bartholomew Roberts—their code was actually published in newspapers in London. I don’t know where they got it. Maybe it was made up for the sake of readership because that is getting towards the tail end of the Golden Age of Piracy, the 1720s. But we find examples of other codes in A General History of the Pyrates written by a man named Captain Charles Johnson in 1724. It included many pirate biographies and a lot of it was very largely fictionalized. So we take it with a grain of salt. But we do know that pirates did have a notion of law and order and regulations and ritual based on survivor accounts.
You had to be very organized. You had to have very specific rules because as a pirate, you’re facing death every second of the day, more so than if you are a merchant or a fisherman or a member of the Royal Navy. Pirates go out and attack to get the goods that they want. In order to survive all that, they have to be very meticulously prepared. Everyone has to know their exact role and everyone has to have a game plan going in. Pirates didn’t attack willy-nilly out of control. No way. They all had a role.
Ars Technica: Is it challenging to find primary sources about this? You rely a lot on trial transcripts, as well as eyewitness accounts and maritime logs.
Rebecca Simon: It’s probably one of the best ways to learn about how pirates lived on the ship, especially through their own words, because pirates didn’t leave records. These trial transcripts were literal transcriptions of the back and forth between the lawyer and the pirate, answering very specific questions in very specific detail. They were transcribed verbatim and they sold for profit. People found them very interesting. It’s really the only place where we really get to hear the pirate’s voice. So to me that was always one of the best ways to find information about pirates, because anything else you’re looking at is the background or the periphery around the pirates: arrest records, or observations of how the pirate seemed to be acting and what the pirate said. We have to take that with a grain of salt because we’re only hearing it from a third party.
Ars Technica: Some of the pirate codes seemed surprisingly democratic. They divided the spoils according to rank, so there was a social hierarchy. But there was also a sense of fairness.
Rebecca Simon: You needed to have a sense of order on a pirate ship. One of the big draws that pirates used to recruit hostages to officially join them in piracy was to tell them they’d get an equal share. This was quite rare on many other ships, where payment was based per person, or maybe just a flat rate across the board. A lot of times your wages might get withheld or you wouldn’t necessarily get the wages you were promised. On a pirate ship, everyone had the amount of money they were going to get based on the hierarchy and based on their skill level. The quartermaster was in charge of doling out all of the spoils or the stolen goods. If someone was caught taking more than their share, that was a huge deal.
You could get very severely punished perhaps by marooning or being jailed below the hold. The punishment had to be decided by the whole crew, so it didn’t seem like the captain was being unfair or overly brutal. Pirates could also vote out their captain if they felt the captain was doing a bad job, such as not going after enough ships, taking too much of his share, being too harsh in punishment, or not listening to the crew. Again, this is all to keep order. You had to keep morale very high, you had to make sure there was very little discontent or infighting.
Ars Technica: Pirates have long been quite prominent in popular culture. What explains their enduring appeal?
Rebecca Simon: During the 1700s, when pirates were very active, they fascinated people in London and England because they were very far removed from piracy, more so than those who traded a lot for a living in North America and the Caribbean. But it used to be that you were born into your social class and there was no social mobility. You’re born poor because your father was poor, your grandfather was poor, your children will be poor, your grandchildren will be poor. Most pirates started out as poor sailors but as pirates they could become wealthy. If a pirate was lucky, they could make enough in one or two years and then retire and live comfortably. People also have a morbid fascination for these brutal people committing crimes. Think about all the true crime podcasts and true crime documentaries on virtually every streaming service today. We’re just attracted to that. It was the same with piracy.
Going into the 19th century, we have the publication of the book Treasure Island, an adventure story harking back to this idea of piracy in a way that generations hadn’t seen before. This is during a time period where there was sort of a longing for adventure in general and Treasure Island fed into this. That is what spawned the pop culture pirate going into the 20th century. Everything people know about pirates, for the most part, they’re getting from Treasure Island. The whole treasure map, X marks the spot, the eye patch, the peg leg, the speech. Pirate popularity has ebbed and flowed in the 20th and 21st centuries. Of course, the Pirates of the Caribbean franchise was a smash hit. And I think during the pandemic, people were feeling very confined and upset with leadership. Pirates were appealing because they cast all that off and we got shows like Black Sails and Our Flag Means Death.
Ars Technica: Much of what you do is separate fact from fiction, such as the legend of Captain Kidd’s buried treasure. What are some of the common misconceptions that you find yourself correcting, besides buried treasure?
Rebecca Simon: A lot of people ask me about the pirate accent: “Aaarr matey!” That accent we think of comes from the actor Robert Newton who played Long John Silver in the 1950 film Treasure Island. In reality, it just depended on where they were born. At the end of the day, pirates were sailors. People ask about what they wore, what they ate, thinking it’s somehow different. But the reality is it was the same as other sailors. They might have had better clothes and better food because of how often they robbed other ships.
Another misconception is that pirates were after gold and jewels and treasure. In the 17th and 18th centuries, “treasure” just meant “valuable.” They wanted goods they could sell. So about 50 percent was stuff they kept to replenish their own ship and their stores. The other 50 percent were goods they could sell: textiles, wine, rum, sugar, and (unfortunately) the occasional enslaved person counted as cargo. There’s also a big misconception that pirates were all about championing the downtrodden: they hated slavery and they freed enslaved people. They hated corrupt authority. That’s not the reality. They were still people of their time. Blackbeard, aka Edward Teach, did capture a slave ship and he did include those slaves in his crew. But he later sold them at a slave port.
Thanks to Our Flag Means Death and Black Sails, people sometimes assume that all pirates were gay or bisexual. That’s also not true. The concept of homosexuality as we think of it just didn’t exist back then. It was more situational homosexuality arising from confined close quarters and being very isolated for a long period of time. And it definitely was not all pirates. There was about the same percentage of gay or bisexual pirates as your own workplace, but it was not discussed and it was considered to be a crime. There’s this idea that pirate ships had gay marriage; that wasn’t necessarily a thing. They practiced something called matelotage, a formal agreement where you would be legally paired with someone because if they died, it was a way to ensure their goods went to somebody. It was like a civil union. Were some of these done romantically? It’s possible. We just don’t know because that sort of stuff was never, ever recorded.
Ars Technica: Your prior book, Pirate Queens, focused on female pirates like Anne Bonny and Mary Read. It must have been challenging for a woman to pass herself off as a man on a pirate ship.
Rebecca Simon: You’d have to take everything into consideration, the way you dressed, the way you walked, the way you talked. A lot of women who would be on a pirate ship were probably very wiry, having been maids who hauled buckets of coal and water and goods and did a lot of physical activity all day. They could probably pass themselves off as boys or adolescents who were not growing facial hair. So it probably wasn’t too difficult. Going to the bathroom was a big thing. Men would pee over the edge of the ship. How’s a woman going to do this? You put a funnel under the pirate dress and pee through the funnel, which can create a stream going over the side of the ship. When it’s really crowded, men aren’t exactly going to be looking at that very carefully.
The idea of Anne Bonny and Mary Read being lesbians is a 20th century concept, originating with an essay by a feminist writer in the 1970s. There’s no evidence for it. There’s no historical documentation about them before they entered into piracy. According to Captain Charles Johnson’s highly fictionalized account, Mary disguised herself as a male sailor. Anne fell in love with this male sailor on the ship and tried to seduce him, only to discover he was a woman. Anne was “disappointed.” There’s no mention of Anne and Mary actually getting together. Anne was the lover of Calico Jack Rackham, Mary was married to a crew member. This was stated in the trial. And when both women were put on trial and found guilty of piracy, they both revealed they were pregnant.
Ars Technica: Pirates had notoriously short careers: about two years on average. Why would they undertake all that risk for such a short time?
Rebecca Simon: There’s the idea that you can get wealthy quickly. There were a lot of people who became pirates because they had no other choice. Maybe they were criminals or work was not available to them. Pirate ships were extremely diverse. You did have black people as crew members, maybe freed or escaped enslaved people. They usually had the most menial jobs, but they did exist on ships. Some actively chose it because working conditions on merchant ships and naval ships were very tough and they didn’t always have access to good food or medical care. And many people were forced into it, captured as hostages to replace pirates who had been killed in battle.
Ars Technica: What were the factors that led to the end of what we call the Golden Age of Piracy?
Rebecca Simon: There were several reasons why piracy really began to die down in the 1720s. One was an increase in the Royal Navy presence so the seas were a lot more heavily patrolled and it was becoming more difficult to make a living as a pirate. Colonial governors and colonists were no longer supporting pirates the way they once had, so a lot of pirates were now losing their alliances and protections. A lot of major pirate leaders who had been veterans of the War of the Spanish Succession as privateers had been killed in battle by the 1720s: people like Charles Vane, Edward Teach, Benjamin Hornigold, Henry Jennings, and Sam Bellamy.
It was just becoming too risky. And by 1730 a lot more wars were breaking out, which required people who could sail and fight. Pirates were offered pardons if they agreed to become a privateer, basically a government-sanctioned mercenary at sea where they were contracted to attack specific enemies. As payment they got to keep about 80 percent of what they stole. A lot of pirates decided that was more lucrative and more stable.
Ars Technica: What was the most surprising thing that you learned while you were researching and writing this book?
Rebecca Simon: Stuff about food, oddly enough. I was really surprised by how much people went after turtles as food. Apparently turtles are very high in vitamin C and had long been believed to cure all kinds of illnesses and impotence. Also, pirates weren’t really religious, but Bartholomew Roberts would dock at shore so his crew could celebrate Christmas—perhaps as an appeasement. When pirates were put on trial, they always said they were forced into it. The lawyers would ask if they took their share after the battle ended. If they said yes, the law deemed them a pirate. You therefore participated; it doesn’t matter if they forced you. Finally, my PhD thesis was on crime and the law and executions. People would ask me about ships but I didn’t study ships at all. So this book really branched out my maritime knowledge and helped me understand how ships worked and how the people on board operated.
There’s rarely time to write about every cool science-y story that comes our way. So this year, we’re once again running a special Twelve Days of Christmas series of posts, highlighting one science story that fell through the cracks in 2023, each day from December 25 through January 5. Today: the surprisingly complex physics of two simply constructed instruments: the Indonesian bundengan and the Australian Aboriginal didgeridoo (or didjeridu).
The bundengan is a rare, endangered instrument from Indonesia that can imitate the sound of metallic gongs and cow-hide drums (kendangs) in a traditional gamelan ensemble. The didgeridoo is an iconic instrument associated with Australian Aboriginal culture that produces a single, low-pitched droning note that can be continuously sustained by skilled players. Both instruments are a topic of scientific interest because their relatively simple construction produces some surprisingly complicated physics. Two recent studies into their acoustical properties were featured at an early December meeting of the Acoustical Society of America, held in Sydney, Australia, in conjunction with the Australian Acoustical Society.
The bundengan originated with Indonesian duck hunters as protection from rain and other adverse conditions while in the field, doubling as a musical instrument to pass the time. It’s a half-dome structure woven out of bamboo splits to form a lattice grid, crisscrossed at the top to form the dome. That dome is then coated with layers of bamboo sheaths held in place with sugar palm fibers. Musicians typically sit cross-legged inside the dome-shaped resonator and pluck the strings and bars to play. The strings produce metallic sounds while the plates inside generate percussive drum-like sounds.
Gea Oswah Fatah Parikesit of Universitas Gadjah Mada in Indonesia has been studying the physics and acoustics of the bundengan for several years now. And yes, he can play the instrument. “I needed to learn to do the research,” he said during a conference press briefing. “It’s very difficult because you have two different blocking styles for the right and left hand sides. The right hand is for the melody, for the string, and the left is for the rhythm, to pluck the chords.”
Much of Parikesit’s prior research on the bundengan focused on the unusual metallic, percussive sound of the strings, especially the critical role played by the placement of bamboo clips. He used computational simulations of the string vibrations to glean insight into how the specific gong-like sound was produced, and how those vibrations change with the addition of bamboo clips located at different sections of the string. He found that adding the clips produces two vibrations of different frequencies at different locations on the string, with the longer section having a high frequency vibration compared to the lower frequency vibration of the shorter part of the string. This is the key to making the gong-like sound.
This time around, Parikesit was intrigued by the fact many bundengan musicians have noted the instrument sounds better wet. In fact, several years ago, Parikesit attended a bundengan concert in Melbourne during the summer when it was very hot and dry—so much so that the musicians brought their own water spray bottles to ensure the instruments stayed (preferably) fully wet.
“A key element between the dry and wet versions of the bundengan is the bamboo sheaths—the material used to layer the wall of the instrument,” Parikesit said. “When the bundengan is dry, the bamboo sheaths open and that results in looser connections between neighboring sheaths. When the bundengan is wet, the sheaths tend to form a curling shape, but because they are held by ropes, they form tight connections between the neighboring sheaths.”
The resulting tension allows the sheaths to vibrate together. That has a significant impact on the instrument’s sound, taking on a “twangier” quality when dry and more of a metallic gong sound when it is wet. Parikesit has tried making bundengans with other materials: paper, leaves, even plastics. But none of those produce the same sound quality as the bamboo sheaths. He next plans to investigate other musical instruments made from bamboo sheaths. “As an Indonesian, I have extra motivation because the bundengan is a piece of our cultural heritage,” Parikesit said. “I am trying my best to support the conservation and documentation of the bundengan and other Indonesian endangered instruments.”
Coupling with the human vocal tract
Meanwhile, John Smith of the University of New South Wales is equally intrigued by the physics and acoustics of the didgeridoo. The instrument is constructed from the trunk or large branches of the eucalyptus tree. The trick is to find a live tree with lots of termite activity, such that the trunk has been hollowed out leaving just the living sapwood shell. A suitably hollow trunk is then cut down, cleaned out, the bark removed, the ends trimmed, and the exterior shaped into a long cylinder or cone to produce the final instrument. The longer the instrument, the lower the pitch or key.
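The length-to-pitch relationship follows from treating the didgeridoo as a tube closed at one end (the player's lips), whose fundamental is the quarter-wave resonance. A rough sketch of that rule of thumb, assuming a plain cylinder and ignoring the flare, wall properties, and lip coupling that shape a real instrument:

```python
def fundamental_hz(length_m, c=343.0):
    """Quarter-wave resonance of a cylinder closed at one end,
    f = c / (4 * L), where c is the speed of sound in air (m/s).
    A crude model: real didgeridoos are irregular, flared tubes."""
    return c / (4.0 * length_m)

# Longer tube -> lower fundamental -> lower key
for L in (1.2, 1.5, 1.8):
    print(f"{L:.1f} m -> {fundamental_hz(L):.0f} Hz")
```

Doubling the length halves the fundamental, which is why longer instruments play in lower keys; typical instruments of a meter or so produce drones down in the tens of hertz.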
Players will vibrate their lips to play the didgeridoo in a manner similar to lip valve instruments like trumpets or trombones, except those use a small mouthpiece attached to the instrument as an interface. (Sometimes a beeswax rim is added to a didgeridoo mouthpiece end.) Players typically use circular breathing to maintain that continuous low-pitched drone for several minutes, basically inhaling through the nose and using air stored in the puffed cheeks to keep producing the sound. It’s the coupling of the instrument with the human vocal tract that makes the physics so complex, per Smith.
Smith was interested in investigating how changes in the configuration of the vocal tract produced timbral changes in the rhythmic pattern of the sounds produced. To do so, “We needed to develop a technique that could measure the acoustic properties of the player’s vocal tract while playing,” Smith said during the same press briefing. “This involved injecting a broadband signal into the corner of the player’s mouth and using a microphone to record the response.” That enabled Smith and his cohorts to record the vocal tract impedance in different configurations in the mouth.
The results: “We showed that strong resonances in the vocal tract can suppress bands of frequencies in the output sound,” said Smith. “The remaining strong bands of frequencies, called formants, are noticed by our hearing because they fall in the same ranges as the formants we use in speech. It’s a bit like a sculptor removing marble, and we observe the bits that are left behind.”
Smith et al. also noted that the variations in timbre arise from the player singing while playing, or imitating animal sounds (such as the dingo or the kookaburra), which produces many new frequencies in the output sound. To measure the contact between vocal folds, they placed electrodes on either side of a player’s throat and zapped them with a small high frequency electric current. They simultaneously measured lip movement with another pair of electrodes above and below the lips. Both types of vibrations affect the flow of air to produce the new frequencies.
As for what makes a desirable didgeridoo that appeals to players, acoustic measurements on a set of 38 such instruments—with the quality of each rated by seven experts in seven different subjective categories—produced a rather surprising result. One might think players would prefer instruments with very strong resonances, but the opposite turned out to be true. Instruments with stronger resonances were ranked the worst, while those with weaker resonances were rated more highly. Smith, for one, thinks this makes sense. “This means that their own vocal tract resonance can dominate the timbre of the notes,” he said.
With all the technological advances humans have made, it may seem like we’ve lost touch with nature—but not all of us have. People in some parts of Africa use a guide more effective than any GPS system when it comes to finding beeswax and honey. This is not a gizmo, but a bird.
The Greater Honeyguide (highly appropriate name), Indicator indicator (even more appropriate scientific name), knows where all the beehives are because it eats beeswax. The Hadza people of Tanzania and Yao people of Mozambique realized this long ago. Hadza and Yao honey hunters have formed a unique relationship with this bird species by making distinct calls, and the honeyguide reciprocates with its own calls, leading them to a hive.
Because the Hadza and Yao calls differ, zoologist Claire Spottiswoode of the University of Cambridge and anthropologist Brian Wood of UCLA wanted to find out if the birds respond generically to human calls, or are attuned to their local humans. They found that the birds are much more likely to respond to a local call, meaning that they have learned to recognize that call.
Come on, get that honey
To see which sound the birds were most likely to respond to, Spottiswoode and Wood played three recordings, starting with the local call. The Yao honeyguide call is what the researchers describe as “a loud trill followed by a grunt (‘brrrr-hm’),” while the Hadza call is more of “a melodic whistle,” as they say in a study recently published in Science. The second recording they would play was the foreign call, which would be the Yao call in Hadza territory and vice versa.
The third recording was an unrelated human sound meant to test whether the human voice alone was enough for a honeyguide to follow. Because Hadza and Yao voices sound similar, the researchers would alternate among recordings of honey hunters speaking words such as their names.
So which sounds were the most effective cues for honeyguides to partner with humans? In Tanzania, local Hadza calls were three times more likely to initiate a partnership with a honeyguide than Yao calls or human voices. Local Yao calls were also the most successful in Mozambique, where they were twice as likely as Hadza calls or human voices to elicit a response leading to a cooperative search for a beehive. Though honeyguides did sometimes respond to the other sounds and were often willing to cooperate when they heard them, it became clear that the birds in each region had learned a local cultural tradition that had become as much a part of their lives as it was of the humans who began it.
Now you’re speaking my language
There is a reason that honey hunters in both the Hadza and Yao tribes told Wood and Spottiswoode that they have never changed their calls and will never change them. If they did, they’d be unlikely to gather nearly as much honey.
How did this interspecies communication evolve? Other African cultures besides the Hadza and Yao have their own calls to summon a honeyguide. Why do the types of calls differ? The researchers do not think these calls came about randomly.
Both the Hadza and Yao people have their own unique languages, and sounds from them may have been incorporated into their calls. But there is more to it than that. The Hadza often hunt animals when hunting for honey. Therefore, the Hadza don’t want their calls to be recognized as human, or else the prey they are after might sense a threat and flee. This may be why they use whistles to communicate with honeyguides—by sounding like birds, they can both attract the honeyguides and stalk prey without being detected.
In contrast, the Yao do not hunt mammals, relying mostly on agriculture and fishing for food. This, along with the fact that they try to avoid potentially dangerous creatures such as lions, rhinos, and elephants, can explain why they use recognizably human vocalizations to call honeyguides. Human voices may scare these animals away, so Yao honey hunters can safely seek honey with their honeyguide partners. These findings show that cultural diversity has had a significant influence on calls to honeyguides.
While animals might not literally speak our language, the honeyguide is just one of many species that has its own way of communicating with us. They can even learn our cultural traditions.
“Cultural traditions of consistent behavior are widespread in non-human animals and could plausibly mediate other forms of interspecies cooperation,” the researchers said in the same study.
Honeyguides start guiding humans as soon as they begin to fly, and this knack, combined with learning to answer traditional calls and collaborate with honey hunters, works well for both human and bird. Maybe they are (in a way) speaking our language.
For the first time in four centuries, it’s good to be a beaver. Long persecuted for their pelts and reviled as pests, the dam-building rodents are today hailed by scientists as ecological saviors. Their ponds and wetlands store water in the face of drought, filter out pollutants, furnish habitat for endangered species, and fight wildfires. In California, Castor canadensis is so prized that the state recently committed millions to its restoration.
While beavers’ benefits are indisputable, however, our knowledge remains riddled with gaps. We don’t know how many are out there, or which direction their populations are trending, or which watersheds most desperately need a beaver infusion. Few states have systematically surveyed them; moreover, many beaver ponds are tucked into remote streams far from human settlements, where they’re near-impossible to count. “There’s so much we don’t understand about beavers, in part because we don’t have a baseline of where they are,” says Emily Fairfax, a beaver researcher at the University of Minnesota.
But that’s starting to change. Over the past several years, a team of beaver scientists and Google engineers have been teaching an algorithm to spot the rodents’ infrastructure on satellite images. Their creation has the potential to transform our understanding of these paddle-tailed engineers—and help climate-stressed states like California aid their comeback. And while the model hasn’t yet gone public, researchers are already salivating over its potential. “All of our efforts in the state should be taking advantage of this powerful mapping tool,” says Kristen Wilson, the lead forest scientist at the conservation organization the Nature Conservancy. “It’s really exciting.”
The beaver-mapping model is the brainchild of Eddie Corwin, a former member of Google’s real-estate sustainability group. Around 2018, Corwin began to contemplate how his company might become a better steward of water, particularly the many coastal creeks that run past its Bay Area offices. In the course of his research, Corwin read Water: A Natural History, by an author aptly named Alice Outwater. One chapter dealt with beavers, whose bountiful wetlands, Outwater wrote, “can hold millions of gallons of water” and “reduce flooding and erosion downstream.” Corwin, captivated, devoured other beaver books and articles, and soon started proselytizing to his friend Dan Ackerstein, a sustainability consultant who works with Google. “We both fell in love with beavers,” Corwin says.
Corwin’s beaver obsession met a receptive corporate culture. Google’s employees are famously encouraged to devote time to passion projects, the policy that produced Gmail; Corwin decided his passion was beavers. But how best to assist the buck-toothed architects? Corwin knew that beaver infrastructure—their sinuous dams, sprawling ponds, and spidery canals—is often so epic it can be seen from space. In 2010, a Canadian researcher discovered the world’s longest beaver dam, a stick-and-mud bulwark that stretches more than a half-mile across an Alberta park, by perusing Google Earth. Corwin and Ackerstein began to wonder whether they could contribute to beaver research by training a machine-learning algorithm to automatically detect beaver dams and ponds on satellite imagery—not one by one, but thousands at a time, across the surface of an entire state.
After discussing the concept with Google’s engineers and programmers, Corwin and Ackerstein decided it was technically feasible. They reached out next to Fairfax, who’d gained renown for a landmark 2020 study showing that beaver ponds provide damp, fire-proof refuges in which other species can shelter during wildfires. In some cases, Fairfax found, beaver wetlands even stopped blazes in their tracks. The critters were such talented firefighters that she’d half-jokingly proposed that the US Forest Service change its mammal mascot—farewell, Smokey Bear, and hello, Smokey Beaver.
Fairfax was enthusiastic about the pond-mapping idea. She and her students already used Google Earth to find beaver dams to study within burned areas. But it was a laborious process, one that demanded endless hours of tracing alpine streams across screens in search of the bulbous signature of a beaver pond. An automated beaver-finding tool, she says, could “increase the number of fires I can analyze by an order of magnitude.”
With Fairfax’s blessing, Corwin, Ackerstein, and a team of programmers set about creating their model. The task, they decided, was best suited to a convolutional neural network, a type of algorithm that essentially tries to figure out whether a given chunk of geospatial data includes a particular object—whether a stretch of mountain stream contains a beaver dam, say. Fairfax and some obliging beaverologists from Utah State University submitted thousands of coordinates for confirmed dams, ponds, and canals, which the Googlers matched up with their own high-resolution images to teach the model to recognize the distinctive appearance of beaverworks. The team also fed the algorithm negative data—images of beaverless streams and wetlands—so that it would know what it wasn’t looking for. They dubbed their model the Earth Engine Automated Geospatial Elements Recognition, or EEAGER—yes, as in “eager beaver.”
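The story doesn’t reproduce EEAGER’s architecture, but the operation that gives convolutional networks their name is simple enough to sketch. In this toy example (pure Python; the image and the line-detecting kernel are invented for illustration), a 3×3 kernel slides across a grid and responds strongly wherever the grid contains a horizontal streak, loosely analogous to how a first convolutional layer scans a satellite tile for dam-like features:

```python
# Toy sketch of 2D convolution, the building block of models like EEAGER.
# The image and kernel below are invented for illustration only.

def convolve2d(image, kernel):
    """Valid (no-padding) 2D convolution over nested lists."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            row.append(sum(image[i + di][j + dj] * kernel[di][dj]
                           for di in range(kh) for dj in range(kw)))
        out.append(row)
    return out

# A 5x5 "tile" with one horizontal streak (crudely, a dam across a stream).
image = [[0] * 5 for _ in range(5)]
image[2] = [1] * 5

# A kernel that fires on horizontal lines and is suppressed elsewhere.
kernel = [[-1, -1, -1],
          [ 2,  2,  2],
          [-1, -1, -1]]

response = convolve2d(image, kernel)  # peaks along the streak: [[-3]*3, [6]*3, [-3]*3]
```

A real network learns thousands of such kernels from labeled examples rather than having them hand-coded, which is why the team needed both confirmed beaverworks and beaverless “negative” imagery to train on.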
Training EEAGER to pick out beaver ponds wasn’t easy. The American West was rife with human-built features that seemed practically designed to fool a beaver-seeking model. Curving roads reminded EEAGER of winding dams; the edges of man-made reservoirs registered as beaver-built ponds. Most confounding, weirdly, were neighborhood cul-de-sacs, whose asphalt circles, surrounded by gray strips of sidewalk, bore an uncanny resemblance to a beaver pond fringed by a dam. “I don’t think anybody anticipated that suburban America was full of what a computer would think were beaver dams,” Ackerstein says.
As the researchers pumped more data into EEAGER, it got better at distinguishing beaver ponds from impostors. In May 2023, the Google team, along with beaver researchers Fairfax, Joe Wheaton, and Wally Macfarlane, published a paper in the Journal of Geophysical Research: Biogeosciences demonstrating the model’s efficacy. The group fed EEAGER more than 13,000 landscape images with beaver dams from seven western states, along with some 56,000 dam-less locations. The model categorized the landscape accurately—beaver dammed or not—98.5 percent of the time.
That statistic, granted, oversells EEAGER’s perfection. The Google team opted to make the model fairly liberal, meaning that, when it predicts whether or not a pixel of satellite imagery contains a beaver dam, it’s more likely to err on the side of spitting out a false positive. EEAGER still requires a human to check its answers, in other words—but it can dramatically expedite the work of scientists like Fairfax by pointing them to thousands of probable beaver sites.
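That trade-off is easy to see with a toy scoring example (all numbers invented; this is not EEAGER’s code): lowering the detection threshold catches every real dam, at the cost of sending more false positives to the human reviewers.

```python
# Illustrative sketch of the liberal-threshold trade-off described above.
# Scores, labels, and thresholds are invented, not taken from EEAGER.

def precision_recall(scores, labels, threshold):
    """Score >= threshold counts as a 'beaver dam' prediction."""
    tp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y)
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and not y)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Hypothetical model confidences: True = real dam, False = impostor
# (a cul-de-sac, a curving road, a reservoir edge).
scores = [0.9, 0.8, 0.6, 0.4, 0.7, 0.45, 0.3, 0.1]
labels = [True, True, True, True, False, False, False, False]

strict = precision_recall(scores, labels, 0.5)    # (0.75, 0.75): one real dam missed
liberal = precision_recall(scores, labels, 0.35)  # (~0.67, 1.0): none missed, more to hand-check
```

A liberal threshold maximizes recall, which is the right choice when a human will verify every hit anyway and a missed dam is more costly than a wasted look.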
“We’re not going to replace the expertise of biologists,” Ackerstein says. “But the model’s success is making human identification much more efficient.”
According to Fairfax, EEAGER’s use cases are many. The model could be used to estimate beaver numbers, monitor population trends, and calculate beaver-provided ecosystem services like water storage and fire prevention. It could help states figure out where to reintroduce beavers, where to target stream and wetland restoration, and where to create conservation areas. It could allow researchers to track beavers’ spread in the Arctic as the rodents move north with climate change; or their movements in South America, where beavers were introduced in the 1940s and have since proliferated. “We literally cannot handle all the requests we’re getting,” says Fairfax, who serves as EEAGER’s scientific adviser.
The algorithm’s most promising application might be in California. The Golden State has a tortured relationship with beavers: For decades, the state generally denied that the species was native, the byproduct of an industrial-scale fur trade that wiped beavers from the West Coast before biologists could properly survey them. Although recent historical research proved that beavers belong virtually everywhere in California, many water managers and farmers still perceive them as nuisances, and frequently have them killed for plugging up road culverts and meddling with irrigation infrastructure.
Yet those deeply entrenched attitudes are changing. After all, no state is in more dire need of beavers’ water-storage services than flammable, drought-stricken, flood-prone California. In recent years, thanks to tireless lobbying by a campaign called Bring Back the Beaver, the California Department of Fish and Wildlife has begun to overhaul its outdated beaver policies. In 2022, the state budgeted more than $1.5 million for beaver restoration, and announced it would hire five scientists to study and support the rodents. It also revised its official approach to beaver conflict to prioritize coexistence over lethal trapping. And, this fall, the wildlife department relocated a family of seven beavers onto the ancestral lands of the Mountain Maidu people—the state’s first beaver release in almost 75 years.
It’s only appropriate, then, that California is where EEAGER is going to get its first major test. The Nature Conservancy and Google plan to run the model across the state sometime in 2024, a comprehensive search for every last beaver dam and pond. That should give the state’s wildlife department a good sense of where its beavers are living, roughly how many it has, and where it could use more. The model will also provide California with solid baseline data against which it can compare future populations, to see whether its new policies are helping beavers recover. “When you have imagery that’s repeated frequently, that gives you the opportunity to understand change through time,” says the Conservancy’s Kristen Wilson.
What’s next for EEAGER after its California trial? The main thing, Ackerstein says, is to train it to identify beaverworks in new places. (Although beaver dams and ponds present as fairly similar in every state, the model also relies on context clues from the surrounding landscape, and a sagebrush plateau in Wyoming looks very different from a deciduous forest in Massachusetts.) The team also has to figure out EEAGER’s long-term fate: Will it remain a tool hosted by Google? Spin off into a stand-alone product? Become a service operated by a university or nonprofit?
“That’s the challenge for the future—how do we make this more universally accessible and usable?” Corwin says. The beaver revolution may not be televised, but it will definitely be documented by satellite.
Spectacular scenery, from lush rainforests to towering mountain ranges, dots the surface of our planet. But some of Earth’s most iconic landmarks––ones that may harbor clues to the origin of life on Earth and possibly elsewhere––lie hidden at the bottom of the ocean. Scientists recently found one such treasure in Ecuadorian waters: a submerged mini Yellowstone called Sendero del Cangrejo.
This hazy alien realm simmers in the deep sea in an area called the Western Galápagos Spreading Center––an underwater mountain range where tectonic plates are slowly moving away from each other. Magma wells up from Earth’s mantle here to create new oceanic crust in a process that created the Galápagos Islands and smaller underwater features, like hydrothermal vents. These vents, which pump heated, mineral-rich water into the ocean in billowing plumes, may offer clues to the origin of life on Earth. Studying Earth’s hydrothermal vents could also offer a gateway to finding life, or at least its building blocks, on other worlds.
The newly discovered Sendero del Cangrejo contains a chain of hydrothermal vents that spans nearly two football fields. It hosts hot springs and geyser chimneys that support an array of creatures, from giant, spaghetti-like tube worms to alabaster Galatheid crabs.
The crabs, also known as squat lobsters, helped guide researchers to Sendero del Cangrejo. Ecuadorian observers chose the site’s name, which translates to “Trail of the Crabs,” in their honor.
“It did feel like the squat lobsters were leading us like breadcrumbs, like we were Hansel and Gretel, to the actual vent site,” said Hayley Drennon, a senior research assistant at Columbia University’s Lamont-Doherty Earth Observatory, who participated in the expedition.
The joint American and Ecuadorian research team set sail aboard the Schmidt Ocean Institute’s Falkor (too) research vessel in mid-August in search of new hydrothermal vents. They did some mapping and sampling on the way to their target location, about 300 miles off the west coast of the Galápagos.
The team used a ‘Tow-Yo’ technique to gather and transmit real-time data to the crew aboard the ship. “We lowered sensors attached to a long wire to the seafloor, and then towed the wire up and down like a yo-yo,” explained Roxanne Beinart, an associate professor at the University of Rhode Island and the expedition’s chief scientist. “This process allowed us to monitor changes in temperature, water clarity, and chemical composition to help pinpoint potential hydrothermal vent locations.”
When they reached a region that seemed promising, they deployed the remotely operated vehicle SuBastian for a better look. Less than 24 hours later, the team began seeing more and more Galatheid crabs, which they followed until they found the vents.
The crabs were particularly useful guides since the vent fluids there are clear, unlike “black smokers” that create easy-to-see plumes. SuBastian explored the area for about 43 hours straight in the robot’s longest dive to date.
But the true discovery process spanned decades. Researchers have known for nearly 20 years that the area was likely home to hydrothermal activity thanks to chemical signals measured in 2005. About a decade later, teams ventured out again and collected animal samples. Now, due to the Schmidt Ocean Institute’s recent expedition, scientists have the most comprehensive data set ever for this location. It includes chemical, geological, and biological data, along with the first high-temperature water samples.
“It’s not uncommon for an actual discovery like this to take decades,” said Jill McDermott, an associate professor at Lehigh University and the expedition’s co-chief scientist. “The ocean is a big place, and the locations are very remote, so it takes a lot of time and logistics to get out to them.” The team will continue their research onshore to help us understand how hydrothermal vents influence our planet.
Genesis from hell?
Sendero del Cangrejo may compare to a small-scale Yellowstone in some ways, but it’s no tourist destination. It’s pitch-black since sunlight can’t reach the deep ocean floor. The crushing weight of a mile of water presses down from overhead. And the vents are hot and toxic. Some of them clocked in at 290º C (550º F)—nearly hot enough to melt lead.
Before scientists discovered hydrothermal vents in 1977, they assumed such extreme conditions would preclude the possibility of life. Yet that trailblazing team saw multiple species thriving, including white clams that guided them to the vents the same way the Galatheid crabs led the modern researchers to Sendero del Cangrejo.
Before the 1977 find, no one knew life could survive in such a hostile place. Now, scientists know there are microbes called thermophiles that can only live in high temperatures (up to about 120º C, or 250º F).
Bacteria that surround hydrothermal vents don’t eat other organisms or create energy from sunlight like plants do. Instead, they produce energy using chemicals like methane or hydrogen sulfide that emanate from the vents. This process, called chemosynthesis, was first identified through the characterization of organisms discovered at these vents. Chemosynthetic bacteria are the backbone of hydrothermal vent ecosystems, serving as a nutrition source for higher organisms.
Some researchers suggest life on Earth may have originated near hydrothermal vents due to their unique chemical and energy-rich conditions. While the proposal remains unproven, the discovery of chemosynthesis opened our eyes to new places that could host life.
The possibility of chemosynthetic creatures diminishes the significance of so-called habitable zones around stars, which describe the range of orbital distances within which surface water can remain liquid on a planet or moon. The habitable zone in our own Solar System extends from about Venus’ orbit out nearly to Mars’.
NASA’s Europa Clipper mission is set to launch late next year to determine whether there are places below the surface of Jupiter’s icy moon, Europa, that could support life. It’s a lot colder out there, well beyond our Solar System’s habitable zone, but scientists think Europa is internally heated. It experiences strong tidal forces from Jupiter’s gravity, which could create hydrothermal activity on the moon’s ocean floor.
Several other moons in our Solar System also host subsurface oceans and experience the same tidal heating that could potentially create habitable conditions. By exploring Earth’s hydrothermal vents, scientists could learn more about what to look for in similar environments elsewhere in our Solar System.
“The Ocean’s Multivitamin”
While hydrothermal vents are relatively new to science, they’re certainly not new to our planet. “Vents have been active since Earth’s oceans first formed,” McDermott said. “They’ve been present in our oceans for as long as we’ve had them, so about 3 billion years.”
During that time, they’ve likely transformed our planet’s chemistry and geology by cycling chemicals and minerals from Earth’s crust throughout the ocean.
“All living things on Earth need minerals and elements that they get from the crust,” said Peter Girguis, a professor at Harvard University, who participated in the expedition. “It’s no exaggeration to say that all life on earth is inextricably tied to the rocks upon which we live and the geological processes occurring deep inside the planet…it’s like the ocean’s multivitamin.”
But the full extent of the impact hydrothermal vents have on the planet remains unknown. In the nearly 50 years since hydrothermal vents were first discovered, scientists have uncovered hundreds more spread around the globe. Yet no one knows how many remain unidentified; there are likely thousands more vents hidden in the deep. Detailed studies, like those the expedition scientists are continuing onshore, could help us understand how hydrothermal activity influences the ocean.
The team’s immediate observations offer a good starting point for their continued scientific sleuthing.
“I actually expected to find denser animal populations in some places,” Beinart said.
McDermott thinks that could be linked to the composition of the vent fluids. “Several of the vents were clear—not very particle-rich,” she said. “They’re probably lower in minerals, but we’re not sure why.” Now, the team will measure different metal levels in water samples from the vent fluids to figure out why they’re low in minerals and whether that has influenced the animals the vents host.
Researchers are learning more about hydrothermal vents every day, but many mysteries remain, such as the eventual influence ocean acidification could have on vents. As they seek answers, they’re sure to find more questions and open up new avenues of scientific exploration.
Ashley writes about space as a contractor for NASA’s Goddard Space Flight Center by day and freelances as an environmental writer. She holds a master’s degree in space studies from the University of North Dakota and is finishing a master’s in science writing through The Johns Hopkins University. She writes most of her articles with one of her toddlers on her lap.
CAPE TOWN, South Africa—A weathered, green building stands at the edge of the cozy suburban Table View neighborhood in Cape Town, just a few blocks down from a Burger King and a community library. Upon stepping inside, visitors’ feet squelch on a mat submerged in antibacterial liquid—one of the first signs this isn’t just another shop on the street.
A few steps further down the main hallway, a cacophony of discordant brays and honks fills the air. A couple more strides reveal the source of these guttural calls: African penguins.
Welcome to the nonprofit Southern African Foundation for the Conservation Of Coastal Birds’ hatchery and nursery, where hundreds of these birds are hand-reared after being injured or abandoned in the wild.
While this conservation center is a flourishing refuge for African penguins, the species as a whole is in dire straits. Over the past century, African penguin populations have plummeted, dropping from around one million breeding pairs in the early 1900s to fewer than 10,000 in 2023, as environmental conditions have worsened due to increased fishing pressure and climate change, both of which have decreased the fish populations on which penguins rely.
The climate crisis has also fueled more frequent and severe weather events in South Africa such as floods and heat waves, resulting in an increased number of penguin parents abandoning their eggs to seek refuge.
The staff at the Foundation is working to hand-rear penguins with the goal to release most of them back into one of the threatened Cape colonies they came from. But some of these penguins are destined for a different destination: a rocky outcropping along the Eastern Cape of South Africa within the De Hoop Nature Reserve.
There, scientists and conservationists are working to establish a new penguin colony, which they hope will become a stronghold for the entire African penguin species.
The ecological trap
It’s difficult to pin the demise of African penguins on a single threat; oil spills, avian flu, and extreme weather events have wreaked havoc on colonies across South Africa. These chronic issues combine with freak incidents: In 2021, a swarm of bees killed more than 60 African penguins on the popular Boulders Beach in Cape Town and, a year later, two huskies killed 19 penguins in the same area.
However, scientists say that one of the main causes of the seabirds’ decline is the intense fishing pressure on sardines and anchovies, the penguin’s main diet.
With unemployment high, low-income people fish around coastal beaches to support themselves, said Shanet Rutgers, an animal health technician at the Two Oceans Aquarium in South Africa. There is also a large commercial industry for purse-seine fishing, in which a wall of netting is cast around a school of fish.
“When they pull out too much fish in the ocean, they leave the colonies with almost little to nothing to feed on,” she said.
There’s rarely time to write about every cool science-y story that comes our way. So this year, we’re once again running a special Twelve Days of Christmas series of posts, highlighting one science story that fell through the cracks in 2023, each day from December 25 through January 5. Today: Swedish forensic artist Oscar Nilsson combined CT scans of frozen mummified remains with skull measurements and DNA analysis to reconstruct the face of a 500-year-old Inca girl.
In 1995, archaeologists discovered the frozen, mummified remains of a young Inca girl high in the mountains of Peru, thought to have died as part of a sacrificial ritual known as Capacocha (or Qhapaq hucha). In late October, we learned how she most likely looked in life, thanks to a detailed reconstruction by Swedish forensic artist Oscar Nilsson. A plaster bust of the reconstruction was unveiled at a ceremony at the Andean Sanctuaries Museum of the Catholic University of Santa Maria in Arequipa, Peru, where the girl’s remains (now called Juanita) have been on near-continuous display since her discovery.
“I thought I’d never know what her face looked like when she was alive,” archaeologist Johan Reinhardt told the BBC. Reinhardt had found the remains with Peruvian mountaineer Miguel Zárate at an altitude of 21,000 feet (6,400 meters) during an expedition to Ampato, one of the highest volcanos in the Andes. “Now 28 years later, this has become a reality thanks to Oscar Nilsson’s reconstruction.”
According to Reinhardt, Spanish chroniclers made reference to the Inca practice of making offerings to the gods: not just statues, fine textiles, and ceramics, but also occasionally human sacrifices at ceremonial shrines (huacas) built high on mountain summits. It’s thought that human sacrifices of young girls and boys were a means of appeasing the Inca gods (Apus) during periods of irregular weather patterns, particularly drought. Drought was common in the wake of a volcanic eruption.
During those periods, the ground on summits would unfreeze sufficiently for the Incas to build their sites and bury their offerings. The altitude is one reason why various Inca mummified remains have been found in remarkable states of preservation.
Earlier discoveries included the remains of an Inca boy found by looters in the 1950s, as well as the frozen body of a young man in 1964 and that of a young boy in 1985. Then Reinhardt and Zárate made their Ampato ascent in September 1995. They were stunned to spot a mummy bundle on the ice just below the summit and realized they were looking at the frozen face of a young girl. The body was surrounded by offerings for the Inca gods, including llama bones, small carved figurines, and bits of pottery. Juanita was wrapped in a colorful burial tapestry and wearing a feathered cap and alpaca shawl, all almost perfectly preserved. Reinhardt and Zárate subsequently found two more ice mummies (a young boy and girl) the following month, and yet another female mummy in December 1997.
It was a bit of a struggle to get Juanita’s body down from the summit because it was so heavy, the result of its flesh being so thoroughly frozen. That’s also what makes it such an exciting archaeological find. The remains of a meal of vegetables were found in her well-preserved stomach, although DNA analysis from her hair showed that she also ate a fair amount of animal protein. That, and the high quality of her garments, suggested she came from a noble family, possibly from the city of Cusco.
There were also traces of coca and alcohol, likely administered before Juanita’s death—a common Inca practice when sacrificing children. A CT scan of her skull revealed that Juanita had died from a sharp blow to the head, similar to the type of injury made by a baseball bat, causing a massive hemorrhage. This, too, was a common Inca sacrificial custom.
Nilsson was able to draw upon those earlier analyses for his reconstruction, since he needed to know things like her age, gender, weight, and ethnicity. He started with the CT scan of Juanita’s skull and used the data to 3D print a plastic replica of her head. He used wooden pegs on the bust to mark out the various measurements and added clay to mold the defining details of her face, drawing on clues from her nose, eye sockets, and teeth. The DNA indicated the likely color of her skin. “In Juanita’s case, I wanted her to look both scared and proud, and with a high sense of presence at the same time,” Nilsson told Live Science. “I then cast the face in silicone [using] real human hair [that I] inserted hair by hair.”
Just before the holiday break, the US Energy Information Administration released data on the country’s electrical generation. Because of delays in reporting, the monthly data runs through October, so it doesn’t provide a complete picture of the changes we’ve seen in 2023. But some of the trends now seem locked in for the year: wind and solar are likely to be in a dead heat with coal, and all carbon-emissions-free sources combined will account for roughly 40 percent of US electricity production.
Tracking trends
Having data through October necessarily provides an incomplete picture of 2023. There are several factors that can cause the later months of the year to differ from the earlier ones. Some forms of generation are seasonal—notably solar, which has its highest production over the summer months. Weather can also play a role, as unusually high demand for heating in the winter months could potentially require that older fossil fuel plants be brought online. It also influences production from hydroelectric plants, creating lots of year-to-year variation.
Finally, everything’s taking place against a backdrop of booming construction of solar and natural gas. So, it’s entirely possible that we will have built enough new solar over the course of the year to offset the seasonal decline at the end of the year.
Let’s look at the year-to-date data to get a sense of the trends and where things stand. We’ll then check the monthly data for October to see if any of those trends show indications of reversing.
The most important takeaway is that energy use is largely flat. Overall electricity production year-to-date is down by just over one percent from 2022, though demand was higher this October compared to last year. This is in keeping with a general trend of flat-to-declining electricity use as greater efficiency is offsetting factors like population growth and expanding electrification.
That’s important because it means that any newly added capacity will displace the use of existing facilities. And, at the moment, that displacement is happening to coal.
Can’t hide the decline
At this point last year, coal had produced nearly 20 percent of the electricity in the US. This year, it’s down to 16.2 percent, and only accounts for 15.5 percent of October’s production. Wind and solar combined are presently at 16 percent of year-to-date production, meaning they’re likely to be in a dead heat with coal this year and easily surpass it next year.
Year-to-date, wind is largely unchanged since 2022, accounting for about 10 percent of total generation, and it’s up to over 11 percent in the October data, so that’s unlikely to change much by the end of the year. Solar has seen a significant change, going from five to six percent of the total electricity production (this figure includes both utility-scale generation and the EIA’s estimate of residential production). And it’s largely unchanged in October alone, suggesting that new construction is offsetting some of the seasonal decline.
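The share comparisons above all boil down to the same calculation: each source's generation divided by the total, compared across years. Here's a minimal sketch of that bookkeeping; the year-to-date totals (in TWh) are hypothetical stand-ins chosen to roughly match the percentages quoted above, not actual EIA figures.

```python
# Hypothetical year-to-date generation totals (TWh), EIA-style.
# These numbers are illustrative only; real data comes from eia.gov.
ytd_2022 = {"coal": 700, "wind": 380, "solar": 175, "gas": 1420, "other": 875}
ytd_2023 = {"coal": 570, "wind": 375, "solar": 215, "gas": 1525, "other": 835}

def shares(totals):
    """Return each source's percentage of total generation."""
    total = sum(totals.values())
    return {src: 100 * twh / total for src, twh in totals.items()}

s22, s23 = shares(ytd_2022), shares(ytd_2023)
for src in ytd_2023:
    print(f"{src}: {s22[src]:.1f}% -> {s23[src]:.1f}%")
```

Note that overall generation can fall (the totals here drop about 1 percent) even while an individual source's share rises, which is why the article tracks percentages rather than raw output.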
Hydroelectric production has dropped by about six percent since last year, causing it to slip from 6.1 percent to 5.8 percent of the total production. Depending on the next couple of months, that may allow solar to pass hydro on the list of renewables.
Combined, the three major renewables account for about 22 percent of year-to-date electricity generation, up about half a percentage point since last year. They’re up by even more in the October data, placing them well ahead of both nuclear and coal.
Nuclear itself is largely unchanged, allowing it to pass coal thanks to the latter’s decline. Its output has been boosted by a new 1.1-gigawatt reactor that came online this year (a second at the same site, Vogtle in Georgia, is set to start commercial production at any moment). But that’s likely to be the end of new nuclear capacity for this decade; the challenge will be keeping existing plants open despite their age and high costs.
If we combine nuclear and renewables under the umbrella of carbon-free generation, then that’s up by nearly 1 percentage point since 2022 and is likely to surpass 40 percent for the first time.
The only thing that’s keeping carbon-free power from growing faster is natural gas, which is the fastest-growing source of generation at the moment, going from 40 percent of the year-to-date total in 2022 to 43.3 percent this year. (It’s actually slightly below that level in the October data.) The explosive growth of natural gas in the US has been a big environmental win, since it creates the least particulate pollution of all the fossil fuels, as well as the lowest carbon emissions per unit of electricity. But its use is going to need to start dropping soon if the US is to meet its climate goals, so it will be critical to see whether its growth flatlines over the next few years.
Outside of natural gas, however, all the trends in US generation are good, especially considering that the rise of renewable production would have seemed like an impossibility a decade ago. Unfortunately, the pace is currently too slow for the US to have a net-zero electric grid by the end of the decade.
People with type 1 diabetes have to inject themselves multiple times a day with manufactured insulin to maintain healthy levels of the hormone, as their bodies do not naturally produce enough. The injections also have to be timed in response to eating and exercise, as any consumption or use of glucose has to be managed.
Research into glucose-responsive insulin, or “smart” insulin, hopes to improve the quality of life for people with type 1 diabetes by developing a form of insulin that needs to be injected less frequently, while providing control of blood-glucose levels over a longer period of time.
A team at Zhejiang University, China, has recently released a study documenting an improved smart insulin system in animal models—the current work doesn’t involve any human testing. Their insulin was able to regulate blood-glucose levels for a week in diabetic mice and minipigs after a single subcutaneous injection.
“Theoretically, [smart insulin is] incredibly important going forward,” said Steve Bain, clinical director of the Diabetes Research Unit in Swansea University, who was not involved in the study. “It would be a game changer.”
Polymer cage
The new smart insulin is based on a form of insulin modified with gluconic acid, which forms a complex with a polymer through chemical bonds and strong electrostatic attraction. When insulin is trapped in the polymer, its signaling function is blocked, allowing a week’s worth of insulin to be given via a single injection without a risk of overdose.
Crucial to the “glucose responsive” nature of this system is the fact that the chemical structures of glucose and gluconic acid are extremely similar, meaning the two molecules bind in very similar ways. When glucose meets the insulin-polymer complex, it can displace some of the bound insulin and form its own chemical bonds to the polymer. Glucose binding also disrupts the electrostatic attraction and further promotes insulin release.
By preferentially binding to the polymer, the glucose is able to trigger the release of insulin. And the extent of this insulin release depends on how much glucose is present: between meals, when the blood-glucose level is fairly low, only a small amount of insulin is released. This is known as basal insulin and is needed for baseline regulation of blood sugar.
But after a meal, when blood-glucose spikes, much more insulin is released. The body can now regulate the extra sugar properly, preventing abnormally high levels of glucose—known as hyperglycemia. Long-term effects of hyperglycemia in humans include nerve damage to the hands and feet and permanent damage to eyesight.
This system mimics the body’s natural process, in which insulin is also released in response to glucose.
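The release mechanism described above can be caricatured with a simple competitive-binding model: the more glucose there is to displace insulin from the polymer, the larger the fraction of insulin freed. The Langmuir-style saturation curve and the binding constant below are illustrative assumptions for this sketch, not the kinetics reported in the actual study.

```python
# Toy model of glucose-triggered insulin release from a polymer depot.
# Assumption: displaced fraction follows a simple saturation curve
# f = G / (G + K), where K is a hypothetical half-release constant.
def insulin_released(glucose_mM, K_mM=10.0):
    """Fraction (0..1) of polymer-bound insulin displaced by glucose."""
    return glucose_mM / (glucose_mM + K_mM)

basal = insulin_released(5.0)       # fasting glucose, roughly 5 mM
post_meal = insulin_released(15.0)  # post-meal spike, roughly 15 mM
print(f"basal release fraction: {basal:.2f}")      # small, steady trickle
print(f"post-meal release fraction: {post_meal:.2f}")  # larger burst
```

The qualitative point the model captures is the one in the text: release is never zero (covering basal needs between meals) but scales up sharply when glucose spikes.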
Better regulation than standard insulin
The new smart insulin was tested in five mice and three minipigs—minipigs are often used as an animal model that’s more physiologically similar to humans. One of the three minipigs received a slightly lower dose of smart insulin, and the other two received a higher dose. The lower-dose pig showed the best response: its blood-glucose levels were tightly controlled and returned to a healthy value after meals.
During treatment, the other two pigs had glucose levels that were still above the range seen in healthy animals, although they were greatly reduced compared to pre-injection levels. The regulation of blood-glucose was also tighter compared to daily insulin injections.
It should be noted, though, that the minipig with the best response also had the lowest blood-glucose levels before treatment, which may explain why it seemed to work so well in this animal.
Crucially, these effects were all long lasting—better regulation could be seen a week after treatment. And injecting the animals with the smart insulin didn’t result in a significant immune response, which can be a common pitfall when introducing biomaterials to animals or humans.
Don’t sugarcoat it
The study is not without its limitations. Although long-term glucose regulation was seen in the mice and minipigs examined, only a few animals were involved in the study—five mice and three minipigs. And of course, there’s always the risk that the results of animal studies don’t completely track over to clinical trials in humans. “We have to accept that these are animal studies, and so going across to humans is always a bit of an issue,” said Bain.
Although more research is required before this smart insulin system can be tested in humans, this work is a promising step forward in the field.
There’s rarely time to write about every cool science-y story that comes our way. So this year, we’re once again running a special Twelve Days of Christmas series of posts, highlighting one science story that fell through the cracks in 2023, each day from December 25 through January 5. Today: red flour beetles can use their butts to suck water from the air, helping them survive in extremely dry environments. Scientists are homing in on the molecular mechanisms behind this unique ability.
The humble red flour beetle (Tribolium castaneum) is a common pantry pest feeding on stored grains, flour, cereals, pasta, biscuits, beans, and nuts. It’s a remarkably hardy creature, capable of surviving in harsh arid environments due to its unique ability to extract fluid not just from grains and other food sources, but also from the air. It does this by opening its rectum when the humidity of the atmosphere is relatively high, absorbing moisture through that opening and converting it into fluid that is then used to hydrate the rest of the body.
Scientists have known about this ability for more than a century, but biologists are finally starting to get to the bottom (ahem) of the underlying molecular mechanisms, according to a March paper published in the Proceedings of the National Academy of Sciences. This will inform future research on how to interrupt this hydration process to better keep red flour beetle populations in check, since they are highly resistant to pesticides. They can also withstand even higher levels of radiation than the cockroach.
There are about 400,000 known species of beetle roaming the planet, although scientists believe there could be well over a million. Each year, as much as 20 percent of the world’s grain stores are contaminated by red flour beetles, grain weevils, Colorado potato beetles, and confused flour beetles, particularly in developing countries. Red flour beetles in particular are a popular model organism for scientific research on development and functional genomics. The entire genome was sequenced in 2008, and the beetle shares between 10,000 and 15,000 genes with the fruit fly (Drosophila), another workhorse of genetics research. But the beetle’s development cycle more closely resembles that of other insects.
The rectums of most mammals and insects absorb any remaining nutrients and water from the body’s waste products prior to defecation. But the red flour beetle’s rectum is a model of ultra-efficiency in that regard. The beetle can generate extremely high salt concentrations in its kidneys, enabling it to extract all the water from its own feces and recycle that moisture back into its body.
“A beetle can go through an entire life cycle without drinking liquid water,” said co-author Kenneth Veland Halberg, a biologist at the University of Copenhagen. “This is because of their modified rectum and closely applied kidneys, which together make a multi-organ system that is highly specialized in extracting water from the food that they eat and from the air around them. In fact, it happens so effectively that the stool samples we have examined were completely dry and without any trace of water.” The entire rectal structure is encased in a perinephric membrane.
Halberg et al. took scanning electron microscopy images of the beetle’s rectal structure. They also took tissue samples and extracted RNA from lab-grown red flour beetles, then used a new resource called BeetleAtlas for their gene expression analysis, hunting for any relevant genes.
One particular gene was expressed sixty times more in the rectum than in any other tissue. Halberg and his team eventually homed in on a group of secondary cells, called leptophragmata, located between the beetle’s kidneys and circulatory system. This finding supports prior studies that suggested these cells might be relevant, since they are the only cells that interrupt the perinephric membrane, thereby enabling critical transport of potassium chloride. Translation: the cells pump salts into the kidneys to better harvest moisture from the beetle’s feces or from the air.
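The screen that surfaced that gene amounts to a tissue-enrichment calculation: for each gene, compare its expression in the rectum to its highest expression anywhere else. A minimal sketch follows; the gene names and expression values are hypothetical placeholders, not actual BeetleAtlas data.

```python
# Hypothetical expression values (e.g., TPM) per gene per tissue.
# "geneA" stands in for the rectum-enriched hit described above.
expression = {
    "geneA": {"rectum": 1200.0, "gut": 20.0, "kidney": 15.0},
    "geneB": {"rectum": 50.0, "gut": 45.0, "kidney": 60.0},
}

def rectal_enrichment(tissues):
    """Fold-change of rectum expression over the max of all other tissues."""
    highest_elsewhere = max(v for t, v in tissues.items() if t != "rectum")
    return tissues["rectum"] / highest_elsewhere

hits = {gene: rectal_enrichment(t) for gene, t in expression.items()}
for gene, fold in sorted(hits.items(), key=lambda kv: -kv[1]):
    print(f"{gene}: {fold:.1f}x rectal enrichment")
```

Ranking genes by this fold-change is what lets a sixty-fold rectal specialist like the one in the study stand out from genes expressed broadly across tissues.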
The next step is to build on these new insights to figure out how to interrupt the beetle’s unique hydration process at the molecular level, perhaps by designing molecules that can do so. Those molecules could then be incorporated into more eco-friendly pesticides that target the red flour beetle and similar pests while not harming more beneficial insects like bees.
“Now we understand exactly which genes, cells and molecules are at play in the beetle when it absorbs water in its rectum. This means that we suddenly have a grip on how to disrupt these very efficient processes by, for example, developing insecticides that target this function and in doing so, kill the beetle,” said Halberg. “There is twenty times as much insect biomass on Earth as that of humans. They play key roles in most food webs and have a huge impact on virtually all ecosystems and on human health. So, we need to understand them better.”