Behavioral science


People will share misinformation that sparks “moral outrage”


People can tell it’s not true, but if they’re outraged by it, they’ll share anyway.

Rob Bauer, the chair of a NATO military committee, reportedly said, “It is more competent not to wait, but to hit launchers in Russia in case Russia attacks us. We must strike first.” These comments, supposedly made in 2024, were later interpreted as suggesting NATO should attempt a preemptive strike against Russia, an idea that lots of people found outrageously dangerous.

But many people who shared the quote missed one key fact: Bauer never said it. It was made up. Despite that, the fabricated statement got nearly 250,000 views on X and was mindlessly spread further by the likes of Alex Jones.

Why do stories like this get so many views and shares? “The vast majority of misinformation studies assume people want to be accurate, but certain things distract them,” says William J. Brady, a researcher at Northwestern University. “Maybe it’s the social media environment. Maybe they’re not understanding the news, or the sources are confusing them. But what we found is that when content evokes outrage, people are consistently sharing it without even clicking into the article.” Brady co-authored a study on how misinformation exploits outrage to spread online. When we get outraged, the study suggests, we simply care way less if what’s got us outraged is even real.

Tracking the outrage

The rapid spread of misinformation on social media has generally been explained by something you might call an error theory—the idea that people share misinformation by mistake. Based on that, most solutions to the misinformation issue relied on prompting users to focus on accuracy and think carefully about whether they really wanted to share stories from dubious sources. Those prompts, however, haven’t worked very well. To get to the root of the problem, Brady’s team analyzed data that tracked over 1 million links on Facebook and nearly 45,000 posts on Twitter from different periods ranging from 2017 to 2021.

Parsing through the Twitter data, the team used a machine-learning model to predict which posts would cause outrage. “It was trained on 26,000 tweets posted around 2018 and 2019. We got raters from across the political spectrum, we taught them what we meant by outrage, and got them to label the data we later used to train our model,” Brady says.

The purpose of the model was to predict whether a message was an expression of moral outrage, an emotional state defined in the study as “a mixture of anger and disgust triggered by perceived moral transgressions.” After training, the AI was effective. “It performed as good as humans,” Brady claims. Facebook data was a bit trickier because the team did not have access to comments; all they had to work with were reactions, so they chose the anger reaction as a proxy for outrage. Once the data was sorted into outrageous and non-outrageous categories, Brady and his colleagues went on to determine whether the content was trustworthy news or misinformation.
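The pipeline Brady describes boils down to supervised text classification: human raters label example posts as outrage or not, and a model learns word patterns from those labels. The study's actual model and features aren't reproduced in this article, so here is only a minimal naive Bayes sketch in Python, with invented toy posts standing in for the 26,000 labeled tweets:

```python
import math
from collections import Counter

# Toy training set: invented posts with rater-style labels. The real study
# used 26,000 tweets labeled by raters from across the political spectrum.
TRAIN = [
    ("this is a disgusting betrayal of everything we stand for", "outrage"),
    ("how dare they lie to us, absolutely shameful", "outrage"),
    ("these corrupt officials should be punished immediately", "outrage"),
    ("the city council approved the new budget today", "neutral"),
    ("lovely weather for the farmers market this weekend", "neutral"),
    ("the team released quarterly earnings this morning", "neutral"),
]

def train(examples):
    """Fit a naive Bayes model: per-class word counts plus class counts."""
    word_counts = {"outrage": Counter(), "neutral": Counter()}
    class_counts = Counter()
    for text, label in examples:
        class_counts[label] += 1
        word_counts[label].update(text.split())
    return word_counts, class_counts

def predict(model, text):
    """Return the class with the higher log posterior for the text."""
    word_counts, class_counts = model
    vocab = {w for counts in word_counts.values() for w in counts}
    scores = {}
    for label in class_counts:
        total = sum(word_counts[label].values())
        score = math.log(class_counts[label] / sum(class_counts.values()))
        for word in text.split():
            # Laplace smoothing so unseen words don't zero out a class
            score += math.log((word_counts[label][word] + 1) / (total + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

model = train(TRAIN)
print(predict(model, "shameful corrupt lie"))      # leans toward "outrage"
print(predict(model, "the market opened today"))   # leans toward "neutral"
```

With enough labeled data, this kind of classifier can then score millions of unlabeled posts, which is what made the large-scale Twitter analysis tractable.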

“We took what is now the most widely used approach in the science of misinformation, which is a domain classification approach,” Brady says. The process boiled down to compiling a list of domains with very high and very low trustworthiness based on work done by fact-checking organizations. This way, for example, The Chicago Sun-Times was classified as trustworthy; Breitbart, not so much. “One of the issues there is that you could have a source that produces misinformation which one time produced a true story. We accepted that. We went with statistics and general rules,” Brady acknowledged. His team confirmed that sources classified in the study as misinformation produced news that was fact-checked as false six to eight times more often than reliable domains, which Brady’s team thought was good enough to work with.
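The domain-classification approach amounts to a lookup: map each shared link's source domain to a trustworthiness rating compiled from fact-checking organizations. A minimal Python sketch (the ratings table here is illustrative, not the study's actual list):

```python
from urllib.parse import urlparse

# Illustrative ratings keyed by domain, standing in for the fact-checker
# compiled lists the study used; these two entries mirror the article's
# examples but are not the study's real data.
DOMAIN_RATINGS = {
    "chicago.suntimes.com": "trustworthy",
    "breitbart.com": "misinformation",
}

def classify_link(url):
    """Map a shared URL to a source-level label, defaulting to 'unrated'."""
    domain = urlparse(url).netloc.lower()
    # Strip a leading "www." so "www.breitbart.com" matches "breitbart.com"
    if domain.startswith("www."):
        domain = domain[4:]
    return DOMAIN_RATINGS.get(domain, "unrated")

print(classify_link("https://www.breitbart.com/some-story"))   # misinformation
print(classify_link("https://chicago.suntimes.com/news/item")) # trustworthy
```

Note that this labels sources, not individual stories, which is exactly the limitation Brady acknowledges: a low-trust domain can occasionally publish a true story.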

Finally, the researchers started analyzing the data to answer questions like whether misinformation sources evoke more outrage, whether outrageous news was shared more often than non-outrageous news, and finally, what reasons people had for sharing outrageous content. And that’s when the idealized picture of honest, truthful citizens who shared misinformation just because they were too distracted to recognize it started to crack.

Going with the flow

The Facebook and Twitter data analyzed by Brady’s team revealed that misinformation evoked more outrage than trustworthy news. At the same time, people were way more likely to share outrageous content, regardless of whether it was misinformation or not. Putting those two trends together led the team to conclude outrage primarily boosted the spread of fake news since reliable sources usually produced less outrageous content.

“What we know about human psychology is that our attention is drawn to things rooted in deep biases shaped by evolutionary history,” Brady says. Those things are emotional content, surprising content, and especially, content that is related to the domain of morality. “Moral outrage is expressed in response to perceived violations of moral norms. This is our way of signaling to others that the violation has occurred and that we should punish the violators. This is done to establish cooperation in the group,” Brady explains.

This is why outrageous content has an advantage in the social media attention economy. It stands out, and standing out is a precursor to sharing. But there are other reasons we share outrageous content. “It serves very particular social functions,” Brady says. “It’s a cheap way to signal group affiliation or commitment.”

Cheap, however, didn’t mean completely free. The team found that the penalty for sharing misinformation, outrageous or not, was loss of reputation—spewing nonsense doesn’t make you look good, after all. The question was whether people really shared fake news because they failed to identify it as such or if they just considered signaling their affiliation was more important.

Flawed human nature

Brady’s team designed two behavioral experiments in which 1,475 people were presented with a selection of fact-checked news stories curated to contain both outrageous and non-outrageous content; they were also given both reliable news and misinformation. In both experiments, the participants were asked to rate how outrageous the headlines were.

The second task was different, though. In the first experiment, people were simply asked to rate how likely they were to share a headline, while in the second they were asked to determine if the headline was true or not.

It turned out that most people could discern between true and fake news. Yet they were willing to share outrageous news regardless of whether it was true or not—a result that was in line with previous findings from Facebook and Twitter data. Many participants were perfectly OK with sharing outrageous headlines, even though they were fully aware those headlines were misinformation.

Brady pointed to an example from the recent campaign, when a reporter pressed J.D. Vance about false claims regarding immigrants eating pets. “When the reporter pushed him, he implied that yes, it was fabrication, but it was outrageous and spoke to the issues his constituents were mad about,” Brady says. These experiments show that this kind of dishonesty is not exclusive to politicians running for office—people do this on social media all the time.

The urge to signal a moral stance quite often takes precedence over truth, but misinformation is not exclusively due to flaws in human nature. “One thing this study was not focused on was the impact of social media algorithms,” Brady notes. Those algorithms usually boost content that generates engagement, and we tend to engage more with outrageous content. This, in turn, incentivizes people to make their content more outrageous to get this algorithmic boost.

Science, 2024.  DOI: 10.1126/science.adl2829


Jacek Krywko is a freelance science and technology writer who covers space exploration, artificial intelligence research, computer science, and all sorts of engineering wizardry.


Bats use echolocation to make mental maps for navigation

Bat maps

To evaluate the route each bat took to get back to the roost, the team used their simulations to measure the echoic entropy it experienced along the way. The field where the bats were released was a low echoic entropy area, so during those first few minutes when they were flying around they were likely just looking for some more distinct, higher entropy landmarks to figure out where they were. Once they were oriented, they started flying to the roost, but not in a straight line. They meandered a bit, and the groups with higher sensory deprivation tended to meander more.

The meandering, the researchers suspect, was due to the trouble the bats had maintaining a steady path by echolocation alone. When they detected distinctive landmarks, like a specific orchard, they corrected their course. Repeating the process eventually brought them to their roost.

But could this be landmark-based navigation? Or perhaps simple beaconing, where an animal locks onto something like a distant light and moves toward it?

The researchers argue in favor of cognitive acoustic maps. “I think if echolocation wasn’t such a limited sensory modality, we couldn’t reach a conclusion about the bats using cognitive acoustic maps,” Goldshtein says. The distance between landmarks the bats used to correct their flight path was significantly longer than echolocation’s sensing range. Yet they knew which direction the roost was relative to one landmark, even when the next landmark on the way was acoustically invisible. You can’t do that without having the area mapped.

“It would be really interesting to understand how other bats do that, to compare between species,” Goldshtein says. There are bats that fly over a thousand meters above the ground, so they simply can’t sense any landmarks using echolocation. Other species hunt over the open sea, which, according to this team’s simulations, would be just one huge low-entropy area. “We are just starting. That’s why I do not study only navigation but also housing, foraging, and other aspects of their behavior. I think we still don’t know enough about bats in general,” Goldshtein claims.

Science, 2024.  DOI: 10.1126/science.adn6269


Remembering where your meals came from key for a small bird’s survival

Where’d I leave that again? —

For small birds, remembering where the food is beats forgetting when it’s gone.


It seems like common sense that being smart should increase the chances of survival in wild animals. Yet for a long time, scientists couldn’t demonstrate that, because it was unclear how to tell whether a lion or a crocodile or a mountain chickadee was actually smart. Our best attempts so far involved looking at indirect metrics, like brain size, or running lab tests of various cognitive skills, such as reversal learning, an ability that can help an animal adapt to a changing environment.

But a new, large-scale study on wild mountain chickadees, led by Joseph Welklin, an evolutionary biologist at the University of Nevada, showed that neither brain size nor reversal learning skills were correlated with survival. What mattered most for chickadees, small birds that save stashes of food, was simply remembering where they cached all their food. A chickadee didn’t need to be a genius to survive; it just needed to be good at its job.

Testing bird brains

“Chickadees cache one food item in one location, and they do this across a big area. They can have tens of thousands of caches. They do this in the fall and then, in the winter, they use a special kind of spatial memory to find those caches and retrieve the food. They are little birds, weight is like 12 grams, and they need to eat almost all the time. If they don’t eat for a few hours, they die,” explains Vladimir Pravosudov, an ornithologist at the University of Nevada and senior co-author of the study.

The team chose chickadees to study the impact of cognitive skills on survival because failure to find their caches is the birds’ most common cause of death. This way, the team hoped, the impact of other factors, like predation or disease, would be minimized.

First, however, Welklin and his colleagues had to come up with a way to test cognitive skills in a fairly large population of chickadees. They did it by placing a metal square with two smart feeders attached to each side among the trees where the chickadees lived. “The feeders were equipped with RFID receivers that recognized the signal whenever a chickadee, previously marked with a microchip-fitted leg band, landed near them and opened the doors to dispense a single seed,” Welklin says. After a few days spent getting the chickadees familiar with the door-opening mechanism, the team started running tests.

The first task was aimed at testing how good different chickadees were at their most important job: associating a location with food and remembering where it was. To this end, each of the 227 chickadees participating in the study was assigned just one feeder that opened when they landed on it; all the other feeders remained closed. A chickadee’s performance was measured by the number of trials it needed to figure out which feeder would serve it, and how many errors (landings on the wrong feeders) it made over four days. “If you were to find the right feeder at random, it should take you 3.5 trials on average. All the birds learned and performed way better than chance,” Pravosudov says.
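The performance measures described here, trials until the first correct feeder and total errors, can be computed straight from a log of RFID-tagged landings. A hypothetical Python sketch (bird IDs, feeder numbers, and the log format are all invented for illustration):

```python
# Invented assignments: microchip ID -> the one feeder that rewards that bird.
ASSIGNED = {"bird_A": 3, "bird_B": 7}

def score_bird(bird, landings, assigned):
    """Return (trials_to_first_success, total_errors) for one bird's log.

    `landings` is the ordered list of feeder numbers the bird landed on.
    """
    trials_to_success = None
    errors = 0
    for trial, feeder in enumerate(landings, start=1):
        if feeder == assigned[bird]:
            if trials_to_success is None:
                trials_to_success = trial
        else:
            errors += 1  # landing on a closed, wrong feeder counts as an error
    return trials_to_success, errors

# bird_A lands on feeders 1, 5, 3, 3: first success on trial 3, with 2 errors
print(score_bird("bird_A", [1, 5, 3, 3], ASSIGNED))  # (3, 2)
```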

The second task was meant to test reversal learning skills, widely considered the best predictor of survival. Once the chickadees learned the location of the reward-dispensing feeders, the locations were changed. The goal was to see how fast the birds would adapt to this change.

Once the results of both tests were in, the team monitored the birds using their microchip bands, catching them and changing the bands every year, for over six years. “Part of the reason that’s never been done in the past is just because it takes so much work,” says Welklin. But the work paid off in the end.


People game AIs via game theory

Games inside games —

They reject more of the AI’s offers, probably to get it to be more generous.


In the experiments, people had to judge what constituted a fair monetary offer.

In many cases, AIs are trained on material that’s either made or curated by humans. As a result, it can become a significant challenge to keep the AI from replicating the biases of those humans and the society they belong to. And the stakes are high, given we’re using AIs to make medical and financial decisions.

But some researchers at Washington University in St. Louis have found an additional wrinkle in these challenges: The people doing the training may potentially change their behavior when they know it can influence the future choices made by an AI. And, in at least some cases, they carry the changed behaviors into situations that don’t involve AI training.

Would you like to play a game?

The work involved getting volunteers to participate in a simple form of game theory. Testers gave two participants a pot of money—$10, in this case. One of the two was then asked to offer some fraction of that money to the other, who could choose to accept or reject the offer. If the offer was rejected, nobody got any money.

From a purely rational economic perspective, people should accept anything they’re offered, since they’ll end up with more money than they would have otherwise. But in reality, people tend to reject offers that deviate too much from a 50/50 split, as they have a sense that a highly imbalanced split is unfair. Their rejection allows them to punish the person who made the unfair offer. While there are some cultural differences in terms of where the split becomes unfair, this effect has been replicated many times, including in the current work.
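The game's payoff structure is simple to write down. A minimal Python sketch, where the 30 percent rejection threshold is an illustrative assumption rather than a value measured in the study:

```python
POT = 10  # the $10 pot used in the experiments

def play_round(offer_to_responder, rejection_threshold=0.3):
    """Return (proposer_payoff, responder_payoff) for one ultimatum round.

    The responder rejects any offer leaving them below `rejection_threshold`
    of the pot (an assumed fairness cutoff, for illustration only).
    """
    if offer_to_responder < rejection_threshold * POT:
        return 0, 0  # rejected: nobody gets any money
    return POT - offer_to_responder, offer_to_responder

print(play_round(5))  # fair 50/50 split is accepted: (5, 5)
print(play_round(1))  # lowball offer is rejected: (0, 0)
```

The sketch makes the "irrationality" concrete: by rejecting the $1 offer, the responder pays $1 to deny the proposer $9.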

The twist with the new work, performed by Lauren Treiman, Chien-Ju Ho, and Wouter Kool, is that they told some of the participants that their partner was an AI, and the results of their interactions with it would be fed back into the system to train its future performance.

This takes something that’s implicit in a purely game-theory-focused setup—that rejecting offers can help partners figure out what sorts of offers are fair—and makes it highly explicit. Participants, at least those in the experimental group who were told they were training an AI, could readily infer that their actions would influence the AI’s future offers.

The question the researchers were curious about was whether this would influence the behavior of the human participants. They compared this to the behavior of a control group who just participated in the standard game theory test.

Training fairness

Treiman, Ho, and Kool had pre-registered a number of multivariate analyses that they planned to perform with the data. But these didn’t always produce consistent results between experiments, possibly because there weren’t enough participants to tease out relatively subtle effects with any statistical confidence and possibly because the relatively large number of tests would mean that a few positive results would turn up by chance.

So, we’ll focus on the simplest question that was addressed: Did being told that you were training an AI alter someone’s behavior? This question was asked through a number of experiments that were very similar. (One of the key differences between them was whether the information regarding AI training was displayed with a camera icon, since people will sometimes change their behavior if they’re aware they’re being observed.)

The answer to the question is a clear yes: people will in fact change their behavior when they think they’re training an AI. Through a number of experiments, participants were more likely to reject unfair offers if they were told that their sessions would be used to train an AI. In a few of the experiments, they were also more likely to reject what were considered fair offers (in US populations, the rejection rate goes up dramatically once someone proposes a 70/30 split, meaning $7 goes to the person making the proposal in these experiments). The researchers suspect this is due to people being more likely to reject borderline “fair” offers such as a 60/40 split.

This happened even though rejecting any offer exacts an economic cost on the participants. And people persisted in this behavior even when they were told that they wouldn’t ever interact with the AI after training was complete, meaning they wouldn’t personally benefit from any changes in the AI’s behavior. So here, it appeared that people would make a financial sacrifice to train the AI in a way that would benefit others.

Strikingly, in two of the three experiments that did follow up testing, participants continued to reject offers at a higher rate two days after their participation in the AI training, even when they were told that their actions were no longer being used to train the AI. So, to some extent, participating in AI training seems to have caused them to train themselves to behave differently.

Obviously, this won’t affect every sort of AI training, and a lot of the work that goes into producing material used to train something like a large language model won’t have been done with any awareness that it might be used to train an AI. Still, there are plenty of cases where humans do get more directly involved in training, so it’s worthwhile being aware that this is another route that can allow biases to creep in.

PNAS, 2024. DOI: 10.1073/pnas.2408731121  (About DOIs).


How do brainless creatures control their appetites?

Feed me! —

Separate systems register when the animals have eaten and control feeding behaviors.


The hydra is a tiny, Lovecraftian-looking freshwater creature with a mouth surrounded by tentacles on one end, an elongated body, and a foot on the other end. It has no brain or centralized nervous system, yet despite lacking both, it can still feel hunger and fullness. How can these creatures know when they are hungry and realize when they have had enough?

While they lack brains, hydra do have a nervous system. Researchers from Kiel University in Germany found they have an endodermal (in the digestive tract) and an ectodermal (in the outermost layer of the animal) neuronal population, both of which help them react to food stimuli. Ectodermal neurons control physiological functions such as moving toward food, while endodermal neurons are associated with feeding behavior such as opening the mouth, which is also used to expel anything indigestible.

Even such a limited nervous system is capable of some surprisingly complex functions. Hydras might even give us some insights into how appetite evolved and what the early evolutionary stages of a central nervous system were like.

No, thanks, I’m full

Before finding out how the hydra’s nervous system controls hunger, the researchers focused on what causes the strongest feeling of satiety, or fullness, in the animals. They were fed with the brine shrimp Artemia salina, which is among their usual prey, and exposed to the antioxidant glutathione. Previous studies have suggested that glutathione triggers feeding behavior in hydras, causing them to curl their tentacles toward their mouths as if they are swallowing prey.

Hydra fed with as much Artemia as they could eat were given glutathione afterward, while the other group was given only glutathione and no actual food. Hunger was gauged by how fast and how often they opened their mouths.

It turned out that the first group, which had already glutted themselves on shrimp, showed hardly any response to glutathione eight hours after being fed. Their mouths barely opened, and only slowly when they did, because the animals were not hungry enough for even a feeding trigger like glutathione to make them feel they needed seconds.

It was only at 14 hours post-feeding that the hydra that had eaten shrimp opened their mouths wide enough and fast enough to indicate hunger. However, those that were not fed and only exposed to glutathione started showing signs of hunger only four hours after exposure. Mouth opening was not the only behavior provoked by hunger since starved animals also somersaulted through the water and moved toward light, behaviors associated with searching for food. Sated animals would stop somersaulting and cling to the wall of the tank they were in until they were hungry again.

Food on the “brain”

After observing the behavioral changes in the hydra, the research team looked into the neuronal activity behind those behaviors. They focused on two neuronal populations, the ectodermal population known as N3 and the endodermal population known as N4, both known to be involved in hunger and satiety. While these had been known to influence hydra feeding responses, how exactly they were involved was unknown until now.

Hydra have N3 neurons all over their bodies, especially in the foot. Signals from these neurons tell the animal that it has eaten enough and is experiencing satiety. The frequency of these signals decreased as the animals grew hungrier and displayed more behaviors associated with hunger. The frequency of N3 signals did not change in animals that were only exposed to glutathione and not fed, and these hydra behaved just like animals that had gone without food for an extended period of time. It was only when they were given actual food that the N3 signal frequency increased.

“The ectodermal neuronal population N3 is not only responding to satiety by increasing neuronal activity, but is also controlling behaviors that changed due to feeding,” the researchers said in their study, which was recently published in Cell Reports.

Though N4 neurons were only seen to communicate indirectly with the N3 population in the presence of food, they were found to influence eating behavior by regulating how wide the hydras opened their mouths and how long they kept them open. A lower frequency of N4 signals was seen in hydra that were starved or only exposed to glutathione, while a higher frequency was associated with the animals keeping their mouths shut.

So, what can the neuronal activity of a tiny, brainless creature possibly tell us about the evolution of our own complex brains?

The researchers think the hydra’s simple nervous system may parallel the much more complex central and enteric (in the gut) nervous systems that we have. While N3 and N4 operate independently, there is still some interaction between them. The team also suggests that the way N4 regulates the hydra’s eating behavior is similar to the way the digestive tracts of mammals are regulated.

“A similar architecture of neuronal circuits controlling appetite/satiety can be also found in mice where enteric neurons, together with the central nervous system, control mouth opening,” they said in the same study.

Maybe, in a way, we really do think with our gut.

Cell Reports, 2024. DOI: 10.1016/j.celrep.2024.114210


Dogs’ brain activity shows they recognize the names of objects

Wired for science!


Needle, a cheerful miniature schnauzer I had as a kid, turned into a ball of unspeakable noise and fury each time she saw a dog called Puma. She hated Puma so much she would go ballistic, barking and growling. Merely whispering the name “Puma” set off the same reaction, as though the sound of it and the idea of the dog it represented were clearly connected deep in Needle’s mind.

A connection between a word and a mental representation of its meaning is called “referential understanding,” and for a very long time, we believed dogs lacked this ability. Now, a study published by a team of Hungarian researchers indicates we might have been wrong.

Practice makes perfect

The idea that dogs couldn’t form associations with language in a referential manner grew out of behavioral studies in which dogs were asked to do a selective fetching task. The canines had a few objects placed in front of them (like a toy or a bone) and then had to fetch the one specifically named by their owner.

“In laboratory conditions, the dogs performed at random, fetching whatever they could grab first, even though their owners claimed they knew the names of the objects,” said Marianna Boros, a researcher at Neuroethology of Communication Lab at Eötvös Loránd University in Budapest, Hungary. “But the problem is when the dogs are not trained for the task, there are hundreds of things that can disturb them. They can be more interested in one specific toy, they may be bored, or they may not understand the task. So many distractions.”

To get around the issue of distractions, her team checked to see if the dogs could understand words passively using EEG brain monitoring. In humans, the EEG reading that is considered a telltale sign of semantic reasoning is the N400 effect.

“The work on the N400 was first published in 1981, and hundreds of studies replicated it since then with different stimuli. Typically, you show images of objects to the subject and say matching or mismatching names. When you measure EEG brain activity, you will see it looks different in match and mismatch scenarios,” explained Lilla Magyari, also a scientist at Neuroethology of Communication Lab and co-author of the study. (It’s called the N400 effect because the peak of this difference appears around 400 milliseconds after an object is presented, Magyari explained.)

The only change the team made to adapt a standard N400 test to dogs was switching the order of stimuli—the words were uttered first, and the matching or mismatching objects were shown second. “Because when they hear the word which activates mental representation of the object, they are expecting to see it. The sound made them more attentive,” said Magyari.
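The core of an N400-style analysis is comparing averaged EEG responses across the two conditions and finding when they diverge. A toy Python sketch with synthetic signals, where the bump planted near 400 ms simply stands in for real brain activity:

```python
# One sample every 10 ms over an 800 ms epoch (invented sampling scheme).
SAMPLE_MS = list(range(0, 800, 10))

def synthetic_epoch(mismatch):
    """One fake epoch: flat baseline, plus a bump near 400 ms if mismatched."""
    return [1.0 if (mismatch and 380 <= t <= 420) else 0.0 for t in SAMPLE_MS]

def average(epochs):
    """Average a list of epochs sample by sample (the ERP)."""
    return [sum(vals) / len(vals) for vals in zip(*epochs)]

match_avg = average([synthetic_epoch(False) for _ in range(20)])
mismatch_avg = average([synthetic_epoch(True) for _ in range(20)])

# The difference wave: where match and mismatch conditions diverge
difference = [m - n for m, n in zip(mismatch_avg, match_avg)]

# Latency of the largest divergence; real N400 analyses look for a peak
# in roughly this window after stimulus onset.
peak_ms = SAMPLE_MS[difference.index(max(difference))]
print(peak_ms)  # 380, the first sample of the planted bump
```

Real ERP pipelines add filtering, baseline correction, and artifact rejection, but the match-minus-mismatch subtraction above is the conceptual heart of the effect.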

Timing is everything

In the experiment, the dogs started out lying on a mat with EEG gear on their heads in a room with an experimenter or the owner of a different dog. The owner of the dog being tested was separated by a glass pane with controllable opacity. “It was important because EEG studies [can] very precisely time the moment of presentation of your stimulus,” said Boros.


Sentences spoken by the owners that would get the dogs’ attention—things like “Kun-kun, look! The ball!”—were recorded and played to each dog through a loudspeaker. Then, 2,000 milliseconds after each dog heard the sentence, the pane would turn transparent, and the owner would appear holding a matching or mismatching toy. “Each test lasted for as long as the dog was happy to participate. The moment it started to get up or look away, we just stopped the test, and the dog could leave the mat and we just finished by playing sessions. It was all super dog-friendly,” Boros said.


This bird is like a GPS for honey

Show me the honey —

The honeyguide recognizes calls made by different human groups.


With all the technological advances humans have made, it may seem like we’ve lost touch with nature—but not all of us have. People in some parts of Africa use a guide more effective than any GPS system when it comes to finding beeswax and honey. This is not a gizmo, but a bird.

The Greater Honeyguide (highly appropriate name), Indicator indicator (even more appropriate scientific name), knows where all the beehives are because it eats beeswax. The Hadza people of Tanzania and Yao people of Mozambique realized this long ago. Hadza and Yao honey hunters have formed a unique relationship with this bird species by making distinct calls, and the honeyguide reciprocates with its own calls, leading them to a hive.

Because the Hadza and Yao calls differ, zoologist Claire Spottiswoode of the University of Cambridge and anthropologist Brian Wood of UCLA wanted to find out if the birds respond generically to human calls, or are attuned to their local humans. They found that the birds are much more likely to respond to a local call, meaning that they have learned to recognize that call.

Come on, get that honey

To see which sound the birds were most likely to respond to, Spottiswoode and Wood played three recordings, starting with the local call. The Yao honeyguide call is what the researchers describe as “a loud trill followed by a grunt (‘brrrr-hm’),” while the Hadza call is more of “a melodic whistle,” as they say in a study recently published in Science. The second recording they would play was the foreign call, which would be the Yao call in Hadza territory and vice versa.

The third recording was an unrelated human sound meant to test whether the human voice alone was enough for a honeyguide to follow. Because Hadza and Yao voices sound similar, the researchers would alternate among recordings of honey hunters speaking words such as their names.

So which sounds were the most effective cues for honeyguides to partner with humans? In Tanzania, local Hadza calls were three times more likely to initiate a partnership with a honeyguide than Yao calls or human voices. Local Yao calls were also the most successful in Mozambique, where they were twice as likely as Hadza calls or human voices to elicit a response leading to a cooperative search for a beehive. Though honeyguides did sometimes respond to the other sounds, and were often willing to cooperate when hearing them, it became clear that the birds in each region had learned a local cultural tradition, one just as much a part of their lives as of the lives of the humans who began it.

Now you’re speaking my language

There is a reason that honey hunters in both the Hadza and Yao tribes told Wood and Spottiswoode that they have never changed their calls and will never change them. If they did, they’d be unlikely to gather nearly as much honey.

How did this interspecies communication evolve? Other African cultures besides the Hadza and Yao have their own calls to summon a honeyguide. Why do the types of calls differ? The researchers do not think these calls came about randomly.

Both the Hadza and Yao people have their own unique languages, and sounds from them may have been incorporated into their calls. But there is more to it than that. The Hadza often hunt animals when hunting for honey. Therefore, the Hadza don’t want their calls to be recognized as human, or else the prey they are after might sense a threat and flee. This may be why they use whistles to communicate with honeyguides—by sounding like birds, they can both attract the honeyguides and stalk prey without being detected.

In contrast, the Yao do not hunt mammals, relying mostly on agriculture and fishing for food. This, along with the fact that they try to avoid potentially dangerous creatures such as lions, rhinos, and elephants, can explain why they use recognizably human vocalizations to call honeyguides. Human voices may scare these animals away, so Yao honey hunters can safely seek honey with their honeyguide partners. These findings show that cultural diversity has had a significant influence on calls to honeyguides.

While animals might not literally speak our language, the honeyguide is just one of many species that has its own way of communicating with us. They can even learn our cultural traditions.

“Cultural traditions of consistent behavior are widespread in non-human animals and could plausibly mediate other forms of interspecies cooperation,” the researchers said in the same study.

Honeyguides start guiding humans as soon as they begin to fly, and this knack, combined with learning to answer traditional calls and collaborate with honey hunters, works well for both human and bird. Maybe they are (in a way) speaking our language.

Science, 2023.  DOI: 10.1126/science.adh412

This bird is like a GPS for honey

people-exaggerate-the-consequences-of-saying-no-to-invites

People exaggerate the consequences of saying no to invites

Just say no —

People are more understanding of the reasons for rejections than most of us think.

The invitation might be nice, but you can feel free to say no.

The holidays can be a time of parties, events, dinners, outings, get-togethers, impromptu meetups—and stress. Is it really an obligation to say yes to every single invite? Is not showing up to Aunt Tillie’s annual ugly sweater party this once going to mean a permanent ban? Turning down some of those invitations waiting impatiently for an RSVP can feel like a risk.

But wait! Turning down an invite won’t necessarily have the harsh consequences that are often feared (especially this time of year). A group of researchers led by psychologist and assistant professor Julian Givi of West Virginia University put test subjects through a series of experiments to see if a host’s reaction to a declined invitation would really be as awful as the invitee feared. In the experiments, those who declined invitations were not guilted or blacklisted by the inviters. It turns out that hosts were not as upset as invitees thought they would be when someone couldn’t make it.

“Invitees have exaggerated concerns about how much the decline will anger the inviter, signal that the invitee does not care about the inviter, make the inviter unlikely to offer another invitation in the future, and so forth,” the researchers said in a study published by the American Psychological Association.

You’re invited…now what?

Why are we so nervous that declining invitations will annihilate our social lives? Appearing as if we don’t care about the host is one obvious reason. The research team also thinks there is an additional explanation behind this: we mentally exaggerate how much the inviter focuses on the rejection, and underestimate how much they consider what might be going on in our heads and in our lives. This makes us believe that there is no way the inviter will be understanding about any excuse.

All this anxiety means we often end up reluctantly dragging ourselves to a holiday movie or dinner or that infamous ugly sweater party, and saying yes to every single invite, even if it eventually leads to holiday burnout.

To determine if our fears are justified, the psychologists who ran the study focused on three things. The first was declining invitations for fun social activities, such as ice skating in the park. The second focus was how much invitees exaggerated the expected consequences of declining. Finally, the third focus was on how invitees also exaggerated how much hosts were affected by the rejection itself, as opposed to the reasons the invitee gave for turning down the invite.

The show (or party, or whatever) must go on

There were five total experiments that assessed whether someone declining an invitation felt more anxious about it than they should have. In these experiments, invitees were the subjects who had to turn down an invitation, while hosts were the subjects who were tasked with reacting to a declined invitation.

The first experiment had subjects imagining that a hypothetical friend invited them to a museum exhibit, but they turned the invitation down. The invitee then had to describe the possible negative consequences of saying no. Other subjects in this experiment were told to imagine being the one who invited the friend who turned them down, and then report how they would feel.

Most of those imagining they were the invitees overestimated what the reaction of the host would be.

Invitees predicted that a rejected host would experience anger and disappointment, and assume the invitee didn’t care enough about the host. Long term, they also expected that their relationship with the host would be damaged. They weren’t especially concerned about not being invited to future events or that hosts would retaliate by turning them down if they issued invites.

The four remaining experiments slightly altered the circumstances and measured these same potential consequences, obtaining similar results. The second experiment used hosts and invitees who were couples in real life, and who gave each other actual invitations and rejections instead of just imagining them. Invitees again overestimated how negative the hosts’ reactions would be. In the third experiment, outside observers were asked to read a summary of the invitation and rejection, then predict hosts’ reactions. The observers again thought the inviters would react much more negatively than they actually did.

In the fourth experiment, stakes were higher because subjects were told to imagine the invitation and rejection scenario involving a real friend, albeit one who was not present for the experiment. Invitees had to predict how negative their friend’s reaction would be to their response and also their friend’s opinion on why they might have declined. Those doing the inviting had to describe their reactions to a rejection and predict their friend’s expectations about how they would react. Invitees tended to predict more negative reactions than hosts did.

Finally, the fifth experiment also had subjects working individually, this time putting themselves in the place of both the host and invitee. They had to read and respond to an invitation rejection scenario from the perspective of both roles, with the order they handled host and invitee randomized. Those who took the host role first realized that hosts usually empathize with the reasons someone is not able to attend, making them unlikely to predict highly negative reactions to a declined invitation when they were asked later.

Overestimation

Despite their differences, these experiments all point in a similar direction. “Consistent with our theorizing, invitees tended to overestimate the negative ramifications of the invitation decline,” the researchers said in the same study.

Evidently, Aunt Tillie will not be gravely disappointed if her favorite niece or nephew cannot make it to her ugly sweater party this year—some events just happen to be scheduled at especially inconvenient times. This study, however, didn’t test the ramifications of declining invites for more significant but less frequent events, such as weddings and baby showers. Based on the results for smaller events, it’s likely that the thought of turning down such an invite will produce even more anxiety. The key question is whether the hosts will be less understanding for big events.

Givi and his team still note that accepting invitations can have positive effects. Human beings benefit from being around other people, and isolation can be detrimental. Still, we need to remember that too much of a good thing can be too much—everyone needs time to recharge. Even with the heavy feeling of obligation that comes with being invited somewhere, turning down one or two invites will probably not start a holiday apocalypse—unless your aunt is an exception.

Journal of Personality and Social Psychology, 2023.  DOI: 10.1037/pspi0000443


what-happens-in-a-crow’s-brain-when-it-uses-tools? 

What happens in a crow’s brain when it uses tools? 

This is your brain on tools —

Researchers trace the areas of the brain that are active when birds are using tools.

Sure, they can use tools, but do they know where the nearest subway stop is?

“A thirsty crow wanted water from a pitcher, so he filled it with pebbles to raise the water level to drink,” summarizes a famous Aesop fable. While this tale is thousands of years old, animal behaviorists still use this challenge to study corvids (which include crows, ravens, jays, and magpies) and their use of tools. In a recent Nature Communications study, researchers from a collaboration of universities across Washington, Florida, and Utah used radioactive tracers in the brains of several American crows to see which parts of their brains were active when they used stones to obtain food from the bottom of a water-filled tube.

Their results indicate that the motor learning and tactile control centers were activated in the brains of the more proficient crows, while the sensory and higher-order processing centers lit up in the brains of less proficient crows. These results suggest that competence with tools is linked to certain memories and muscle control, which the researchers claimed is similar to a ski jumper visualizing the course before jumping.

The researchers also found that out of their avian test subjects, female crows were especially proficient at tool usage, succeeding in the challenge quickly. “[A] follow-up question is whether female crows actually have more need for creative thinking relative to male crows,” elaborates Loma Pendergraft, the study’s first author and a graduate student at the University of Washington, who wants to understand if the caregiving and less dominant role of female crows gives them a higher capacity for tool use.

While only two species of crow (the New Caledonian crow and the Hawaiian crow) inherently use twigs and sticks as foraging tools, this study also suggests that other crow species, like the American crow, have the neural flexibility to learn to use tools.

A less invasive look at bird brains

Due to their unique behaviors, complex social structures, and reported intelligence, crows have fascinated animal behaviorists for decades. Scientists can study crows’ brains in real time using 18F-fluorodeoxyglucose (FDG), a radioactive tracer that the researchers injected into the crows. They then use positron emission tomography (PET) scans to see which brain areas are activated during different tasks.

“FDG-PET is a method we use to remotely examine activity throughout the entire brain without needing to do any surgeries or implants,” explained Pendergraft. “It’s like [a functional] MRI.” The FDG-PET method is noninvasive, and because the crows aren’t required to sit still while the tracer is absorbed, it minimizes the stress they feel during the experiment. In the Nature Communications study, Pendergraft and his team ensured the crows were anesthetized before scanning them.

FDG is also used in various medical imaging techniques, such as diagnosing Alzheimer’s disease or screening for cancerous tissue. “Basically, the body treats it as glucose, a substance needed for cells to stay alive,” Pendergraft added. “If a body part is working harder than normal, it’s going to need extra glucose to power the additional activity. This means we can measure relative FDG concentrations within the brain as a proxy for relative brain activity.”
