Language


Researchers track individual neurons as they respond to words

Pondering phrasing —

When we process language, individual neurons each respond to a set of words with similar meanings.

Human neuron, digital light microscope. (Photo by BSIP/Universal Images Group via Getty Images)

“Language is a huge field, and we are novices in this. We know a lot about how different areas of the brain are involved in linguistic tasks, but the details are not very clear,” says Mohsen Jamali, a computational neuroscience researcher at Harvard Medical School who led a recent study into the mechanism of human language comprehension.

“What was unique in our work was that we were looking at single neurons. There are a lot of studies like that in animals—studies in electrophysiology—but they are very limited in humans. We had a unique opportunity to access neurons in humans,” Jamali adds.

Probing the brain

Jamali’s experiment involved playing recorded sets of words to patients who, for clinical reasons, had implants that monitored the activity of neurons located in their left prefrontal cortex—the area that’s largely responsible for processing language. “We had data from two types of electrodes: the old-fashioned tungsten microarrays that can pick up the activity of a few neurons, and the Neuropixel probes, which are the latest development in electrophysiology,” Jamali says. The Neuropixels were first inserted in human patients in 2022 and can record the activity of over a hundred neurons.

“So we were in the operation room and asked the patient to participate. We had a mixture of sentences and words, including gibberish sounds that weren’t actual words but sounded like words. We also had a short story about Elvis,” Jamali explains. He says the goal was to figure out if there was some structure to the neuronal response to language. The gibberish words served as a control to see whether the neurons responded to them differently.

“The electrodes we used in the study registered voltage—it was a continuous signal at 30 kHz sampling rate—and the critical part was to dissociate how many neurons we had in each recording channel. We used statistical analysis to separate individual neurons in the signal,” Jamali says. Then, his team synchronized the neuronal activity signals with the recordings played to the patients down to a millisecond and started analyzing the data they gathered.
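The paper's exact pipeline isn't spelled out here, but a minimal sketch of the first step, detecting spikes in a 30 kHz voltage trace by amplitude thresholding, could look like the following. The threshold rule and the synthetic data are illustrative assumptions on my part; real spike sorting additionally clusters waveform shapes to assign spikes to individual neurons, which is the statistical separation Jamali describes.

```python
import numpy as np

FS = 30_000  # 30 kHz sampling rate, as described in the study

def detect_spikes(voltage, fs=FS, k=4.5):
    """Return putative spike times (in seconds) from a raw voltage trace.

    Thresholding at k times a robust noise estimate (median absolute
    deviation / 0.6745) is a common convention in extracellular
    electrophysiology; it is an assumption here, not a detail from the paper.
    """
    noise = np.median(np.abs(voltage)) / 0.6745
    threshold = -k * noise  # extracellular spikes are typically negative-going
    # Indices where the trace crosses the threshold downward
    crossings = np.flatnonzero(
        (voltage[:-1] >= threshold) & (voltage[1:] < threshold)
    )
    return crossings / fs

# Toy usage: one second of synthetic noise with three injected "spikes"
rng = np.random.default_rng(0)
trace = rng.normal(0.0, 1.0, FS)
for t in (0.2, 0.5, 0.8):
    trace[int(t * FS)] -= 12.0
print(detect_spikes(trace))  # approximately [0.2, 0.5, 0.8]
```

Assigning each detected spike to a particular neuron would then involve clustering the waveform snippets around these times; the resulting spike trains are what get aligned, to the millisecond, with the audio the patients heard.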

Putting words in drawers

“First, we translated words in our sets to vectors,” Jamali says. Specifically, his team used Word2Vec, a technique from computer science that finds relationships between words in a large corpus of text. Word2Vec can tell whether certain words have something in common—whether they are synonyms, for example. “Each word was represented by a vector in a 300-dimensional space. Then we just looked at the distance between those vectors, and if the distance was close, we concluded the words belonged in the same category,” Jamali explains.

The team then used these vectors to identify words that clustered together, suggesting they had something in common (which the researchers later confirmed by examining the words in each cluster). They then determined whether specific neurons responded differently to different clusters of words. It turned out they did.
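As a rough sketch of that pipeline, here is how pretrained 300-dimensional Word2Vec vectors can be compared and clustered in Python. The gensim model name, the example words, and the use of k-means are my assumptions for illustration; the study's exact embedding corpus and clustering method may differ.

```python
import gensim.downloader as api
from sklearn.cluster import KMeans

# 300-dimensional Word2Vec vectors pretrained on Google News
# (a commonly used model; assumed here for illustration)
model = api.load("word2vec-google-news-300")

words = ["dog", "cat", "horse",      # animals
         "happy", "angry", "sad",    # feelings
         "rain", "snow", "wind"]     # weather
vectors = [model[w] for w in words]

# Nearby vectors suggest related meanings...
print(model.similarity("dog", "cat"))   # relatively high
print(model.similarity("dog", "rain"))  # relatively low

# ...and clustering the vectors groups the words into semantic domains
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(vectors)
for word, label in zip(words, labels):
    print(label, word)
```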

“We ended up with nine clusters. We looked at which words were in those clusters and labeled them,” Jamali says. It turned out that each cluster corresponded to a neat semantic domain. Specialized neurons responded to words referring to animals, while other groups responded to words referring to feelings, activities, names, weather, and so on. “Most of the neurons we registered had one preferred domain. Some had more, like two or three,” Jamali explains.

The mechanics of comprehension

The team also tested if the neurons were triggered by the mere sound of a word or by its meaning. “Apart from the gibberish words, another control we used in the study was homophones,” Jamali says. The idea was to test if the neurons responded differently to the word “sun” and the word “son,” for example.

It turned out that the response changed based on context. When the sentence made it clear the word referred to a star, the sound activated the neurons that responded to weather phenomena. When it was clear that the same sound referred to a person, it activated the neurons that responded to relatives. “We also presented the same words at random without any context and found that it didn’t elicit as strong a response as when the context was available,” Jamali says.

But language processing in our brains must involve more than just different semantic categories being handled by different groups of neurons.

“There are many unanswered questions in linguistic processing. One of them is how much structure matters, the syntax. Is it represented by a distributed network, or can we find a subset of neurons that encode structure rather than meaning?” Jamali asks. Another thing his team wants to study is what neural processing looks like during speech production, in addition to comprehension. “How are those two processes related in terms of brain areas and the way the information is processed?” Jamali adds.

The last thing—and according to Jamali the most challenging thing—is using the Neuropixel probes to see how information is processed across different layers of the brain. “The Neuropixel probe travels through the depths of the cortex, and we can look at the neurons along the electrode and say like, ‘OK, the information from this layer, which is responsible for semantics, goes to this layer, which is responsible for something else.’ We want to learn how much information is processed by each layer. This should be challenging, but it would be interesting to see how different areas of the brain are involved at the same time when presented with linguistic stimuli,” Jamali concludes.

Nature, 2024. DOI: 10.1038/s41586-024-07643-2



Whale songs have features of language, but whales may not be speaking

A group of sperm whales and remora idle near the surface of the ocean.

Whales use complex communication systems we still don’t understand, a trope exploited in sci-fi shows like Apple TV’s Extrapolations. That show featured a humpback whale (voiced by Meryl Streep) discussing Mahler’s symphonies with a human researcher via some AI-powered inter-species translation app developed in 2046.

We’re a long way from that future. But a team of MIT researchers has now analyzed a database of Caribbean sperm whales’ calls and found that there really is contextual and combinatorial structure in there. But does that mean whales have a human-like language, and we can just wait for ChatGPT 8.0 to figure out how to translate from English to Sperm-Whaleish? Not really.

One-page dictionary

“Sperm whales communicate using clicks. These clicks occur in short packets we call codas that typically last less than two seconds, containing three to 40 clicks,” said Pratyusha Sharma, a researcher at the MIT Computer Science and Artificial Intelligence Laboratory and the lead author of the study. Her team argues that codas are analogues of words in human language and are further organized in coda sequences that are analogues of sentences. “Sperm whales are not born with this communication system; it’s acquired and changes over the course of time,” Sharma said.

Seemingly, sperm whales have a lot to communicate about. Earlier observational studies revealed that they live a fairly complex social life revolving around family units that form larger structures called clans. They also have advanced hunting strategies and make group decisions, seeking consensus on where to go and what to do.

Despite this complexity in behavior and relationships, their vocabulary seemed surprisingly sparse.

Sharma’s team sourced a record of codas from the dataset of the Dominica Sperm Whale Project, a long-term study on sperm whales that recorded and annotated 8,719 individual codas made by EC-1, a sperm whale clan living in East Caribbean waters. Those 8,719 recorded codas, according to earlier research on this database, were really just 21 coda types that the whales were using over and over.

A set of 21 words didn’t look like much of a language. “But this [number] is exactly what we found was not true,” Sharma said.

Fine-grained changes

“People doing those earlier studies were looking at the calls in isolation… They were annotating these calls, taking them out of context, shuffling them up, and then trying to figure out what kind of patterns were recurring,” Sharma explained. Her team, by contrast, analyzed the same calls in their full context, basically looking at entire exchanges rather than at separate codas. “One of the things we saw was fine-grained changes in the codas that other whales participating in the exchange were noticing and reacting to. If you looked at all these calls out of context, all these fine-grained changes would be lost; they would be considered noise,” Sharma said.

The first of those newly recognized fine-grained changes was termed “rubato,” borrowed from music, where it means introducing slight variations in the tempo of a piece. Communicating sperm whales could stretch or shrink a coda while keeping the same rhythm (where rhythm describes the spacing between the clicks in a coda).

The second feature the researchers discovered was ornamentation. “An ornament is an extra click added at the end of the coda. And when you have this extra click, it marks a critical point, and the call changes. It happens either toward the beginning or at the end of the call,” said Sharma.

The whales could individually manipulate rubato and ornamentation, as well as the previously identified rhythm and tempo features. By combining these variations, they can produce a very large variety of codas. “The whales produce way more combinations of these features than 21—the information-carrying capacity of this system is a lot more capable than that,” Sharma said.

Her team identified 18 types of rhythm, five types of tempo, three variants of rubato, and the option to add an ornament or not in the sperm whales’ communication system. That multiplies out to 540 possible codas, roughly 150 of which the whales frequently used in real life. Not only were sperm whales’ calls built from distinctive units at the coda level (meaning they were combinatorial), but they were also compositional, in that a call contained multiple codas.
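The arithmetic behind that 540 is simply the product of the independent feature choices, assuming, as the count implies, that the features combine freely:

```python
# Feature inventory reported by the team; the total is their product
rhythm_types = 18
tempo_types = 5
rubato_variants = 3
ornament_options = 2  # ornament present or absent

print(rhythm_types * tempo_types * rubato_variants * ornament_options)  # 540
```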

But does that get us any closer to decoding the whales’ language?

“The combinatoriality at the word level and compositionality at the sentence level in human languages is something that looks very similar to what we found,” Sharma said. But the team didn’t determine whether meaning was being conveyed, she added. And without evidence of meaning, we might be barking up the wrong tree entirely.



This bird is like a GPS for honey

Show me the honey —

The honeyguide recognizes calls made by different human groups.

A greater honeyguide perched on a wall in front of an urban backdrop.

With all the technological advances humans have made, it may seem like we’ve lost touch with nature—but not all of us have. People in some parts of Africa use a guide more effective than any GPS system when it comes to finding beeswax and honey. This is not a gizmo, but a bird.

The Greater Honeyguide (highly appropriate name), Indicator indicator (even more appropriate scientific name), knows where all the beehives are because it eats beeswax. The Hadza people of Tanzania and Yao people of Mozambique realized this long ago. Hadza and Yao honey hunters have formed a unique relationship with this bird species by making distinct calls, and the honeyguide reciprocates with its own calls, leading them to a hive.

Because the Hadza and Yao calls differ, zoologist Claire Spottiswoode of the University of Cambridge and anthropologist Brian Wood of UCLA wanted to find out if the birds respond generically to human calls, or are attuned to their local humans. They found that the birds are much more likely to respond to a local call, meaning that they have learned to recognize that call.

Come on, get that honey

To see which sound the birds were most likely to respond to, Spottiswoode and Wood played three recordings, starting with the local call. The Yao honeyguide call is what the researchers describe as “a loud trill followed by a grunt (‘brrrr-hm’),” while the Hadza call is more of “a melodic whistle,” as they say in a study recently published in Science. The second recording they played was the foreign call: the Yao call in Hadza territory and vice versa.

The third recording was an unrelated human sound meant to test whether the human voice alone was enough for a honeyguide to follow. Because Hadza and Yao voices sound similar, the researchers would alternate among recordings of honey hunters speaking words such as their names.

So which sounds were the most effective cues for honeyguides to partner with humans? In Tanzania, local Hadza calls were three times more likely to initiate a partnership with a honeyguide than Yao calls or human voices were. Local Yao calls were likewise the most successful in Mozambique, where they were twice as likely as Hadza calls or human voices to elicit a response leading to a cooperative search for a beehive. Though honeyguides did sometimes respond to the other sounds and were often willing to cooperate when hearing them, it became clear that the birds in each region had learned a local cultural tradition that had become as much a part of their lives as it was of the humans who began it.
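To make concrete what "three times more likely" means in a playback experiment, here is a minimal sketch of how such tallies reduce to response rates. The counts below are invented purely for illustration; they are not the study's data.

```python
# Hypothetical playback tallies: (responses, trials) per recording type.
# These numbers are made up to illustrate the calculation only.
playbacks = {
    "local call":   (30, 40),
    "foreign call": (10, 40),
    "human voice":  (10, 40),
}

rates = {name: responses / trials
         for name, (responses, trials) in playbacks.items()}
baseline = rates["foreign call"]
for name, rate in rates.items():
    print(f"{name}: {rate:.0%} response rate, "
          f"{rate / baseline:.1f}x the foreign call")
```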

Now you’re speaking my language

There is a reason that honey hunters in both the Hadza and Yao tribes told Wood and Spottiswoode that they have never changed their calls and will never change them. If they did, they’d be unlikely to gather nearly as much honey.

How did this interspecies communication evolve? Other African cultures besides the Hadza and Yao have their own calls to summon a honeyguide. Why do the types of calls differ? The researchers do not think these calls came about randomly.

Both the Hadza and Yao people have their own unique languages, and sounds from them may have been incorporated into their calls. But there is more to it than that. The Hadza often hunt animals while searching for honey, so they don't want their calls to be recognized as human; otherwise, the prey they are after might sense a threat and flee. This may be why they use whistles to communicate with honeyguides—by sounding like birds, they can both attract the honeyguides and stalk prey without being detected.

In contrast, the Yao do not hunt mammals, relying mostly on agriculture and fishing for food. This, along with the fact that they try to avoid potentially dangerous creatures such as lions, rhinos, and elephants, can explain why they use recognizably human vocalizations to call honeyguides. Human voices may scare these animals away, so Yao honey hunters can safely seek honey with their honeyguide partners. These findings show that cultural diversity has had a significant influence on calls to honeyguides.

While animals might not literally speak our language, the honeyguide is just one of many species that have their own ways of communicating with us. Some can even learn our cultural traditions.

“Cultural traditions of consistent behavior are widespread in non-human animals and could plausibly mediate other forms of interspecies cooperation,” the researchers said in the same study.

Honeyguides start guiding humans as soon as they begin to fly, and this knack, combined with learning to answer traditional calls and collaborate with honey hunters, works well for both human and bird. Maybe they are (in a way) speaking our language.

Science, 2023. DOI: 10.1126/science.adh412
