neurobiology


How do brainless creatures control their appetites?

Feed me! —

Separate systems register when the animals have eaten and control feeding behaviors.

Image of a greenish creature with a long stalk and tentacles, against a black background.

The hydra is a Lovecraftian-looking little freshwater animal with a mouth surrounded by tentacles on one end, an elongated body, and a foot on the other end. It has no brain or centralized nervous system. Despite lacking both, it can still feel hunger and fullness. How can these creatures know when they are hungry and realize when they have had enough?

While they lack brains, hydra do have a nervous system. Researchers from Kiel University in Germany found that they have an endodermal (in the digestive tract) and an ectodermal (in the outermost layer of the animal) neuronal population, both of which help them react to food stimuli. Ectodermal neurons control functions such as moving toward food, while endodermal neurons are associated with feeding behavior such as opening the mouth—which doubles as the exit for anything indigestible.

Even such a limited nervous system is capable of some surprisingly complex functions. Hydras might even give us some insights into how appetite evolved and what the early evolutionary stages of a central nervous system were like.

No, thanks, I’m full

Before finding out how the hydra’s nervous system controls hunger, the researchers focused on what causes the strongest feeling of satiety, or fullness, in the animals. They were fed with the brine shrimp Artemia salina, which is among their usual prey, and exposed to the antioxidant glutathione. Previous studies have suggested that glutathione triggers feeding behavior in hydras, causing them to curl their tentacles toward their mouths as if they are swallowing prey.

Hydra fed with as much Artemia as they could eat were given glutathione afterward, while the other group was given only glutathione and no actual food. Hunger was gauged by how fast and how often the animals opened their mouths.

It turned out that the first group, which had already glutted themselves on shrimp, showed hardly any response to glutathione eight hours after being fed. Their mouths barely opened, and opened only slowly when they did, because the animals were not hungry enough for even a feeding trigger like glutathione to make them feel they needed seconds.

It was only at 14 hours post-feeding that the hydra that had eaten shrimp opened their mouths wide enough and fast enough to indicate hunger. However, those that were not fed and only exposed to glutathione started showing signs of hunger only four hours after exposure. Mouth opening was not the only behavior provoked by hunger since starved animals also somersaulted through the water and moved toward light, behaviors associated with searching for food. Sated animals would stop somersaulting and cling to the wall of the tank they were in until they were hungry again.

Food on the “brain”

After observing the behavioral changes in the hydra, the research team looked into the neuronal activity behind those behaviors. They focused on two neuronal populations, the ectodermal population known as N3 and the endodermal population known as N4, both known to be involved in hunger and satiety. While these had been known to influence hydra feeding responses, how exactly they were involved was unknown until now.

Hydra have N3 neurons all over their bodies, especially in the foot. Signals from these neurons tell the animal that it has eaten enough and is experiencing satiety. The frequency of these signals decreased as the animals grew hungrier and displayed more behaviors associated with hunger. The frequency of N3 signals did not change in animals that were only exposed to glutathione and not fed, and these hydra behaved just like animals that had gone without food for an extended period of time. It was only when they were given actual food that the N3 signal frequency increased.

“The ectodermal neuronal population N3 is not only responding to satiety by increasing neuronal activity, but is also controlling behaviors that changed due to feeding,” the researchers said in their study, which was recently published in Cell Reports.

Though N4 neurons were only seen to communicate indirectly with the N3 population in the presence of food, they were found to influence eating behavior by regulating how wide the hydra opened their mouths and how long they kept them open. A lower frequency of N4 signals was seen in hydra that were starved or only exposed to glutathione, while a higher frequency of N4 signals was associated with the animals keeping their mouths shut.

So, what can the neuronal activity of a tiny, brainless creature possibly tell us about the evolution of our own complex brains?

The researchers think the hydra’s simple nervous system may parallel the much more complex central and enteric (in the gut) nervous systems that we have. While N3 and N4 operate independently, there is still some interaction between them. The team also suggests that the way N4 regulates the hydra’s eating behavior is similar to the way the digestive tracts of mammals are regulated.

“A similar architecture of neuronal circuits controlling appetite/satiety can be also found in mice where enteric neurons, together with the central nervous system, control mouth opening,” they said in the same study.

Maybe, in a way, we really do think with our gut.

Cell Reports, 2024. DOI: 10.1016/j.celrep.2024.114210


Mutations in a non-coding gene associated with intellectual disability

Splice of life —

A gene that only makes an RNA is linked to neurodevelopmental problems.

The spliceosome is a large complex of proteins and RNAs.

Almost 1,500 genes have been implicated in intellectual disabilities; yet for most people with such disabilities, genetic causes remain unknown. Perhaps this is in part because geneticists have been focusing on the wrong stretches of DNA when they go searching. To rectify this, Ernest Turro—a biostatistician who focuses on genetics, genomics, and molecular diagnostics—used whole genome sequencing data from the 100,000 Genomes Project to search for areas associated with intellectual disabilities.

His lab found a genetic association that is the most common one yet linked to neurodevelopmental abnormality. And the gene they identified doesn’t even make a protein.

Trouble with the spliceosome

Most genes include instructions for how to make proteins. That’s true. And yet human genes are not arranged linearly—or rather, they are arranged linearly, but not contiguously. A gene containing the instructions for which amino acids to string together to make a particular protein—hemoglobin, insulin, collagen, albumin, whatever protein you like—is modular. It contains part of the amino acid sequence, then it has a chunk of DNA that is largely irrelevant to that sequence, then a bit more of the protein’s sequence, then another chunk of random DNA, back and forth until the end of the protein. It’s as if each of these prose paragraphs were separated by a string of unrelated letters (but not a meaningful paragraph from a different article).

In order to read this piece through coherently, you’d have to take out the letters interspersed between its paragraphs. And that’s exactly what happens with genes. In order to read the gene through coherently, the cell has machinery that splices out the intervening sequences and links up the protein-making instructions into a continuous whole. (This doesn’t happen in the DNA itself; it happens to an RNA copy of the gene.) This machinery is called, fittingly, the spliceosome.
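If it helps to see that cut-and-join step spelled out, here is a purely illustrative sketch in Python. The sequences and the simple pattern matching are invented for the example (real splice-site recognition is far more elaborate); the point is just that the interrupting stretches are removed and the message segments are joined back into a readable whole.

```python
import re

# Toy model of splicing (invented sequences, not from the study): exons carry
# the protein-coding message, introns interrupt it. As a simplified nod to real
# biology, the made-up introns are lowercase and start with "gt" / end with "ag".
exons = ["ATGGCT", "TTCGGA", "TAA"]             # the coding "paragraphs"
introns = ["gtaagtacacacag", "gtcagtacacacag"]  # the interspersed "letters"

# The unspliced RNA copy of the gene alternates exon, intron, exon, intron, exon.
pre_mrna = exons[0] + introns[0] + exons[1] + introns[1] + exons[2]

# What the spliceosome does, conceptually: cut out each intron and join the
# exons into one continuous, readable message.
mature_mrna = re.sub(r"gt[acgt]*ag", "", pre_mrna)

print(pre_mrna)      # the interrupted message
print(mature_mrna)   # ATGGCTTTCGGATAA: contiguous protein-making instructions
assert mature_mrna == "".join(exons)
```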

About a hundred proteins make up the spliceosome. But the gene just found to be so strongly associated with neurodevelopmental disorders doesn’t encode any of them. Rather, it encodes one of five RNA molecules that are also part of the spliceosome complex and that interact with the RNAs being spliced. Mutations in this gene were found to be associated with a syndrome whose symptoms include intellectual disability, seizures, short stature, neurodevelopmental delay, drooling, motor delay, hypotonia (low muscle tone), and microcephaly (an unusually small head).

Supporting data

The researchers buttressed their finding by examining three other databases; in all of them, they found more people with the syndrome who had mutations in this same gene. The mutations occur in a remarkably conserved region of the genome, suggesting that the region is very important. Most of the mutations arose anew in the affected people—i.e., they were not inherited from their parents—but one particular mutation was inherited in a single case. Based on this, the researchers concluded that this variant may cause a less severe disorder than the other mutations.

Many studies that look for genes associated with diseases have focused on searching catalogs of protein-coding genes. These results suggest that we could have been missing important mutations because of this focus.

Nature Medicine, 2024. DOI: 10.1038/s41591-024-03085-5


Chemical tweaks to a toad hallucinogen turn it into a potential drug

No licking toads! —

Targets a different serotonin receptor from other popular hallucinogens.

The Colorado River toad, also known as the Sonoran Desert toad.

It is becoming increasingly accepted that classic psychedelics like LSD, psilocybin, ayahuasca, and mescaline can act as antidepressants and anti-anxiety treatments in addition to causing hallucinations. They act by binding to a serotonin receptor. But there are 14 known types of serotonin receptors, and most of the research into these compounds has focused on only one of them—the one these molecules like, called 5-HT2A. (5-HT, short for 5-hydroxytryptamine, is the chemical name for serotonin.)

The Colorado River toad (Incilius alvarius), also known as the Sonoran Desert toad, secretes a psychedelic compound that likes to bind to a different serotonin receptor subtype called 5-HT1A. And that difference may be the key to developing an entirely distinct class of antidepressants.

Uncovering novel biology

Like other psychedelics, the one the toad produces decreases depression and anxiety and induces meaningful and spiritually significant experiences. It has been used clinically to treat veterans with post-traumatic stress disorder and is being developed as a treatment for other neurological disorders and drug abuse. 5-HT1A is a validated therapeutic target, as approved drugs, including the antidepressant Viibryd and the anti-anxiety med Buspar, bind to it. But little is known about how psychedelics engage with this receptor and which effects it mediates, so Daniel Wacker’s lab decided to look into it.

The researchers started by making chemical modifications to the toad psychedelic and noting how each of the tweaked molecules bound to both 5-HT2A and 5-HT1A. As a group, these psychedelics are known as “designer tryptamines”—that’s tryp with a “y,” mind you—because they are derived from the amino acid tryptophan.

The lab made 10 variants and found one that is more than 800-fold selective about sticking to 5-HT1A as compared to 5-HT2A. That makes it a great research tool for elucidating the structure-activity relationship of the 5-HT1A receptor, as well as the molecular mechanisms behind the pharmacology of the drugs on the market that bind to it. The lab used it to explore both of those avenues. However, the variant’s ultimate utility might be as a new therapeutic for psychiatric disorders, so they tested it in mice.

Improving the lives of mice

The compound did not induce hallucinations in mice, as measured by the “head-twitch response.” But it did alleviate depression, as measured by a “chronic social defeat stress model.” In this model, for 10 days in a row, the experimental mouse was introduced to an “aggressor mouse” for “10-minute defeat bouts”; essentially, it got beat up by a bully at recess for two weeks. Understandably, after this experience, the experimental mouse tended not to be that friendly with new mice, as controls usually are. But when injected with the modified toad psychedelic, the bullied mice were more likely to interact positively with new mice they met.

Depressed mice, like depressed people, also suffer from anhedonia: a reduced ability to experience pleasure. In mice, this manifests as a failure to take advantage of sugar water when it’s available. But treated bullied mice regained their preference for the sweet drink. About a third of mice seem to be “stress-resilient” in this model; the bullying doesn’t seem to faze them. The drug increased the number of resilient mice.

The 5-HT2A receptor has hogged all of the research love because it mediates the hallucinogenic effects of many popular psychedelics, so people assumed that it must mediate their therapeutic effects, too. However, Wacker argues that there is little evidence supporting this assumption. Wacker’s new toad-based psychedelic variant and its preference for the 5-HT1A receptor will help elucidate the complementary roles these two receptor subtypes play in mediating the cellular and psychological effects of psychedelic molecules. And it might provide the basis for a new tryptamine-based mental health treatment as well—one without hallucinatory side effects, disappointing as that may be to some.

Nature, 2024.  DOI: 10.1038/s41586-024-07403-2


The science of smell is fragrant with submolecules

Did you smell something? —

A chemical that we smell may be a composite of multiple smell-making pieces.

cartoon of roses being smelled, with the nasal passages, neurons, and brain visible through cutaways.

When we catch a whiff of perfume or indulge in a scented candle, we are smelling much more than Floral Fantasy or Lavender Vanilla. We are actually detecting odor molecules that enter our nose and interact with cells that send signals to be processed by our brain. While certain smells feel unchanging, the complexity of this system means that large odorant molecules are perceived as the sum of their parts—and that we can perceive the exact same molecule as a different smell depending on what we have sniffed recently.

Smell is more complex than we might think. It doesn’t consist of simply detecting specific molecules. Researcher Wen Zhou and his team from the Institute of Psychology of the Chinese Academy of Sciences have now found that parts of our brains analyze smaller parts of the odor molecules that make things smell.

Smells like…

So how do we smell? Odor molecules that enter our noses stimulate olfactory sensory neurons. They do this by binding to odorant receptors on those neurons (each neuron makes only one of approximately 500 different odorant receptors). Smelling something activates different neurons depending on which molecules are in that smell and which receptors they interact with. Neurons in the piriform cortex of the brain then take the information from the sensory neurons and interpret it as a message that makes us smell vanilla. Or a bouquet of flowers. Or whatever else.

Odor molecules were previously thought to be coded only as whole molecules, but Zhou and his colleagues wanted to see whether the brain’s analysis of odors could register something less than a complete molecule. They reasoned that if only whole molecules are coded, then after being exposed to a part of an odorant molecule, test subjects would smell the original molecule exactly the same way. If, by contrast, the brain were able to pick up on the smell of a molecule’s substructures, neurons would adapt to that substructure, and when re-exposed to the original molecule, subjects would not sense the adapted part nearly as strongly.

“If [sub-molecular factors are part of our perception of an odor]—the percept[ion] and its neural representation would be shifted towards those of the unadapted part of that compound,” the researchers said in a study recently published in Nature Human Behavior.

Doesn’t smell like…

To see whether their hypothesis held up, Zhou’s team presented test subjects with a compound abbreviated CP, its separate components C and P, and an unrelated component, U. P and U were supposed to have equal aromatic intensity despite being different scents.

In one session, subjects smelled CP and then sniffed P until they had adapted to it. When they smelled CP again, they reported it smelling more like C than P. Despite being exposed to the entire molecule, they were mostly smelling C, which was unadapted. In another session, subjects adapted to U, after which there was no change in how they perceived CP. So, the effect is specific to smelling a portion of the odorant molecule.

In yet another experiment, subjects were told to first smell CP and then adapt to the smell of P with just one nostril while keeping the other nostril closed. Once adapted, CP and C smelled similar, but only when sniffed through the nostril that had been open. The two smelled far more distinct through the nostril that had been closed.

Previous research has shown that adaptation to odors takes place in the piriform cortex. Substructure adaptation causes this part of the brain to respond differently to the portions of a chemical that the nose has recently been exposed to.

This olfactory experiment showed that our brains perceive smells by doing more than just recognizing the presence of a whole odor molecule. Some molecules are instead processed as a collection of submolecular units, each of which is perceived separately.

“The smells we perceived are the products of continuous analysis and synthesis in the olfactory system,” the team said in the same study, “breath by breath, of the structural features and relationships of volatile compounds in our ever-changing chemical environment.”

Nature Human Behaviour, 2024.  DOI: 10.1038/s41562-024-01849-0


DNA parasite now plays key role in making critical nerve cell protein

Domesticated viruses —

An RNA has been adopted to help produce a key protein in myelin, the insulation around nerves.

Graphic depiction of a nerve cell with a myelin coated axon.

Human brains (and the brains of other vertebrates) are able to process information faster because of myelin, a fatty substance that forms a protective sheath over the axons of our nerve cells and speeds up their impulses. How did our neurons evolve myelin sheaths? Part of the answer—which was unknown until now—almost sounds like science fiction.

Led by scientists from Altos Labs-Cambridge Institute of Science, a team of researchers has uncovered a bit of the gnarly past of how myelin ended up covering vertebrate neurons: a molecular parasite has been messing with our genes. Sequences derived from an ancient virus help regulate a gene that encodes a component of myelin, helping explain why vertebrates have an edge when it comes to their brains.

Prehistoric infection

Myelin is a fatty material produced by oligodendrocyte cells in the central nervous system and Schwann cells in the peripheral nervous system. Its insulating properties allow neurons to zap impulses to one another at faster speeds and over greater distances. Our brains can be complex in part because myelin enables longer, narrower axons, which means more nerves can be packed together.

The un-myelinated brain cells of many invertebrates often need to rely on wider—and therefore fewer—axons for impulse conduction. Rapid impulse conduction makes quicker reactions possible, whether that means fleeing danger or capturing prey.

So, how do we make myelin? A key player in its production appears to be a type of molecular parasite called a retrotransposon.

Like other transposons, retrotransposons can move to new locations in the genome through an RNA intermediate. However, most retrotransposons in our genome have picked up too many mutations to move about anymore.

RNLTR12-int is a retrotransposon that is thought to have originally entered our ancestors’ genome as a virus. Rat genomes now have over 100 copies of the retrotransposon.

An RNA made by RNLTR12-int helps produce myelin by binding to a transcription factor called SOX10, a protein that regulates the activity of other genes. The RNA/protein combination binds to DNA near the gene for myelin basic protein, or MBP, a major component of myelin.

“MBP is essential for the membrane growth and compression of [central nervous system] myelin,” the researchers said in a study recently published in Cell.

Technical knockout

To find out whether RNLTR12-int really was behind the regulation of MBP and, therefore, myelin production, the research team had to knock its level down and see if myelination still happened. They first experimented on rat brains before moving on to zebrafish and frogs.

When they inhibited RNLTR12-int, the results were drastic. In the central nervous system, genetically edited rats produced 98 percent less MBP than those where the gene was left unedited. The absence of RNLTR12-int also caused the oligodendrocytes that produce myelin to develop much simpler structures than they would normally form. When RNLTR12-int was knocked out in the peripheral nervous system, it reduced myelin produced by Schwann cells.

The researchers used a SOX10 antibody to show that SOX10 bound to the RNLTR12-int transcript in vivo. This was an important result, since there are lots of non-coding RNAs made by cells, and it wasn’t clear whether any RNA would work or if it was specific to RNLTR12-int.

Do these results hold up in other jawed vertebrates? Using CRISPR-Cas9 to perform knockout tests with retrotransposons related to RNLTR12-int in frogs and zebrafish showed similar results.

Myelination has enriched the vertebrate brain so it can work like never before. It’s one reason the term “brain food” is nearly literal: healthy fats are important for our brains partly because myelin is largely made of lipids. Think about that next time you’re pulling an all-nighter while reaching for a handful of nuts.

Cell, 2024. DOI: 10.1016/j.cell.2024.01.011


Corvids seem to handle temporary memories the way we do

Working on memory —

Birds show evidence that they lump temporary memories into categories.

A jackdaw tries to remember what color it was thinking of.

Humans tend to think that we are the most intelligent life-forms on Earth, followed closely by our near relatives such as chimps and gorillas. But there are some areas of cognition in which Homo sapiens and other primates are not unmatched. What other animal’s brain could possibly operate at a human’s level, at least when it comes to one function? Birds—again.

This is far from the first time that bird groups such as corvids and parrots have shown that they can think like us in certain ways. Jackdaws are clever corvids that belong to the same family as crows and ravens. After putting a pair of them to the test, an international team of researchers saw that the birds’ working memory operates the same way as that of humans and higher primates. All of these species use what’s termed “attractor dynamics,” where they organize information into specific categories.

Unfortunately for them, that means they also make the same mistakes we do. “Jackdaws (Corvus monedula) have similar behavioral biases as humans; memories are less precise and more biased as memory demands increase,” the researchers said in a study recently published in Communications Biology.

Remembering not to forget

Working memory is where we hang on to items for a brief period of time—like a postal code looked up in one browser tab and typed into a second. It can hold everything from numbers and words to images and concepts. But these memories deteriorate quickly, and the capacity is limited—the more things we try to remember, the less likely the brain is to remember them all correctly.

Attractor dynamics give the brain an assist with working memory by taking sensory input, such as color, and categorizing it. The highly specific red shade “Fire Lily” might fade from working memory quickly, and fewer specifics will stick around as time passes, yet it will still be remembered as “red.” You lose specifics first, but hang on to the general idea longer.

Aside from time, the other thing that kills working memory is distractions. Less noise—meaning distracting factors inside and outside the brain—will make it easier to distinguish Fire Lily among the other reds. If a hypothetical customer was browsing paint swatches for Sandstone (a taupe) and London Fog (a gray) in addition to Fire Lily, remembering each color accurately would become even more difficult because of the increased demands on working memory.

Bias can also blur working memory and cause the brain to remember some red hues more accurately than others, especially if the brain compartmentalizes them all under “red.” This can happen when a particular customer has a certain idea of the color red that leans warmer or cooler than Fire Lily. If they view red as leaning slightly warmer than Fire Lily, they might believe a different, warmer red is Fire Lily.
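As a purely illustrative sketch (invented numbers and a deliberately simplified model, not the study’s analysis), the core idea of attractor dynamics can be written in a few lines of Python: a remembered hue picks up noise over time while also drifting toward the nearest stored category, so the specific shade degrades but the general “red” survives, along with any bias in where that category sits.

```python
import random

# Toy sketch of attractor dynamics in working memory (invented numbers, not the
# study's model). Hues are angles on a simplified color wheel, and each category
# ("red", "green", "blue") acts as an attractor that pulls memories toward it.
CATEGORY_ATTRACTORS = [0.0, 120.0, 240.0]

def nearest_attractor(hue):
    """Return the category center closest to the remembered hue."""
    return min(CATEGORY_ATTRACTORS, key=lambda center: abs(center - hue))

def remember(hue, steps, pull=0.15, noise=3.0):
    """Each time step, the memory gains random jitter (lost precision) and
    drifts toward its category center (growing bias)."""
    memory = hue
    for _ in range(steps):
        center = nearest_attractor(memory)
        memory += pull * (center - memory) + random.gauss(0, noise)
    return memory

random.seed(1)
target = 25.0  # a specific warm red, a stand-in for "Fire Lily"
print(remember(target, steps=2))   # shortly after: still close to the exact shade
print(remember(target, steps=20))  # much later: noisy and pulled toward plain "red" (0)
```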

In living color

To find out if corvids process stimuli using short-term memory with attractor dynamics, the researchers subjected two jackdaws to a variety of tests that involved remembering colors. Each bird had to peck on a white button to begin the test. They were then shown a color—the target color—before being shown a chart of 64 colors. The jackdaws had to look at that chart and peck the color they had previously been shown. A correct answer would get them their favorite treat, while responses that were close but not completely accurate would get them other treats.

While the birds performed well with just one color, their accuracy went down as the researchers challenged them to remember more target colors from the chart at once. They were more likely to pick colors that were close to, but not exactly, the target colors they had been shown—likely because there was a greater load on their short-term memory.

This is what we’d see if a customer had to remember not only Fire Lily, but Sandstone and London Fog. The only difference is that we humans would be able to read the color names, and the jackdaws only found out they were wrong when they didn’t get their favorite treat.

“Despite vastly different visual systems and brain organizations, corvids and primates show similar attractor dynamics, which can mitigate noise in visual working memory representations,” the researchers said in the same study.

How and why birds evolved attractor dynamics still needs to be understood. Because avian eyesight differs from human eyesight, there could have been differences in color perception that the research team was unable to account for. However, it seems that the same mechanisms for working memory that evolved in humans and other primates also evolved separately in corvids. “Birdbrain” should be taken as a compliment.

Communications Biology, 2023. DOI:  10.1038/s42003-023-05442-5


Human brain cells put much more energy into signaling

Being human is hard —

Signaling molecules help modulate the brain’s overall activity.

Image of a person staring pensively, with question marks drawn on the wall behind him.

Indian elephants have larger brains than we do (obviously). Mice have a higher brain-to-body mass ratio, and long-finned pilot whales have more neurons. So what makes humans—and more specifically, human brains—special?

As far as organs go, human brains certainly consume a ton of energy—almost 50 grams of sugar, or 12 lumps, every day. This is one of the highest energy demands relative to body metabolism known among species. But what uses up all of this energy? If the human brain is the predicted size and has the predicted number of neurons for a primate of its size, and each individual neuron uses comparable amounts of energy to those in other mammals, then its energy use shouldn’t be exceptional.

The cost of signaling

A group of neuroscientists speculated that maybe the amount of signaling that takes place within the human brain accounts for its heightened energy needs. A consequence of this would be that brain regions that are more highly connected and do more signaling should use more energy.

To test their hypothesis, the scientists started by imaging the brains of 30 healthy, right-handed volunteers between 20 and 50 years old. The imaging took place at two separate institutions, and it allowed the researchers to correlate a given brain region’s energy use (as measured by glucose metabolism) with its level of signaling and connectivity. They found that energy use and signaling scaled in tandem in all 30 brains. But certain regions stuck out: signaling pathways in some areas of the cortex, the brain’s outer layer, require almost 70 percent more energy than those in sensory-motor regions.

The frontal cortex is one of the regions that expanded the most during human evolution. According to Robert Sapolsky, “What the prefrontal cortex is most about is making tough decisions in the face of temptation—gratification postponement, long-term planning, impulse control, emotional regulation. The PFC is essential for getting you to do the right thing when it is the harder thing to do.” This is the stuff that humans must constantly contend with. And energetically, it is extraordinarily costly.

Increased modulation is also key for cognition

It is not only the signaling itself that takes energy. Modulating that signaling, ensuring that it occurs at the appropriate levels and only at the appropriate times, costs energy too.

Using the Allen Human Brain Atlas, these researchers looked at gene activity in the frontal cortex. They found elevated activity of neuromodulators and their receptors. The authors note that “the human brain spends excessive energy on the long-lasting regulation of (fast) neurotransmission with (slow) neuromodulators such as serotonin, dopamine, or noradrenaline.” And also endogenous opiates. “This effect is more about setting the tone of general excitability than transferring individual bits of information,” they write.

Once they correlated energy use to signaling and slow-acting neuromodulation in the cortex, the last thing the scientists did was look at the Neurosynth project, which maps cognitive functions to brain regions. Lo and behold, the energy-hogging, highly connected, strongly modulated, and evolutionarily expanded parts of the cortex are the same ones involved in complex functions like memory processing, reading, and cognitive inhibition. This supports their idea of “an expensive signaling architecture being dedicated to human cognition.”

Science Advances, 2023.  DOI: 10.1126/sciadv.adi7632


What happens in a crow’s brain when it uses tools? 

This is your brain on tools —

Researchers trace the areas of the brain that are active when birds are using tools.

Sure, they can use tools, but do they know where the nearest subway stop is?

“A thirsty crow wanted water from a pitcher, so he filled it with pebbles to raise the water level to drink,” summarizes a famous Aesop Fable. While this tale is thousands of years old, animal behaviorists still use this challenge to study corvids (which include crows, ravens, jays, and magpies) and their use of tools. In a recent Nature Communications study, researchers from a collaboration of universities across Washington, Florida, and Utah used radioactive tracers within the brains of several American crows to see which parts of their brains were active when they used stones to obtain food from the bottom of a water-filled tube.

Their results indicate that the motor learning and tactile control centers were activated in the brains of the more proficient crows, while the sensory and higher-order processing centers lit up in the brains of less proficient crows. These results suggest that competence with tools is linked to certain memories and muscle control, which the researchers claimed is similar to a ski jumper visualizing the course before jumping.

The researchers also found that out of their avian test subjects, female crows were especially proficient at tool usage, succeeding in the challenge quickly. “[A] follow-up question is whether female crows actually have more need for creative thinking relative to male crows,” elaborates Loma Pendergraft, the study’s first author and a graduate student at the University of Washington, who wants to understand if the caregiving and less dominant role of female crows gives them a higher capacity for tool use.

While only two species of crow (the New Caledonian crow and the Hawaiian crow) inherently use twigs and sticks as foraging tools, this study also suggests that other crow species, like the American crow, have the neural flexibility to learn to use tools.

A less invasive look at bird brains

Due to their unique behaviors, complex social structures, and reported intelligence, crows have fascinated animal behaviorists for decades. Scientists can study crows’ brains in real time by using 18F-fluorodeoxyglucose (FDG), a radioactive tracer the researchers injected into the crows. They then use positron emission tomography (PET) scans to see which brain areas are activated during different tasks.

“FDG-PET is a method we use to remotely examine activity throughout the entire brain without needing to do any surgeries or implants,” explained Pendergraft. “It’s like [a functional] MRI.” The method is non-invasive, and because the crows aren’t required to sit still while the tracer is taken up, it minimizes the stress they feel during the experiment. In the Nature Communications study, Pendergraft and his team made sure the crows were anesthetized before scanning them.

FDG is also used in various medical imaging techniques, such as diagnosing Alzheimer’s disease or screening for cancerous tissue. “Basically, the body treats it as glucose, a substance needed for cells to stay alive,” Pendergraft added. “If a body part is working harder than normal, it’s going to need extra glucose to power the additional activity. This means we can measure relative FDG concentrations within the brain as a proxy for relative brain activity.”
