neurobiology

How should we treat beings that might be sentient?


Being aware of the maybe self-aware

A book argues that we’ve not thought enough about things that might think.

What rights should a creature with ambiguous self-awareness, like an octopus, be granted? Credit: A. Martin UW Photography

If you aren’t yet worried about the multitude of ways you inadvertently inflict suffering onto other living creatures, you will be after reading The Edge of Sentience by Jonathan Birch. And for good reason. Birch, a Professor of Philosophy at the London School of Economics and Political Science, was one of a team of experts chosen by the UK government to inform the Animal Welfare (Sentience) Act 2022—a law that protects animals whose sentience status is unclear.

According to Birch, even insects may possess sentience, which he defines as the capacity to have valenced experiences, or experiences that feel good or bad. At the very least, Birch explains, insects (as well as all vertebrates and a selection of invertebrates) are sentience candidates: animals that may be conscious and, until proven otherwise, should be regarded as such.

Although it might be a stretch to wrap our mammalian minds around insect sentience, it is not difficult to imagine that fellow vertebrates have the capacity to experience life, nor does it come as a surprise that even some invertebrates, such as octopuses and other cephalopod mollusks (squid, cuttlefish, and nautilus), qualify for sentience candidature. In fact, one species of octopus, Octopus vulgaris, has been protected by the UK’s Animals (Scientific Procedures) Act (ASPA) since 1986, which illustrates how long we have been aware of the possibility that invertebrates might be capable of experiencing valenced states of awareness, such as contentment, fear, pleasure, and pain.

A framework for fence-sitters

Non-human animals, of course, are not the only beings whose ambiguous sentience status poses complicated questions. Birch discusses people with disorders of consciousness, embryos and fetuses, neural organoids (brain tissue grown in a dish), and even “AI technologies that reproduce brain functions and/or mimic human behavior,” all of which share the unenviable position of being perched on the edge of sentience—a place where it is excruciatingly unclear whether or not these individuals are capable of conscious experience.

What’s needed, Birch argues, when faced with such staggering uncertainty about the sentience status of other beings, is a precautionary framework that outlines best practices for decision-making regarding their care. And in The Edge of Sentience, he provides exactly that, in meticulous, orderly detail.

Over more than 300 pages, he outlines three fundamental framework principles and 26 specific case proposals about how to handle complex situations related to the care and treatment of sentience-edgers. For example, Proposal 2 cautions that “a patient with a prolonged disorder of consciousness should not be assumed incapable of experience” and suggests that medical decisions made on their behalf cautiously presume they are capable of feeling pain. Proposal 16 warns about conflating brain size, intelligence, and sentience, and recommends decoupling the three so that we do not incorrectly assume that small-brained animals are incapable of conscious experience.

Surgeries and stem cells

Be forewarned, some topics in The Edge of Sentience are difficult. For example, Chapter 10 covers embryos and fetuses. In the 1980s, Birch shares, it was common practice to not use anesthesia on newborn babies or fetuses when performing surgery. Why? Because whether or not newborns and fetuses experience pain was up for debate. Rather than put newborns and fetuses through the risks associated with anesthesia, it was accepted practice to give them a paralytic (which prevents all movement) and carry on with invasive procedures, up to and including heart surgery.

After parents raised alarms over the devastating outcomes of this practice, such as infant mortality, it was eventually changed. Birch’s takeaway message is clear: When in doubt about the sentience status of a living being, we should probably assume it is capable of experiencing pain and take all necessary precautions to prevent it from suffering. To presume the opposite can be unethical.

This guidance is repeated throughout the book. Neural organoids, discussed in Chapter 11, are mini-models of brains developed from stem cells. The potential for scientists to use neural organoids to unravel the mechanisms of debilitating neurological conditions—and to avoid invasive animal research while doing so—is immense. It is also ethical, Birch posits, since studying organoids lessens the suffering of research animals. However, we don’t yet know whether or not neural tissue grown in a dish has the potential to develop sentience, so he argues that we need to develop a precautionary approach that balances the benefits of reduced animal research against the risk that neural organoids are capable of being sentient.

A four-pronged test

Along this same line, Birch says, all welfare decisions regarding sentience-edgers require an assessment of proportionality. We must weigh a proposed response to a risk facing a sentience candidate against the potential harms that could result if nothing is done to minimize that risk. To do this, he suggests testing four criteria: permissibility-in-principle, adequacy, reasonable necessity, and consistency. Birch refers to this assessment process as PARC and dives deep into its implementation in Chapter 8.

When applying the PARC criteria, one begins by testing permissibility-in-principle: whether or not the proposed response to a risk is ethically permissible. To illustrate this, Birch poses a hypothetical question: Would it be ethically permissible to mandate vaccination in response to a pandemic? If a panel of citizens were in charge of answering this question, they might say “no,” because forcing people to be vaccinated feels unethical. Yet, when faced with the same question, a panel of experts might say “yes,” because allowing people to die who could be saved by vaccination also feels unethical. Gauging permissibility-in-principle, therefore, entails careful consideration of the possible outcomes of a proposed response. If an outcome is deemed ethical, it is permissible.

Next, the adequacy of a proposed response must be tested. A proportionate response to a risk must do enough to lessen the risk. This means the risk must be reduced to “an acceptable level” or, if that’s not possible, a response should “deliver the best level of risk reduction that can be achieved” via an ethically permissible option.

The third test is reasonable necessity. A proposed response to a risk must not overshoot—it should not go beyond what is reasonably necessary to reduce risk, in terms of either cost or imposed harm. And last, consistency should be considered. The example Birch presents is animal welfare policy. He suggests we should always “aim for taxonomic consistency: our treatment of one group of animals (e.g., vertebrates) should be consistent with our treatment of another (e.g., invertebrates).”
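
For readers who like to see a procedure spelled out, the four PARC tests can be pictured as a sequence of gates that a proposed response must clear in order. The sketch below is only an illustration of that logic, not code from the book; the fields, thresholds, and example policy are invented stand-ins for the deliberation a citizens' or expert panel would actually perform.

```python
# Illustrative sketch of the PARC sequence (not from the book): a proposed
# response to a welfare risk must pass all four gates, in order, to count
# as proportionate.
from dataclasses import dataclass

@dataclass
class ProposedResponse:
    description: str
    ethically_permissible: bool  # P: could this response ever be ethical in principle?
    risk_reduction: float        # fraction of the risk the response removes (0.0 to 1.0)
    best_achievable: float       # best reduction available among permissible options
    overshoots: bool             # imposes cost or harm beyond what risk reduction requires
    consistent: bool             # treats like cases alike (e.g., taxonomic consistency)

def is_proportionate(r: ProposedResponse, acceptable_level: float = 0.8) -> bool:
    if not r.ethically_permissible:                      # Permissibility-in-principle
        return False
    adequate = (r.risk_reduction >= acceptable_level or  # Adequacy: reach an acceptable level,
                r.risk_reduction >= r.best_achievable)   # or the best permissible option
    if not adequate:
        return False
    if r.overshoots:                                     # Reasonable necessity
        return False
    return r.consistent                                  # Consistency

# Hypothetical example with made-up inputs:
policy = ProposedResponse("hypothetical welfare regulation", True, 0.9, 0.95, False, True)
print(is_proportionate(policy))  # True under these invented numbers
```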

The Edge of Sentience, as a whole, is a dense text overflowing with philosophical rhetoric. Yet this rhetoric plays a crucial role in the storytelling: it is the backbone for Birch’s clear and organized conclusions, and it serves as a jumping-off point for the logical progression of his arguments. Much like “I think, therefore I am” gave René Descartes a foundation upon which to build his idea of substance dualism, Birch uses the fundamental position that humans should not inflict gratuitous suffering onto fellow creatures as a base upon which to build his precautionary framework.

For curious readers who would prefer not to wade too deeply into meaty philosophical concepts, Birch generously provides a shortcut to his conclusions: a cheat sheet of his framework principles and special case proposals is presented at the front of the book.

Birch’s ultimate message in The Edge of Sentience is that we need a massive shift in how we view beings with a questionable sentience status. And we should ideally make this change now, rather than waiting for scientific research to infallibly determine who and what is sentient. Birch argues that one way that citizens and policy-makers can begin this process is by adopting the following decision-making framework: always avoid inflicting gratuitous suffering on sentience candidates; take precautions when making decisions regarding a sentience candidate; and make proportional decisions about the care of sentience candidates that are “informed, democratic and inclusive.”

You might be tempted to shake your head at Birch’s confidence in humanity. No matter how deeply you agree with his stance of doing no harm, it’s hard to have confidence in humanity given our track record of not making big changes for the benefit of living creatures, even when said creatures include our own species (cue global warming here). It seems excruciatingly unlikely that the entire world will adopt Birch’s rational, thoughtful, comprehensive plan for reducing the suffering of all potentially sentient creatures. Yet Birch, a philosopher at heart, ignores human history and maintains a tone of articulate, patient optimism. He clearly believes in us—he knows we can do better—and he offers to hold our hands and walk us through the steps to do so.

Lindsey Laughlin is a science writer and freelance journalist who lives in Portland, Oregon, with her husband and four children. She earned her BS from UC Davis with majors in physics, neuroscience, and philosophy.

Tweaking non-neural brain cells can cause memories to fade


Neurons and a second cell type called an astrocyte collaborate to hold memories.

Astrocytes (labelled in black) sit within a field of neurons. Credit: Ed Reschke

“If we go back to the early 1900s, this is when the idea was first proposed that memories are physically stored in some location within the brain,” says Michael R. Williamson, a researcher at the Baylor College of Medicine in Houston. For a long time, neuroscientists thought that the storage of memory in the brain was the job of engrams, ensembles of neurons that activate during a learning event. But it turned out this wasn’t the whole picture.

Williamson’s research investigated the role astrocytes, non-neuron brain cells, play in the read-and-write operations that go on in our heads. “Over the last 20 years the role of astrocytes has been understood better. We’ve learned that they can activate neurons. The addition we have made to that is showing that there are subsets of astrocytes that are active and involved in storing specific memories,” Williamson says in describing a new study his lab has published.

One consequence of this finding: Astrocytes could be artificially manipulated to suppress or enhance a specific memory, leaving all other memories intact.

Marking star cells

Astrocytes, otherwise known as star cells due to their shape, play various roles in the brain, and many are focused on the health and activity of their neighboring neurons. Williamson’s team started by developing techniques that enabled them to mark chosen ensembles of astrocytes to see when they activate genes (including one named c-Fos) that help neurons reconfigure their connections and are deemed crucial for memory formation. This was based on the idea that the same pathway would be active in neurons and astrocytes.

“In simple terms, we use genetic tools that allow us to inject mice with a drug that artificially makes astrocytes express some other gene or protein of interest when they become active,” says Wookbong Kwon, a biotechnologist at Baylor College and co-author of the study.

Those proteins of interest were mainly fluorescent proteins that make cells fluoresce bright red. This way, the team could spot the astrocytes in mouse brains that became active during learning scenarios. Once the tagging system was in place, Williamson and his colleagues gave their mice a little scare.

“It’s called fear conditioning, and it’s a really simple idea. You take a mouse, put it into a new box, one it’s never seen before. While the mouse explores this new box, we just apply a series of electrical shocks through the floor,” Williamson explains. A mouse treated this way remembers this as an unpleasant experience and associates it with contextual cues like the box’s appearance, the smells and sounds present, and so on.

The tagging system lit up all astrocytes that expressed the c-Fos gene in response to fear conditioning. Williamson’s team inferred that this is where the memory is stored in the mouse’s brain. Knowing that, they could move on to the next question, which was if and how astrocytes and engram neurons interacted during this process.

Modulating engram neurons

“Astrocytes are really bushy,” Williamson says. They have a complex morphology with lots and lots of micro or nanoscale processes that infiltrate the area surrounding them. A single astrocyte can contact roughly 100,000 synapses, and not all of them will be involved in learning events. So the team looked for correlations between astrocytes activated during memory formation and the neurons that were tagged at the same time.

“When we did that, we saw that engram neurons tended to be contacting the astrocytes that are active during the formation of the same memory,” Williamson says. To see how astrocytes’ activity affects neurons, the team artificially stimulated the astrocytes by microinjecting them with a virus engineered to induce the expression of the c-Fos gene. “It directly increased the activity of engram neurons but did not increase the activity of non-engram neurons in contact with the same astrocyte,” Williamson explains.

This way his team established that at least some astrocytes could preferentially communicate with engram neurons. The researchers also noticed that astrocytes involved in memorizing the fear conditioning event had elevated levels of a protein called NFIA, which is known to regulate memory circuits in the hippocampus.

But probably the most striking discovery came when the researchers tested whether the astrocytes involved in memorizing an event also played a role in recalling it later.

Selectively forgetting

The first test to see if astrocytes were involved in recall was to artificially activate them when the mice were in a box that they were not conditioned to fear. It turned out that artificially activating the astrocytes that had been active during the formation of a fear memory in one box caused the mice to freeze even when they were in a different one.

So, the next question was: if you killed or otherwise disabled an astrocyte ensemble that was active during the formation of a specific memory, would that delete the memory from the brain? To get that done, the team used their genetic tools to selectively delete the NFIA protein in astrocytes that were active when the mice received their electric shocks. “We found that mice froze a lot less when we put them in the boxes they were conditioned to fear. They could not remember. But other memories were intact,” Kwon claims.

The memory was not completely deleted, though. The mice still froze in the boxes they were supposed to freeze in, but they did it for a much shorter time on average. “It looked like their memory was maybe a bit foggy. They were not sure if they were in the right place,” Williamson says.

After figuring out how to suppress a memory, the team also found the “undo” button and brought the memory back.

“When we deleted the NFIA protein in astrocytes, the memory was impaired, but the engram neurons were intact. So, the memory was still somewhere there. The mice just couldn’t access it,” Williamson claims. The team brought the memory back by artificially stimulating the engram neurons using the same technique they employed for activating chosen astrocytes. “That caused the neurons involved in this memory trace to be activated for a few hours. This artificial activity allowed the mice to remember it again,” Williamson says.

The team’s vision is that in the distant future this technique could be used in treatments targeting neurons that are overactive in disorders such as PTSD. “We now have a new cellular target that we can evaluate and potentially develop treatments that target the astrocyte component associated with memory,” Williamson claims. But there’s a lot more to learn before anything like that becomes possible. “We don’t yet know what signal is released by an astrocyte that acts on the neuron. Another thing is our study was focused on one brain region, which was the hippocampus, but we know that engrams exist throughout the brain in lots of different regions. The next step is to see if astrocytes play the same role in other brain regions that are also critical for memory,” Williamson says.

Nature, 2024.  DOI: 10.1038/s41586-024-08170-w

Jacek Krywko is a freelance science and technology writer who covers space exploration, artificial intelligence research, computer science, and all sorts of engineering wizardry.

Bats use echolocation to make mental maps for navigation

Bat maps

To evaluate the route each bat took to get back to the roost, the team used their simulations to measure the echoic entropy it experienced along the way. The field where the bats were released was a low echoic entropy area, so during those first few minutes when they were flying around they were likely just looking for some more distinct, higher entropy landmarks to figure out where they were. Once they were oriented, they started flying to the roost, but not in a straight line. They meandered a bit, and the groups with higher sensory deprivation tended to meander more.

The meandering, researchers suspect, was due to the trouble the bats had maintaining a steady path while relying on echolocation alone. When they detected distinctive landmarks, like a specific orchard, they corrected their course. Repeating the process eventually brought them to their roost.

But could this be landmark-based navigation? Or perhaps simple beaconing, where an animal locks onto something like a distant light and moves toward it?

The researchers argue in favor of cognitive acoustic maps. “I think if echolocation wasn’t such a limited sensory modality, we couldn’t reach a conclusion about the bats using cognitive acoustic maps,” Goldshtein says. The distance between landmarks the bats used to correct their flight path was significantly longer than echolocation’s sensing range. Yet they knew which direction the roost was relative to one landmark, even when the next landmark on the way was acoustically invisible. You can’t do that without having the area mapped.

“It would be really interesting to understand how other bats do that, to compare between species,” Goldshtein says. There are bats that fly over a thousand meters above the ground, so they simply can’t sense any landmarks using echolocation. Other species hunt over sea, which, as per this team’s simulations, would be just one huge low-entropy area. “We are just starting. That’s why I do not study only navigation but also housing, foraging, and other aspects of their behavior. I think we still don’t know enough about bats in general,” Goldshtein claims.

Science, 2024.  DOI: 10.1126/science.adn6269

Bizarre fish has sensory “legs” it uses for walking and tasting

Finding out what controls the formation of sensory legs meant growing sea robins from eggs. The research team observed that the legs of sea robins develop from the three pectoral fin rays that are around the stomach area of the fish, then separate from the fin as they continue to develop. Among the most active genes in the developing legs is the transcription factor (a protein that binds to DNA and turns genes on and off) known as tbx3a. When genetically engineered sea robins had tbx3a edited out with CRISPR-Cas9, it resulted in fewer legs, deformed legs, or both.

“Disruption of tbx3a results in upregulation of pectoral fin markers prior to leg separation, indicating that leg rays become more similar to fins in the absence of tbx3a,” the researchers said in a second study, also published in Current Biology.

To see whether genes for sensory legs are a dominant feature, the research team also tried creating sea robin hybrids, crossing species with and without sensory legs. This resulted in offspring with legs that had sensory capabilities, indicating that it’s a genetically dominant trait.

Exactly why sea robins evolved the way they did is still unknown, but the research team came up with a hypothesis. They think the legs of sea robin ancestors were originally used for locomotion, but they gradually started gaining some sensory utility, allowing the animal to search the visible surface of the seafloor for food. Those fish that needed to search deeper for food developed sensory legs that allowed them to taste and dig for hidden prey.

“Future work will leverage the remarkable biodiversity of sea robins to understand the genetic basis of novel trait formation and diversification in vertebrates,” the team also said in the first study. “Our work represents a basis for understanding how novel traits evolve.”

Current Biology, 2024. DOI:  10.1016/j.cub.2024.08.014, 10.1016/j.cub.2024.08.042

Karaoke reveals why we blush

Singing for science —

Volunteers watched their own performances as an MRI tracked brain activity.

Singing off-key in front of others is one way to get embarrassed. Regardless of how you get there, why does embarrassment almost inevitably come with burning cheeks that turn an obvious shade of red (which is possibly even more embarrassing)?

Blushing starts not in the face but in the brain, though exactly where has been debated. Previous thinking often reasoned that the blush reaction was associated with higher socio-cognitive processes, such as thinking of how one is perceived by others.

After studying subjects who watched videos of themselves singing karaoke, however, researchers led by Milica Nicolic of the University of Amsterdam have found that blushing is really the result of specific emotions being aroused.

Nicolic’s findings suggest that blushing “is a consequence of a high level of ambivalent emotional arousal that occurs when a person feels threatened and wants to flee but, at the same time, feels the urge not to give up,” as she and her colleagues put it in a study recently published in Proceedings of the Royal Society B.

Taking the stage

The researchers sought out test subjects who were most likely to blush when watching themselves sing bad karaoke: adolescent girls. Adolescents tend to be much more self-aware and more sensitive to being judged by others than adults are.

The subjects couldn’t pick just any song. Nicolic and her team made sure to give them a choice of four songs that music experts had deemed difficult to sing: “Hello” by Adele, “Let It Go” from Frozen, “All I Want for Christmas Is You” by Mariah Carey, and “All the Things She Said” by t.A.T.u. Videos of the subjects were recorded as they sang.

On their second visit to the lab, subjects were put in an MRI scanner and were shown videos of themselves and others singing karaoke. They watched 15 video clips of themselves singing and, as a control, 15 segments of someone who was thought to have similar singing ability, so secondhand embarrassment could be ruled out.

The other control factor was videos of professional singers disguised as participants. Because the professionals sang better overall, it was unlikely they would trigger secondhand embarrassment.

Enough to make you blush

The researchers checked for an increase in cheek temperature, as blood flow measurements had been used in past studies but are more prone to error. This was measured with a fast-response temperature transducer as the subjects watched karaoke videos.

It was only when the subjects watched themselves sing that cheek temperature went up. There was virtually no increase or decrease when watching others—meaning no secondhand embarrassment—and a slight decrease when watching a professional singer.

The MRI scans revealed which regions of the brain were activated as subjects watched videos of themselves. These include the anterior insular cortex, or anterior insula, which responds to a range of emotions, including fear, anxiety, and, of course, embarrassment. There was also the mid-cingulate cortex, which emotionally and cognitively manages pain—including embarrassment—by trying to anticipate that pain and reacting with aversion and avoidance. The dorsolateral prefrontal cortex, which helps process fear and anxiety, also lit up.

There was also more activity detected in the cerebellum, which is responsible for much of the emotional processing in the brain, when subjects watched themselves sing. Those who blushed more while watching their own video clips showed the most cerebellum activity. This could mean they were feeling stronger emotions.

What surprised the researchers was that there was no additional activation in areas known for being involved in the process of understanding one’s mental state, meaning someone’s opinion of what others might think of them may not be necessary for blushing to happen.

So blushing is really more about the surge of emotions someone feels when being faced with things that pertain to the self and not so much about worrying what other people think. That can definitely happen if you’re watching a video of your own voice cracking at the high notes in an Adele song.

Proceedings of the Royal Society B, 2024.  DOI: 10.1098/rspb.2024.0958

Researchers track individual neurons as they respond to words

Pondering phrasing —

When processing language, individual neurons respond to words with similar meanings.

Human neuron, digital light microscope. Credit: BSIP/Universal Images Group via Getty Images

“Language is a huge field, and we are novices in this. We know a lot about how different areas of the brain are involved in linguistic tasks, but the details are not very clear,” says Mohsen Jamali, a computational neuroscience researcher at Harvard Medical School who led a recent study into the mechanism of human language comprehension.

“What was unique in our work was that we were looking at single neurons. There is a lot of studies like that on animals—studies in electrophysiology, but they are very limited in humans. We had a unique opportunity to access neurons in humans,” Jamali adds.

Probing the brain

Jamali’s experiment involved playing recorded sets of words to patients who, for clinical reasons, had implants that monitored the activity of neurons located in their left prefrontal cortex—the area that’s largely responsible for processing language. “We had data from two types of electrodes: the old-fashioned tungsten microarrays that can pick the activity of a few neurons; and the Neuropixel probes which are the latest development in electrophysiology,” Jamali says. The Neuropixels were first inserted in human patients in 2022 and could record the activity of over a hundred neurons.

“So we were in the operation room and asked the patient to participate. We had a mixture of sentences and words, including gibberish sounds that weren’t actual words but sounded like words. We also had a short story about Elvis,” Jamali explains. He said the goal was to figure out if there was some structure to the neuronal response to language. Gibberish words were used as a control to see if the neurons responded to them in a different way.

“The electrodes we used in the study registered voltage—it was a continuous signal at 30 kHz sampling rate—and the critical part was to dissociate how many neurons we had in each recording channel. We used statistical analysis to separate individual neurons in the signal,” Jamali says. Then, his team synchronized the neuronal activity signals with the recordings played to the patients down to a millisecond and started analyzing the data they gathered.
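
The article doesn’t spell out the statistical analysis Jamali’s team used to pull single neurons out of each channel, but the standard recipe for this step, known as spike sorting, runs roughly as follows: band-pass filter the 30 kHz voltage trace, detect threshold crossings, cut a short waveform around each detected spike, and cluster the waveforms so each cluster corresponds to a putative neuron. The sketch below is a minimal, generic illustration of that recipe on synthetic data; the parameters and function names are ours, not the study’s.

```python
# Minimal spike-sorting sketch (illustrative only; not the study's actual pipeline).
# Assumes a single-channel extracellular trace sampled at 30 kHz, as in the article.
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

FS = 30_000  # sampling rate in Hz

def bandpass(trace, low=300, high=6000, fs=FS):
    """Isolate the spike band from the raw voltage trace."""
    b, a = butter(3, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, trace)

def detect_spikes(trace, thresh_sd=4.0):
    """Find negative threshold crossings, using a robust estimate of the noise level."""
    noise = np.median(np.abs(trace)) / 0.6745
    crossings = np.where(trace < -thresh_sd * noise)[0]
    if len(crossings) == 0:
        return crossings
    # Keep only the first sample of each event (a simple 1 ms refractory rule).
    return crossings[np.insert(np.diff(crossings) > int(0.001 * FS), 0, True)]

def extract_waveforms(trace, spike_times, pre=10, post=22):
    """Cut a short window of samples around each detected spike."""
    keep = (spike_times > pre) & (spike_times < len(trace) - post)
    return np.stack([trace[t - pre:t + post] for t in spike_times[keep]])

def sort_units(waveforms, n_units=3):
    """Project waveforms onto a few principal components and cluster them,
    so spikes from putative single neurons end up in the same group."""
    features = PCA(n_components=3).fit_transform(waveforms)
    return KMeans(n_clusters=n_units, n_init=10).fit_predict(features)

# Toy run on synthetic noise (stands in for a real recording):
rng = np.random.default_rng(0)
raw = rng.normal(0, 10, FS * 2)              # two seconds of "voltage," in microvolts
filtered = bandpass(raw)
spike_times = detect_spikes(filtered, thresh_sd=3.5)
if len(spike_times) >= 10:                   # only cluster if enough events were found
    labels = sort_units(extract_waveforms(filtered, spike_times))
```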

Putting words in drawers

“First, we translated words in our sets to vectors,” Jamali says. Specifically, his team used Word2Vec, a technique used in computer science to find relationships between words contained in a large corpus of text. What Word2Vec can do is tell if certain words have something in common—if they are synonyms, for example. “Each word was represented by a vector in a 300-dimensional space. Then we just looked at the distance between those vectors and if the distance was close, we concluded the words belonged in the same category,” Jamali explains.

Then the team used these vectors to identify words that clustered together, which suggested they had something in common (something they later confirmed by examining which words were in a cluster together). They then determined whether specific neurons responded differently to different clusters of words. It turned out they did.
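
The article doesn’t name the exact embedding model or clustering algorithm beyond Word2Vec and 300-dimensional vectors, but the pipeline it describes (embed each word, measure distances, group nearby words) can be sketched as below. The pretrained model name, toy word list, and cluster count are illustrative assumptions, and k-means stands in for whatever clustering the team actually used.

```python
# Minimal sketch of the embed-then-cluster step (illustrative; not the study's exact method).
import gensim.downloader as api
import numpy as np
from sklearn.cluster import KMeans

# 300-dimensional pretrained word2vec vectors (an assumed stand-in for the team's model).
wv = api.load("word2vec-google-news-300")

words = ["dog", "cat", "horse", "happy", "sad", "angry",
         "run", "jump", "swim", "rain", "snow", "wind"]
vectors = np.stack([wv[w] for w in words])

# Distance (or similarity) between vectors indicates how related two words are.
print(wv.similarity("dog", "cat"))    # high: same semantic domain
print(wv.similarity("dog", "rain"))   # low: different domains

# Group words whose vectors sit close together. The study ended up with nine
# clusters; four is enough for this toy word list.
labels = KMeans(n_clusters=4, n_init=10).fit_predict(vectors)
for label, word in sorted(zip(labels, words)):
    print(label, word)
```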

“We ended up with nine clusters. We looked at which words were in those clusters and labeled them,” Jamali says. It turned out that each cluster corresponded to a neat semantic domain. Specialized neurons responded to words referring to animals, while other groups responded to words referring to feelings, activities, names, weather, and so on. “Most of the neurons we registered had one preferred domain. Some had more, like two or three,” Jamali explained.

The mechanics of comprehension

The team also tested if the neurons were triggered by the mere sound of a word or by its meaning. “Apart from the gibberish words, another control we used in the study was homophones,” Jamali says. The idea was to test if the neurons responded differently to the word “sun” and the word “son,” for example.

It turned out that the response changed based on context. When the sentence made it clear the word referred to a star, the sound activated the neurons tuned to weather phenomena. When it was clear that the same sound referred to a person, it activated the neurons tuned to words for relatives. “We also presented the same words at random without any context and found that it didn’t elicit as strong a response as when the context was available,” Jamali claims.

But the language processing in our brains will need to involve more than just different semantic categories being processed by different groups of neurons.

“There are many unanswered questions in linguistic processing. One of them is how much a structure matters, the syntax. Is it represented by a distributed network, or can we find a subset of neurons that encode structure rather than meaning?” Jamali asked. Another thing his team wants to study is what the neural processing looks like during speech production, in addition to comprehension. “How are those two processes related in terms of brain areas and the way the information is processed,” Jamali adds.

The last thing—and according to Jamali the most challenging thing—is using the Neuropixel probes to see how information is processed across different layers of the brain. “The Neuropixel probe travels through the depths of the cortex, and we can look at the neurons along the electrode and say like, ‘OK, the information from this layer, which is responsible for semantics, goes to this layer, which is responsible for something else.’ We want to learn how much information is processed by each layer. This should be challenging, but it would be interesting to see how different areas of the brain are involved at the same time when presented with linguistic stimuli,” Jamali concludes.

Nature, 2024.  DOI: 10.1038/s41586-024-07643-2

How do brainless creatures control their appetites?

Feed me! —

Separate systems register when the animals have eaten and control feeding behaviors.

Image of a greenish creature with a long stalk and tentacles, against a black background.

The hydra is a Lovecraftian-looking freshwater creature with a mouth surrounded by tentacles on one end, an elongated body, and a foot on the other end. It has no brain or centralized nervous system. Despite the lack of either of those things, it can still feel hunger and fullness. How can these creatures know when they are hungry and realize when they have had enough?

While they lack brains, hydra do have a nervous system. Researchers from Kiel University in Germany found they have an endodermal (in the digestive tract) and ectodermal (in the outermost layer of the animal) neuronal population, both of which help them react to food stimuli. Ectodermal neurons control physiological functions such as moving toward food, while endodermal neurons are associated with feeding behavior such as opening the mouth—which also vomits out anything indigestible.

Even such a limited nervous system is capable of some surprisingly complex functions. Hydras might even give us some insights into how appetite evolved and what the early evolutionary stages of a central nervous system were like.

No, thanks, I’m full

Before finding out how the hydra’s nervous system controls hunger, the researchers focused on what causes the strongest feeling of satiety, or fullness, in the animals. The hydra were fed the brine shrimp Artemia salina, which is among their usual prey, and exposed to the antioxidant glutathione. Previous studies have suggested that glutathione triggers feeding behavior in hydras, causing them to curl their tentacles toward their mouths as if they are swallowing prey.

Hydra fed with as much Artemia as they could eat were given glutathione afterward, while the other group was given only glutathione and no actual food. Hunger was gauged by how fast and how often they opened their mouths.

It turned out that the first group, which had already glutted themselves on shrimp, showed hardly any response to glutathione eight hours after being fed. Their mouths barely opened—and slowly if so—because they were not hungry enough for even a feeding trigger like glutathione to make them feel they needed seconds.

It was only at 14 hours post-feeding that the hydra that had eaten shrimp opened their mouths wide enough and fast enough to indicate hunger. However, those that were not fed and only exposed to glutathione started showing signs of hunger only four hours after exposure. Mouth opening was not the only behavior provoked by hunger since starved animals also somersaulted through the water and moved toward light, behaviors associated with searching for food. Sated animals would stop somersaulting and cling to the wall of the tank they were in until they were hungry again.

Food on the “brain”

After observing the behavioral changes in the hydra, the research team looked into the neuronal activity behind those behaviors. They focused on two neuronal populations, the ectodermal population known as N3 and the endodermal population known as N4, both known to be involved in hunger and satiety. While these had been known to influence hydra feeding responses, how exactly they were involved was unknown until now.

Hydra have N3 neurons all over their bodies, especially in the foot. Signals from these neurons tell the animal that it has eaten enough and is experiencing satiety. The frequency of these signals decreased as the animals grew hungrier and displayed more behaviors associated with hunger. The frequency of N3 signals did not change in animals that were only exposed to glutathione and not fed, and these hydra behaved just like animals that had gone without food for an extended period of time. It was only when they were given actual food that the N3 signal frequency increased.

“The ectodermal neuronal population N3 is not only responding to satiety by increasing neuronal activity, but is also controlling behaviors that changed due to feeding,” the researchers said in their study, which was recently published in Cell Reports.

Though N4 neurons were only seen to communicate indirectly with the N3 population in the presence of food, they were found to influence eating behavior by regulating how wide the hydras opened their mouths and how long they kept them open. A lower frequency of N4 signals was seen in hydra that were starved or only exposed to glutathione, while a higher frequency of N4 signals was associated with the animals keeping their mouths shut.

So, what can the neuronal activity of a tiny, brainless creature possibly tell us about the evolution of our own complex brains?

The researchers think the hydra’s simple nervous system may parallel the much more complex central and enteric (in the gut) nervous systems that we have. While N3 and N4 operate independently, there is still some interaction between them. The team also suggests that the way N4 regulates the hydra’s eating behavior is similar to the way the digestive tracts of mammals are regulated.

“A similar architecture of neuronal circuits controlling appetite/satiety can be also found in mice where enteric neurons, together with the central nervous system, control mouth opening,” they said in the same study.

Maybe, in a way, we really do think with our gut.

Cell Reports, 2024. DOI: 10.1016/j.celrep.2024.114210

Mutations in a non-coding gene associated with intellectual disability

Splice of life —

A gene that only makes an RNA is linked to neurodevelopmental problems.

The spliceosome is a large complex of proteins and RNAs.

Almost 1,500 genes have been implicated in intellectual disabilities; yet for most people with such disabilities, genetic causes remain unknown. Perhaps this is in part because geneticists have been focusing on the wrong stretches of DNA when they go searching. To rectify this, Ernest Turro—a biostatistician who focuses on genetics, genomics, and molecular diagnostics—used whole genome sequencing data from the 100,000 Genomes Project to search for areas associated with intellectual disabilities.

His lab found the most common genetic association with neurodevelopmental abnormality identified to date. And the gene they identified doesn’t even make a protein.

Trouble with the spliceosome

Most genes include instructions for how to make proteins. That’s true. And yet human genes are not arranged linearly—or rather, they are arranged linearly, but not contiguously. A gene containing the instructions for which amino acids to string together to make a particular protein—hemoglobin, insulin, albumin, collagen, whatever protein you like—is modular. It contains part of the amino acid sequence, then it has a chunk of DNA that is largely irrelevant to that sequence, then a bit more of the protein’s sequence, then another chunk of random DNA, back and forth until the end of the protein. It’s as if each of these prose paragraphs were separated by a string of unrelated letters (but not a meaningful paragraph from a different article).

In order to read this piece through coherently, you’d have to take out the letters interspersed between its paragraphs. And that’s exactly what happens with genes. In order to read the gene through coherently, the cell has machinery that splices out the intervening sequences and links up the protein-making instructions into a continuous whole. (This doesn’t happen in the DNA itself; it happens to an RNA copy of the gene.) The cell’s machinery is obviously called the spliceosome.

There are about a hundred proteins that comprise the spliceosome. But the gene just found to be so strongly associated with neurodevelopmental disorders doesn’t encode any of them. Rather, it encodes one of five RNA molecules that are also part of the spliceosome complex and interact with the RNAs that are being spliced. Mutations in this gene were found to be associated with a syndrome with symptoms that include intellectual disability, seizures, short stature, neurodevelopmental delay, drooling, motor delay, hypotonia (low muscle tone), and microcephaly (having a small head).

Supporting data

The researchers buttressed their finding by examining three other databases; in all of them, they found more people with the syndrome who had mutations in this same gene. The mutations occur in a remarkably conserved region of the genome, suggesting that it is very important. Most of the mutations were new in the affected people—i.e., not inherited from their parents—but there was one case in which a particular mutation in the gene was inherited. Based on this, the researchers concluded that this particular variant may cause a less severe disorder than the other mutations.

Many studies that look for genes associated with diseases have focused on searching catalogs of protein-coding genes. These results suggest that we could have been missing important mutations because of this focus.

Nature Medicine, 2024. DOI: 10.1038/s41591-024-03085-5

Chemical tweaks to a toad hallucinogen turn it into a potential drug

No licking toads! —

Targets a different serotonin receptor from other popular hallucinogens.

The Colorado River toad, also known as the Sonoran Desert toad.

It is becoming increasingly accepted that classic psychedelics like LSD, psilocybin, ayahuasca, and mescaline can act as antidepressants and anti-anxiety treatments in addition to causing hallucinations. They act by binding to a serotonin receptor. But there are 14 known types of serotonin receptors, and most of the research into these compounds has focused on only one of them—the one these molecules like, called 5-HT2A. (5-HT, short for 5-hydroxytryptamine, is the chemical name for serotonin.)

The Colorado River toad (Incilius alvarius), also known as the Sonoran Desert toad, secretes a psychedelic compound that likes to bind to a different serotonin receptor subtype called 5-HT1A. And that difference may be the key to developing an entirely distinct class of antidepressants.

Uncovering novel biology

Like other psychedelics, the one the toad produces decreases depression and anxiety and induces meaningful and spiritually significant experiences. It has been used clinically to treat vets with post-traumatic stress disorder and is being developed as a treatment for other neurological disorders and drug abuse. 5-HT1A is a validated therapeutic target, as approved drugs, including the antidepressant Viibryd and the anti-anxiety med Buspar, bind to it. But little is known about how psychedelics engage with this receptor and which effects it mediates, so Daniel Wacker’s lab decided to look into it.

The researchers started by making chemical modifications to the toad psychedelic and noting how each of the tweaked molecules bound to both 5-HT2A and 5-HT1A. As a group, these psychedelics are known as “designer tryptamines”—that’s tryp with a “y”, mind you—because they are metabolites of the amino acid tryptophan.

The lab made 10 variants and found one that is more than 800-fold selective about sticking to 5-HT1A as compared to 5-HT2A. That makes it a great research tool for elucidating the structure-activity relationship of the 5-HT1A receptor, as well as the molecular mechanisms behind the pharmacology of the drugs on the market that bind to it. The lab used it to explore both of those avenues. However, the variant’s ultimate utility might be as a new therapeutic for psychiatric disorders, so they tested it in mice.

Improving the lives of mice

The compound did not induce hallucinations in mice, as measured by the “head-twitch response.” But it did alleviate depression, as measured by a “chronic social defeat stress model.” In this model, for 10 days in a row, the experimental mouse was introduced to an “aggressor mouse” for “10-minute defeat bouts”; essentially, it got beat up by a bully at recess for two weeks. Understandably, after this experience, the experimental mouse tended not to be that friendly with new mice, as controls usually are. But when injected with the modified toad psychedelic, the bullied mice were more likely to interact positively with new mice they met.

Depressed mice, like depressed people, also suffer from anhedonia: a reduced ability to experience pleasure. In mice, this manifests in not taking advantage of drinking sugar water when given the opportunity. But treated bullied mice regained their preference for the sweet drink. About a third of mice seem to be “stress-resilient” in this model; the bullying doesn’t seem to faze them. The drug increased the number of resilient mice.

The 5-HT2A receptor has hogged all of the research love because it mediates the hallucinogenic effects of many popular psychedelics, so people assumed that it must mediate their therapeutic effects, too. However, Wacker argues that there is little evidence supporting this assumption. Wacker’s new toad-based psychedelic variant and its preference for the 5-HT1A receptor will help elucidate the complementary roles these two receptor subtypes play in mediating the cellular and psychological effects of psychedelic molecules. And it might provide the basis for a new tryptamine-based mental health treatment as well—one without hallucinatory side effects, disappointing as that may be to some.

Nature, 2024.  DOI: 10.1038/s41586-024-07403-2

The science of smell is fragrant with submolecules

Did you smell something? —

A chemical that we smell may be a composite of multiple smell-making pieces.

cartoon of roses being smelled, with the nasal passages, neurons, and brain visible through cutaways.

When we catch a whiff of perfume or indulge in a scented candle, we are smelling much more than Floral Fantasy or Lavender Vanilla. We are actually detecting odor molecules that enter our nose and interact with cells that send signals to be processed by our brain. While certain smells feel like they’re unchanging, the complexity of this system means that large odorant molecules are perceived as the sum of their parts—and we are capable of perceiving the exact same molecule as a different smell.

Smell is more complex than we might think. It doesn’t consist of simply detecting specific molecules. Researcher Wen Zhou and his team from the Institute of Psychology of the Chinese Academy of Sciences have now found that parts of our brains analyze smaller parts of the odor molecules that make things smell.

Smells like…

So how do we smell? Odor molecules that enter our noses stimulate olfactory sensory neurons. They do this by binding to odorant receptors on these neurons (each of which makes only one of approximately 500 different odor receptors). Smelling something activates different neurons depending on what the molecules in that smell are and which receptors they interact with. Neurons in the piriform cortex of the brain then take the information from the sensory neurons and interpret it as a message that makes us smell vanilla. Or a bouquet of flowers. Or whatever else.

Odor molecules were previously thought to be coded only as whole molecules, but Zhou and his colleagues wanted to see whether the brain’s analysis of odor molecules could perceive something less than a complete molecule. They reasoned that, if only whole molecules work, then after being exposed to a part of an odorant molecule, the test subjects would smell the original molecule exactly the same way. If, by contrast, the brain was able to pick up on the smell of a molecule’s substructures, neurons would adapt to the substructure. When re-exposed to the original molecule, subjects would not sense it nearly as strongly.
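
A toy two-channel model makes the logic of that prediction concrete. The sketch below is purely illustrative (it is not the study’s analysis, and the numbers are invented), but it shows why adapting to part P of compound CP should shift the percept of CP toward C only if sub-molecular features are represented separately.

```python
# Toy model of the adaptation logic (illustrative only). Compound CP drives two
# feature channels, one for substructure C and one for substructure P. Adapting
# to P alone fatigues the P channel, so the pattern evoked by CP afterward looks
# more like the pattern evoked by pure C.
import numpy as np

def percept(gains):
    """Response pattern across the (C, P) channels when CP is smelled."""
    return np.array([1.0, 1.0]) * gains   # CP drives both channels equally

def similarity(a, b):
    """Cosine similarity between two response patterns."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

fresh = percept(np.array([1.0, 1.0]))                 # before any adaptation
after_adapting_to_P = percept(np.array([1.0, 0.3]))   # P channel fatigued by exposure
pure_C = np.array([1.0, 0.0])                         # what pure C alone evokes

print(similarity(fresh, pure_C))                # ~0.71: CP smells like a blend
print(similarity(after_adapting_to_P, pure_C))  # ~0.96: CP now smells more like C
```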

“If [sub-molecular factors are part of our perception of an odor]—the percept[ion] and its neural representation would be shifted towards those of the unadapted part of that compound,” the researchers said in a study recently published in Nature Human Behavior.

Doesn’t smell like…

To see whether their hypothesis held up, Zhou’s team presented test subjects with a compound abbreviated CP, its separate components C and P, and an unrelated compound, U. P and U were supposed to have equal aromatic intensity despite being different scents.

In one session, subjects smelled CP and then sniffed P until they had adapted to it. When they smelled CP again, they reported it smelling more like C than P. Despite being exposed to the entire molecule, they were mostly smelling C, which was unadapted. In another session, subjects adapted to U, after which there was no change in how they perceived CP. So, the effect is specific to smelling a portion of the odorant molecule.

In yet another experiment, subjects were told to first smell CP and then adapt to the smell of P with just one nostril while they kept the other nostril closed. Once adapted, CP and C smelled similar, but only when snorted through the nostril that had been open. The two smelled much more different through the nostril that had been closed.

Previous research has shown that adaptation to odors takes place in the piriform cortex. Substructure adaptation causes this part of the brain to respond differently to the portions of a chemical that the nose has recently been exposed to.

This olfactory experiment showed that our brains perceive smells by doing more than just recognizing the presence of a whole odor molecule. Some molecules can be perceived as a collection of submolecular units that are perceived separately.

“The smells we perceived are the products of continuous analysis and synthesis in the olfactory system,” the team said in the same study, “breath by breath, of the structural features and relationships of volatile compounds in our ever-changing chemical environment.”

Nature Human Behaviour, 2024.  DOI: 10.1038/s41562-024-01849-0

DNA parasite now plays key role in making critical nerve cell protein

Domesticated viruses —

An RNA has been adopted to help the production of a key protein in myelin, the fatty insulation around nerve fibers.

Graphic depiction of a nerve cell with a myelin coated axon.

Human brains (and the brains of other vertebrates) are able to process information faster because of myelin, a fatty substance that forms a protective sheath over the axons of our nerve cells and speeds up their impulses. How did our neurons evolve myelin sheaths? Part of the answer—which was unknown until now—almost sounds like science fiction.

Led by scientists from Altos Labs-Cambridge Institute of Science, a team of researchers has uncovered a bit of the gnarly past of how myelin ended up covering vertebrate neurons: a molecular parasite has been messing with our genes. Sequences derived from an ancient virus help regulate a gene that encodes a component of myelin, helping explain why vertebrates have an edge when it comes to their brains.

Prehistoric infection

Myelin is a fatty material produced by oligodendrocyte cells in the central nervous system and Schwann cells in the peripheral nervous system. Its insulating properties allow neurons to zap impulses to one another at faster speeds and greater lengths. Our brains can be complex in part because myelin enables longer, narrower axons, which means more nerves can be stacked together.

The un-myelinated brain cells of many invertebrates often need to rely on wider—and therefore fewer—axons for impulse conduction. Rapid impulse conduction makes quicker reactions possible, whether that means fleeing danger or capturing prey.

So, how do we make myelin? A key player in its production appears to be a type of molecular parasite called a retrotransposon.

Like other transposons, retrotransposons can move to new locations in the genome through an RNA intermediate. However, most retrotransposons in our genome have picked up too many mutations to move about anymore.

RNLTR12-int is a retrotransposon that is thought to have originally entered our ancestors’ genome as a virus. Rat genomes now have over 100 copies of the retrotransposon.

An RNA made by RNLTR12-int helps produce myelin by binding to a transcription factor, a protein that regulates the activity of other genes. The RNA/protein combination binds to DNA near the gene for myelin basic protein, or MBP, a major component of myelin.

“MBP is essential for the membrane growth and compression of [central nervous system] myelin,” the researchers said in a study recently published in Cell.

Technical knockout

To find out whether RNLTR12-int really was behind the regulation of MBP and, therefore, myelin production, the research team had to knock its level down and see if myelination still happened. They first experimented on rat brains before moving on to zebrafish and frogs.

When they inhibited RNLTR12-int, the results were drastic. In the central nervous system, genetically edited rats produced 98 percent less MBP than those where the gene was left unedited. The absence of RNLTR12-int also caused the oligodendrocytes that produce myelin to develop much simpler structures than they would normally form. When RNLTR12-int was knocked out in the peripheral nervous system, it reduced myelin produced by Schwann cells.

The researchers used a SOX10 antibody to show that SOX10 bound to the RNLTR12-int transcript in vivo. This was an important result, since there are lots of non-coding RNAs made by cells, and it wasn’t clear whether any RNA would work or if it was specific to RNLTR12-int.

Do these results hold up in other jawed vertebrates? Using CRISPR-Cas9 to perform knockout tests with retrotransposons related to RNLTR12-int in frogs and zebrafish showed similar results.

Myelination has enriched the vertebrate brain so it can work like never before. This is why the term “brain food” is literal. Healthy fats are so important for our brains; they help form myelin, which is largely made of fats. Think about that next time you’re pulling an all-nighter while reaching for a handful of nuts.

Cell, 2024. DOI: 10.1016/j.cell.2024.01.011

Corvids seem to handle temporary memories the way we do

Working on memory —

Birds show evidence that they lump temporary memories into categories.

A jackdaw tries to remember what color it was thinking of.

Humans tend to think that we are the most intelligent life-forms on Earth, followed closely by our near relatives such as chimps and gorillas. But there are some areas of cognition in which Homo sapiens and other primates are not unmatched. What other animal’s brain could possibly operate at a human’s level, at least when it comes to one function? Birds—again.

This is far from the first time that bird species such as corvids and parrots have shown that they can think like us in certain ways. Jackdaws are clever corvids that belong to the same family as crows and ravens. After putting a pair of them to the test, an international team of researchers saw that the birds’ working memory operates the same way as that of humans and higher primates. All of these species use what’s termed “attractor dynamics,” where they organize information into specific categories.

Unfortunately for them, that means they also make the same mistakes we do. “Jackdaws (Corvus monedula) have similar behavioral biases as humans; memories are less precise and more biased as memory demands increase,” the researchers said in a study recently published in Communications Biology.

Remembering not to forget

Working memory is where we hang on to items for a brief period of time—like a postal code looked up in one browser tab and typed into a second. It can hold everything from numbers and words to images and concepts. But these memories deteriorate quickly, and the capacity is limited—the more things we try to remember, the less likely the brain is going to remember them all correctly.

Attractor dynamics give the brain an assist with working memory by taking sensory input, such as color, and categorizing it. The highly specific red shade “Fire Lily” might fade from working memory quickly, and fewer specifics will stick around as time passes, yet it will still be remembered as “red.” You lose specifics first, but hang on to the general idea longer.

Aside from time, the other thing that kills working memory is distractions. Less noise—meaning distracting factors inside and outside the brain—will make it easier to distinguish Fire Lily among the other reds. If a hypothetical customer was browsing paint swatches for Sandstone (a taupe) and London Fog (a gray) in addition to Fire Lily, remembering each color accurately would become even more difficult because of the increased demands on working memory.

Bias can also blur working memory and cause the brain to remember some red hues more accurately than others, especially if the brain compartmentalizes them all under “red.” This can happen when a particular customer has a certain idea of the color red that leans warmer or cooler than Fire Lily. If they view red as leaning slightly warmer than Fire Lily, they might believe a different, warmer red is Fire Lily.
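
One way to picture attractor dynamics is as a remembered value that keeps drifting toward the nearest category prototype while random noise accumulates, so recall becomes both noisier and more category-biased the longer the memory is held. The sketch below is a toy simulation under exactly those assumptions; the hue wheel, prototypes, and parameters are invented for illustration, and this is not the model fit to the jackdaw data.

```python
# Toy attractor-dynamics model of color working memory (illustrative only).
import numpy as np

PROTOTYPES = np.array([0.0, 120.0, 240.0])  # hypothetical category centers on a hue wheel (degrees)

def recall(true_hue, delay_steps, pull=0.05, noise_sd=2.0, rng=None):
    """Simulate the remembered hue after `delay_steps` of maintenance."""
    if rng is None:
        rng = np.random.default_rng()
    hue = float(true_hue)
    # The attractor is the category center closest to the studied hue.
    center = PROTOTYPES[np.argmin(np.abs(PROTOTYPES - true_hue))]
    for _ in range(delay_steps):
        hue += pull * (center - hue)       # drift toward the category prototype
        hue += rng.normal(0.0, noise_sd)   # noise accumulating in the memory trace
    return hue

rng = np.random.default_rng(1)
studied = 20.0  # a specific warm hue, a stand-in for "Fire Lily"
short_delay = [recall(studied, 5, rng=rng) for _ in range(1000)]
long_delay = [recall(studied, 50, rng=rng) for _ in range(1000)]
print(np.mean(short_delay), np.std(short_delay))  # closer to the studied hue, smaller spread
print(np.mean(long_delay), np.std(long_delay))    # pulled toward 0 ("red"), larger spread
```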

In living color

To find out if corvids process stimuli using short-term memory with attractor dynamics, the researchers subjected two jackdaws to a variety of tests that involved remembering colors. Each bird had to peck on a white button to begin the test. They were then shown a color—the target color—before being shown a chart of 64 colors. The jackdaws had to look at that chart and peck the color they had previously been shown. A correct answer would get them their favorite treat, while responses that were close but not completely accurate would get them other treats.

While the birds performed well with just one color, their accuracy went down as the researchers challenged them to remember more target colors from the chart at once. They were more likely to pick colors that were close to, but not exactly, the target colors they had been shown—likely because there was a greater load on their short-term memory.

This is what we’d see if a customer had to remember not only Fire Lily, but Sandstone and London Fog. The only difference is that we humans would be able to read the color names, and the jackdaws only found out they were wrong when they didn’t get their favorite treat.

“Despite vastly different visual systems and brain organizations, corvids and primates show similar attractor dynamics, which can mitigate noise in visual working memory representations,” the researchers said in the same study.

How and why birds evolved attractor dynamics still needs to be understood. Because avian eyesight differs from human eyesight, there could have been differences in color perception that the research team was unable to account for. However, it seems that the same mechanisms for working memory that evolved in humans and other primates also evolved separately in corvids. “Birdbrain” should be taken as a compliment.

Communications Biology, 2023. DOI:  10.1038/s42003-023-05442-5
