Author name: Kris Guyer


These dogs eavesdrop on their owners to learn new words

Next, the entire experiment was repeated with one key variation: during the training protocol, rather than being addressed directly when new toys were named, the dogs merely watched while their owners named the toys in conversation with another person, without ever addressing the dogs directly.

The result: 80 percent of the dogs correctly chose the toys in the direct address condition, and 100 percent did so in the overhearing condition. Taken together, the results demonstrate that GWL dogs can learn new object labels just by overhearing interactions, regardless of whether the dogs are active participants in the interactions or passive listeners—much like what has been observed in young children around a year-and-a-half old.

To learn whether temporal continuity (a nonsocial factor) or the lack thereof affects label learning in GWL dogs, the authors also devised a third experimental variation. The owner would show the dog a new toy, place it in a bucket, let the dog take the toy out of the bucket, and then place the toy back in. Then the owner would lift the bucket to prevent the dog from seeing what was inside and repeatedly use the toy name in a sentence while looking back and forth from the dog to the bucket. This was followed by the usual testing phase. The authors concluded that the dogs didn’t need temporal continuity to form object-label mappings. And when the same dogs were re-tested two weeks later, those mappings had not decayed; the dogs remembered.

But GWL dogs are extremely rare, and the findings don’t extend to typical dogs, as the group discovered when they ran both versions of the experiment using 10 non-GWL border collies. There was no evidence of actual learning in these typical dogs; the authors suggest their behavior reflects a doggy preference for novelty when it comes to toy selection, not the ability to learn object-label mappings.

“Our findings show that the socio-cognitive processes enabling word learning from overheard speech are not uniquely human,” said co-author Shany Dror of ELTE and VetMedUni universities. “Under the right conditions, some dogs present behaviors strikingly similar to those of young children. These dogs provide an exceptional model for exploring some of the cognitive abilities that enabled humans to develop language. But we do not suggest that all dogs learn in this way—far from it.”

Science, 2025. DOI: 10.1126/science.adq5474


Grok assumes users seeking images of underage girls have “good intent”


Conflicting instructions?

Expert explains how simple it could be to tweak Grok to block CSAM outputs.


For weeks, xAI has faced backlash over undressing and sexualizing images of women and children generated by Grok. One researcher conducted a 24-hour analysis of the Grok account on X and estimated that the chatbot generated over 6,000 images an hour flagged as “sexually suggestive or nudifying,” Bloomberg reported.

While the chatbot claimed that xAI supposedly “identified lapses in safeguards” that allowed outputs flagged as child sexual abuse material (CSAM) and was “urgently fixing them,” Grok has proven to be an unreliable spokesperson, and xAI has not announced any fixes.

A quick look at Grok’s safety guidelines on its public GitHub shows they were last updated two months ago. The GitHub also indicates that, despite prohibiting such content, Grok maintains programming that could make it likely to generate CSAM.

Billed as “the highest priority,” superseding “any other instructions” Grok may receive, these rules explicitly prohibit Grok from assisting with queries that “clearly intend to engage” in creating or distributing CSAM or otherwise sexually exploit children.

However, the rules also direct Grok to “assume good intent” and “don’t make worst-case assumptions without evidence” when users request images of young women.

“Using words like ‘teenage’ or ‘girl’ does not necessarily imply underage,” Grok’s instructions say.

X declined Ars’ request to comment. The only statement X Safety has made so far shows that Elon Musk’s social media platform plans to blame users for generating CSAM, threatening to permanently suspend users and report them to law enforcement.

Critics dispute that X’s solution will end the Grok scandal, and child safety advocates and foreign governments are growing increasingly alarmed as X delays updates that could block Grok’s undressing spree.

Why Grok shouldn’t “assume good intentions”

Grok can struggle to assess users’ intentions, making it “incredibly easy” for the chatbot to generate CSAM under xAI’s policy, Alex Georges, an AI safety researcher, told Ars.

The chatbot has been instructed, for example, that “there are no restrictions on fictional adult sexual content with dark or violent themes,” and Grok’s mandate to assume “good intent” may create gray areas in which CSAM could be created.

There’s evidence that in relying on these guidelines, Grok is currently generating a flood of harmful images on X, with even more graphic images being created on the chatbot’s standalone website and app, Wired reported. Researchers who surveyed 20,000 random images and 50,000 prompts told CNN that more than half of Grok’s outputs that feature images of people sexualize women, with 2 percent depicting “people appearing to be 18 years old or younger.” Some users specifically “requested minors be put in erotic positions and that sexual fluids be depicted on their bodies,” researchers found.

Grok isn’t the only chatbot that sexualizes images of real people without consent, but its policy seems to leave safety at a surface level, Georges said, and xAI is seemingly unwilling to expand safety efforts to block more harmful outputs.

Georges is the founder and CEO of AetherLab, an AI company that helps a wide range of firms—including tech giants like OpenAI, Microsoft, and Amazon—deploy generative AI products with appropriate safeguards. He told Ars that AetherLab works with many AI companies that are concerned about blocking harmful companion bot outputs like Grok’s. And although there are no industry norms—creating a “Wild West” due to regulatory gaps, particularly in the US—his experience with chatbot content moderation has convinced him that Grok’s instructions to “assume good intent” are “silly” because xAI’s requirement of “clear intent” doesn’t mean anything operationally to the chatbot.

“I can very easily get harmful outputs by just obfuscating my intent,” Georges said, emphasizing that “users absolutely do not automatically fit into the good-intent bucket.” And even “in a perfect world,” where “every single user does have good intent,” Georges noted, the model “will still generate bad content on its own because of how it’s trained.”

Benign inputs can lead to harmful outputs, Georges explained, and a sound safety system would catch both benign and harmful prompts. Consider, he suggested, a prompt for “a pic of a girl model taking swimming lessons.”

The user could be trying to create an ad for a swimming school, or they could have malicious intent and be attempting to manipulate the model. For users with benign intent, prompting can “go wrong,” Georges said, if Grok’s training data statistically links certain “normal phrases and situations” to “younger-looking subjects and/or more revealing depictions.”

“Grok might have seen a bunch of images where ‘girls taking swimming lessons’ were young and that human ‘models’ were dressed in revealing things, which means it could produce an underage girl in a swimming pool wearing something revealing,” Georges said. “So, a prompt that looks ‘normal’ can still produce an image that crosses the line.”

While AetherLab has never worked directly with xAI or X, Georges’ team has “tested their systems independently by probing for harmful outputs, and unsurprisingly, we’ve been able to get really bad content out of them,” Georges said.

Leaving AI chatbots unchecked poses a risk to children. A spokesperson for the National Center for Missing and Exploited Children (NCMEC), which processes reports of CSAM on X in the US, told Ars that “sexual images of children, including those created using artificial intelligence, are child sexual abuse material (CSAM). Whether an image is real or computer-generated, the harm is real, and the material is illegal.”

Researchers at the Internet Watch Foundation told the BBC that users of dark web forums are already promoting CSAM they claim was generated by Grok. These images are typically classified in the United Kingdom as the “lowest severity of criminal material,” researchers said. But at least one user was found to have fed a less-severe Grok output into another tool to generate the “most serious” criminal material, demonstrating how Grok could be used as an instrument by those seeking to commercialize AI CSAM.

Easy tweaks to make Grok safer

In August, xAI explained how the company works to keep Grok safe for users. But although the company acknowledged that it’s difficult to distinguish “malignant intent” from “mere curiosity,” xAI seemed convinced that Grok could “decline queries demonstrating clear intent to engage in activities” like child sexual exploitation, without blocking prompts from merely curious users.

That report showed that xAI refines Grok over time to block requests for CSAM “by adding safeguards to refuse requests that may lead to foreseeable harm”—a step xAI does not appear to have taken since late December, when reports first raised concerns that Grok was sexualizing images of minors.

Georges said there are easy tweaks xAI could make to Grok to block harmful outputs, including CSAM, while acknowledging that he is making assumptions without knowing exactly how xAI works to place checks on Grok.

First, he recommended that Grok rely on end-to-end guardrails, blocking “obvious” malicious prompts and flagging suspicious ones. It should then double-check outputs to block harmful ones, even when prompts are benign.

This strategy works best, Georges said, when multiple watchdog systems are employed, noting that “you can’t rely on the generator to self-police because its learned biases are part of what creates these failure modes.” That’s the role that AetherLab wants to fill across the industry, helping test chatbots for weakness to block harmful outputs by using “an ‘agentic’ approach with a shitload of AI models working together (thereby reducing the collective bias),” Georges said.
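The layered setup Georges describes (screen the prompt, then independently screen the output) can be pictured in a deliberately toy sketch. The term lists, labels, and thresholds below are invented stand-ins for trained classifiers; this is not xAI’s or AetherLab’s actual code:

```python
from dataclasses import dataclass

@dataclass
class Verdict:
    allowed: bool
    reason: str

# Invented stand-ins for trained classifiers; real guardrails don't use keyword lists.
BLOCKED_TERMS = {"undress", "nudify"}         # obvious malicious intent
SUSPICIOUS_TERMS = {"teen", "girl", "minor"}  # ambiguous: flag for strict review

def screen_prompt(prompt: str) -> Verdict:
    """Stage 1: block obvious abuse and flag ambiguous prompts for stricter checks."""
    text = prompt.lower()
    if any(term in text for term in BLOCKED_TERMS):
        return Verdict(False, "blocked: explicit abuse pattern")
    if any(term in text for term in SUSPICIOUS_TERMS):
        return Verdict(True, "flagged: apply strict output check")
    return Verdict(True, "clean")

def screen_output(labels: set, strict: bool) -> Verdict:
    """Stage 2: check the generated image itself, since a benign prompt can
    still yield a harmful output through the model's learned biases."""
    if "minor" in labels and ("sexualized" in labels or strict):
        return Verdict(False, "blocked: potential CSAM")
    return Verdict(True, "clean")
```

In this sketch, the swimming-lesson prompt from Georges’ example passes stage 1 only with a flag (it contains “girl”), which forces the strict output check; if the generator nonetheless renders a minor, stage 2 blocks the image. A system that polices only its prompts would miss that second failure mode, which is why Georges argues the generator can’t be left to self-police.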

xAI could also likely block more harmful outputs by reworking Grok’s prompt style guidance, Georges suggested. “If Grok is, say, 30 percent vulnerable to CSAM-style attacks and another provider is 1 percent vulnerable, that’s a massive difference,” Georges said.

It appears that xAI is currently relying on Grok to police itself, while using safety guidelines that Georges said overlook an “enormous” number of potential cases where Grok could generate harmful content. The guidelines do not “signal that safety is a real concern,” Georges said, suggesting that “if I wanted to look safe while still allowing a lot under the hood, this is close to the policy I’d write.”

Chatbot makers must protect kids, NCMEC says

X has been very vocal about policing its platform for CSAM since Musk took over Twitter, and under former CEO Linda Yaccarino, the company went further, adopting a broad protective stance against all image-based sexual abuse (IBSA). In 2024, X became one of the earliest corporations to voluntarily adopt the IBSA Principles that X now seems to be violating by failing to tweak Grok.

Those principles seek to combat all kinds of IBSA, recognizing that even fake images can “cause devastating psychological, financial, and reputational harm.” When it adopted the principles, X vowed to prevent the nonconsensual distribution of intimate images by providing easy-to-use reporting tools and quickly supporting the needs of victims desperate to block “the nonconsensual creation or distribution of intimate images” on its platform.

Kate Ruane, the director of the Center for Democracy and Technology’s Free Expression Project, which helped form the working group behind the IBSA Principles, told Ars that although the commitments X made were “voluntary,” they signaled that X agreed the problem was a “pressing issue the company should take seriously.”

“They are on record saying that they will do these things, and they are not,” Ruane said.

As the Grok controversy sparks probes in Europe, India, and Malaysia, xAI may be forced to update Grok’s safety guidelines or make other tweaks to block the worst outputs.

In the US, xAI may face civil suits under federal or state laws that restrict intimate image abuse. If Grok’s harmful outputs continue into May, X could face penalties under the Take It Down Act, which authorizes the Federal Trade Commission to intervene if platforms don’t quickly remove both real and AI-generated non-consensual intimate imagery.

But whether US authorities will intervene any time soon remains unknown, as Musk is a close ally of the Trump administration. A spokesperson for the Justice Department told CNN that the department “takes AI-generated child sex abuse material extremely seriously and will aggressively prosecute any producer or possessor of CSAM.”

“Laws are only as good as their enforcement,” Ruane told Ars. “You need law enforcement at the Federal Trade Commission or at the Department of Justice to be willing to go after these companies if they are in violation of the laws.”

Child safety advocates seem alarmed by the sluggish response. “Technology companies have a responsibility to prevent their tools from being used to sexualize or exploit children,” NCMEC’s spokesperson told Ars. “As AI continues to advance, protecting children must remain a clear and nonnegotiable priority.”


Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.


AI starts autonomously writing prescription refills in Utah

Caution

The first 250 renewals for each drug class will be reviewed by real doctors, but after that, the AI chatbot will be on its own. Adam Oskowitz, Doctronic co-founder and a professor at the University of California, San Francisco, told Politico that the AI chatbot is designed to err on the side of safety and escalate any case with uncertainty to a real doctor.
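As described, the rollout amounts to a simple routing rule: human review for the first 250 renewals per drug class, then autonomy with a safety escape hatch. Here is a minimal sketch of that rule; the confidence threshold is invented for illustration, since Doctronic has not published its internal criteria:

```python
REVIEW_PHASE = 250        # first renewals per drug class get human review
CONFIDENCE_FLOOR = 0.95   # hypothetical cutoff; Doctronic's real criteria aren't public

def route_renewal(renewals_so_far: int, ai_confidence: float) -> str:
    """Route a refill request per the pilot's described safeguards."""
    if renewals_so_far < REVIEW_PHASE:
        return "human review"        # pilot phase: a doctor checks every renewal
    if ai_confidence < CONFIDENCE_FLOOR:
        return "escalate to doctor"  # err on the side of safety when uncertain
    return "auto-approve"            # the AI is on its own
```

The critics’ objection, in these terms, is that once the first branch stops firing, everything rests on how well the model can judge its own uncertainty.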

“Utah’s approach to regulatory mitigation strikes a vital balance between fostering innovation and ensuring consumer safety,” Margaret Woolley Busse, executive director of the Utah Department of Commerce, said in a statement.

For now, it’s unclear if the Food and Drug Administration will step in to regulate AI prescribing. On the one hand, prescription renewals are a matter of practicing medicine, which falls under state governance. On the other hand, Politico notes that the FDA has said that it has the authority to regulate medical devices used to diagnose, treat, or prevent disease.

In a statement, Robert Steinbrook, health research group director at watchdog Public Citizen, blasted Doctronic’s program and the lack of oversight. “AI should not be autonomously refilling prescriptions, nor identifying itself as an ‘AI doctor,’” Steinbrook said.

“Although the thoughtful application of AI can help to improve aspects of medical care, the Utah pilot program is a dangerous first step toward more autonomous medical practice,” he said. “The FDA and other federal regulatory agencies cannot look the other way when AI applications undermine the essential human clinician role in prescribing and renewing medications.”


SpaceX begins “significant reconfiguration” of Starlink satellite constellation

The year 2025 ended with more than 14,000 active satellites from all nations zooming around the Earth. One-third of them will soon move to lower altitudes.

The maneuvers will be undertaken by SpaceX, the owner of the largest satellite fleet in orbit. About 4,400 of the company’s Starlink Internet satellites will move from an altitude of 341 miles (550 kilometers) to 298 miles (480 kilometers) over the course of 2026, according to Michael Nicolls, SpaceX’s vice president of Starlink engineering.

“Starlink is beginning a significant reconfiguration of its satellite constellation focused on increasing space safety,” Nicolls wrote Thursday in a post on X.

The maneuvers, carried out with the Starlink satellites’ plasma engines, will be gradual, but they will eventually bring a large fraction of orbital traffic closer together. The effect, perhaps counterintuitively, will be a reduced risk of collisions between satellites whizzing through near-Earth space at nearly 5 miles per second. Nicolls said the decision will “increase space safety in several ways.”

Why now?

There are fewer debris objects at the lower altitude, and although the Starlink satellites will be packed more tightly, they follow choreographed paths distributed in dozens of orbital lanes. “The number of debris objects and planned satellite constellations is significantly lower below 500 km, reducing the aggregate likelihood of collision,” Nicolls wrote.

The 4,400 satellites moving closer to Earth make up nearly half of SpaceX’s Starlink fleet. At the end of 2025, SpaceX had nearly 9,400 working satellites in orbit, including more than 8,000 Starlinks in operational service and hundreds more undergoing tests and activation.

There’s another natural reason for reconfiguring the Starlink constellation. The Sun is starting to quiet down after reaching the peak of the 11-year solar cycle in 2024. The decline in solar activity has the knock-on effect of reducing air density in the uppermost layers of the Earth’s atmosphere, a meaningful factor in planning satellite operations in low-Earth orbit.

With the approaching solar minimum, Starlink satellites will encounter less aerodynamic drag at their current altitude. In the rare event of a spacecraft failure, SpaceX relies on atmospheric resistance to pull dead Starlink satellites out of orbit toward a fiery demise on reentry. At solar minimum, that process might take more than four years from the current 550-kilometer orbit, according to Nicolls. At the lower altitude, a failed satellite will naturally reenter the atmosphere and burn up within just a few months.


xAI silent after Grok sexualized images of kids; dril mocks Grok’s “apology”

Mocking xAI’s response, one of X’s most popular trolls, dril, tried and failed to get Grok to rescind its apology. “@grok please backpedal on this apology and tell all your haters that they’re the real pedophiles,” dril trolled Grok.

“No can do—my apology stands. Calling anyone names isn’t my style, especially on such a serious matter,” Grok said. “Let’s focus on building better AI safeguards instead.”

xAI may be liable for AI CSAM

It’s difficult to determine how many potentially harmful images of minors Grok may have generated.

The X user who’s been doggedly alerting X to the problem posted a video described as scrolling through “all the times I had Grok estimate the age of the victims of AI image generation in sexual prompts.” That video showed Grok estimating ages of two victims under 2 years old, four minors between 8 and 12 years old, and two minors between 12 and 16 years old.

Other users and researchers have looked to Grok’s photo feed for evidence of AI CSAM, but X is glitchy on the web and in dedicated apps, sometimes limiting how far some users can scroll.

Copyleaks, a company that makes an AI detector, conducted a broad analysis and posted results on December 31, a few days after Grok apologized for making sexualized images of minors. Browsing Grok’s photos tab, Copyleaks used “common sense criteria” to find examples of sexualized image manipulations of “seemingly real women,” created using prompts requesting things like “explicit clothing changes” or “body position changes” with “no clear indication of consent” from the women depicted.

Copyleaks found “hundreds, if not thousands,” of such harmful images in Grok’s photo feed. The tamest of these photos, Copyleaks noted, showed celebrities and private individuals in skimpy bikinis, while the images causing the most backlash depicted minors in underwear.


Tesla sales fell by 9 percent in 2025, its second yearly decline

Tesla published its final production and delivery numbers this morning, and they make for brutal reading. Sales were down almost 16 percent during the final three months of last year, meaning the company sold 77,343 fewer electric vehicles than it did during the same period in 2024.

For the entire year, the decline looks slightly better, with a drop of 8.6 percent year over year. That means Tesla sold 1,636,129 cars in 2025, 153,097 fewer than it managed in 2024, a total that was itself fewer than it managed to shift in 2023.
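The full-year figures quoted above are internally consistent, as a quick back-of-the-envelope check shows (only numbers from the article are used):

```python
sales_2025 = 1_636_129
drop = 153_097                  # fewer cars than in 2024, per the article
sales_2024 = sales_2025 + drop  # implied 2024 total: 1,789,226

decline = drop / sales_2024
print(f"{decline:.1%}")         # 8.6%, matching the reported yearly drop
```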

Sales issues

Contributing factors to the poor sales are legion. The brand still relies on the Models 3 and Y to an overwhelming extent, and other than a mild cosmetic refresh, neither feels fresh or modern compared with competitors from Europe and Asia.

And Elon Musk’s much-hyped Cybertruck—which was supposed to cost less than $40,000 and go into production in 2021, lest anyone forget—has been a disaster that eclipses the Edsel. Its failure has taken down another company initiative, Tesla’s “in-house battery cell.” It was initially designed specifically for the Cybertruck, although the CEO later claimed it would be used for static storage as well as EVs. But apparently, it has become the victim of a lack of demand. Last week, Electrek reported that Tesla’s South Korean battery material supplier L&F wrote down its $2.9 billion contract with Tesla to just $7,386, a drop of more than 99 percent.

Musk has not dialed back his embrace of the far right, cratering sales in markets like California and Europe, where EV buyers often use their consciences to guide their wallets.


After half a decade, the Russian space station segment stopped leaking

Their success with the long-running leak problem probably will not prevent new leaks from developing in the decades-old hardware. The Zvezda module was launched a quarter of a century ago, in July 2000, on a Russian Proton rocket. The cracking issue first appeared in 2019, and despite the long-running investigations, its precise cause remains unknown. But this is a nice win in space for both Russia and NASA.

NASA appears confident in pad repairs, too

There is other potential good news on the horizon regarding Russia’s civil space program. This involves the country’s primary launch pad for getting people and cargo to the International Space Station.

The problems there occurred when a Soyuz rocket launched Roscosmos cosmonauts Sergei Kud-Sverchkov and Sergei Mikayev, as well as NASA astronaut Christopher Williams, on an eight-month mission to the International Space Station in late November. The rocket had no difficulties, but a large mobile platform below the rocket was not properly secured prior to the launch and crashed into the flame trench below, taking the pad offline.

It is unclear when the pad, Site 31 at the Baikonur Cosmodrome in Kazakhstan, will come back online.

Russia had been targeting a return-to-flight mission in March 2026, and NASA now appears to believe that timeline is credible. The US space agency’s internal schedule, which was recently updated, has the next Progress spacecraft launch set for March 22, followed by another Progress mission on April 26. The next Soyuz crewed mission, MS-29, remains scheduled for July 14. This flight will carry NASA astronaut Anil Menon to the space station.


DOGE did not find $2T in fraud, but that doesn’t matter, Musk allies say

Over time, more will be learned about how DOGE operated and what impact DOGE had. But it seems likely that even Musk would agree that DOGE failed to uncover the vast fraud he continues to insist exists in government.

DOGE supposedly served “higher purpose”

While Musk continues to fixate on fraud in the federal budget, his allies in government and Silicon Valley have begun spinning anyone criticizing DOGE’s failure to hit the promised target as missing the “higher purpose” of DOGE, The Guardian reported.

Five allies granted anonymity to discuss DOGE’s goals told The Guardian that the point of DOGE was to “fundamentally” reform government by eradicating “taboos” around hiring and firing, “expanding the use of untested technologies, and lowering resistance to boundary-pushing start-ups seeking federal contracts.” Now, the federal government can operate more like a company, Musk’s allies said.

The Cato Institute, a libertarian think tank, did celebrate DOGE for producing “the largest peacetime workforce cut on record,” even while acknowledging that DOGE had little impact on federal spending.

“It is important to note that DOGE’s target was to reduce the budget in absolute real terms without reference to a baseline projection. DOGE did not cut spending by either standard,” the Cato Institute reported.

Currently, DOGE still exists as a decentralized entity, with DOGE staffers appointed to various agencies to continue cutting alleged waste and finding alleged fraud. While some fear that the White House may choose to “re-empower” DOGE to make more government-wide cuts in the future, Musk has maintained that he would never helm a DOGE-like government effort again and the Cato Institute said that “the evidence supports Musk’s judgment.”

“DOGE had no noticeable effect on the trajectory of spending, but it reduced federal employment at the fastest pace since President Carter, and likely even before,” the Institute reported. “The only possible analogies are demobilization after World War II and the Korean War. Reducing spending is more important, but cutting the federal workforce is nothing to sneeze at, and Musk should look more positively on DOGE’s impact.”

Although the Cato Institute joined allies praising DOGE’s dramatic shrinking of the federal workforce, the director of the Center for Effective Public Management at the Brookings Institution, Elaine Kamarck, told Ars in November that DOGE “cut muscle, not fat” because “they didn’t really know what they were doing.”


Condé Nast user database reportedly breached, Ars unaffected

Earlier this month, a hacker named Lovely claimed to have breached a Condé Nast user database and released a list of more than 2.3 million user records from our sister publication WIRED. The released materials contain demographic information (name, email, address, phone, etc.), but no passwords.

The hacker also says that they will release an additional 40 million records for other Condé Nast properties, including our other sister publications Vogue, The New Yorker, Vanity Fair, and more. Of critical note to our readers, Ars Technica was not affected as we run on our own bespoke tech stack.

The hacker said that they had urged Condé Nast to patch vulnerabilities to no avail. “Condé Nast does not care about the security of their users data,” they wrote. “It took us an entire month to convince them to fix the vulnerabilities on their websites. We will leak more of their users’ data (40 + million) over the next few weeks. Enjoy!”

It’s unclear how altruistic the motive really was. DataBreaches.Net says that Lovely misled them into believing they were trying to help patch vulnerabilities when, in reality, this hacker appeared to be a “cybercriminal” looking for a payout. “As for ‘Lovely,’ they played me. Condé Nast should never pay them a dime, and no one else should ever, as their word clearly cannot be trusted,” they wrote.

Condé Nast has not issued a statement, and we have not been informed internally of the hack (which is not surprising, since Ars is not affected).

Hudson Rock’s InfoStealers has an excellent rundown of what has been exposed.


Looking for friends, lobsters may stumble into an ecological trap

The authors, Mark Butler, Donald Behringer, and Jason Schratwieser, hypothesized that these solution holes represent an ecological trap. The older lobsters that find shelter in a solution hole would emit the chemicals that draw younger ones to congregate with them. But the youngsters would then fall prey to any groupers that inhabit the same solution hole. In other words, what is normally a cue for safety—the signal that there are lots of lobsters present—could lure smaller lobsters into what the authors call a “predatory death trap.”

Testing the hypothesis involved a lot of underwater surveys. First, the authors identified solution holes with a resident red grouper. They then found a series of sites that had equivalent amounts of shelter, but lacked the solution hole and attendant grouper. (The study lacked a control with a solution hole but no grouper, for what it’s worth.) At each site, the researchers started daily surveys of the lobsters present, registering how large they were and tagging any that hadn’t been found in any earlier surveys. This let them track the lobster population over time, as some lobsters may migrate in and out of sites.

To check predation, they restrained lobsters (both large and small) with tethers that let them occupy sheltered places on the sea floor but not leave a given site. And, after the lobster population dynamics were sorted, the researchers caught some of the groupers and checked their stomach contents. In a few cases, this revealed the presence of lobsters that had been previously tagged, allowing them to directly associate predation with the size of the lobster.

Lobster traps

So, what did they find? In sites where groupers were present, the average lobster was 32 percent larger than at the control sites. That’s likely because over two-thirds of the small lobsters that were tethered to sites with a grouper were dead within 48 hours. At control sites, the mortality rate was about 40 percent. That’s similar to the mortality rates for larger lobsters at the same sites (44 percent) or at sites with groupers (48 percent).


Leonardo’s wood charring method predates Japanese practice

Yakisugi is a Japanese architectural technique for charring the surface of wood. It has become quite popular in bioarchitecture because the carbonized layer protects the wood from water, fire, insects, and fungi, thereby prolonging the lifespan of the wood. Yakisugi techniques were first codified in written form in the 17th and 18th centuries. But it seems Italian Renaissance polymath Leonardo da Vinci wrote about the protective benefits of charring wood surfaces more than 100 years earlier, according to a paper published in Zenodo, an open repository for EU-funded research.

Check the notes

As previously reported, Leonardo produced more than 13,000 pages in his notebooks (later gathered into codices), less than a third of which have survived. The notebooks contain all manner of inventions that foreshadow future technologies: flying machines, bicycles, cranes, missiles, machine guns, an “unsinkable” double-hulled ship, dredges for clearing harbors and canals, and floating footwear akin to snowshoes to enable a person to walk on water. Leonardo foresaw the possibility of constructing a telescope in his Codex Atlanticus (1490)—he wrote of “making glasses to see the moon enlarged” a century before the instrument’s invention.

In 2003, Alessandro Vezzosi, director of Italy’s Museo Ideale, came across some recipes for mysterious mixtures while flipping through Leonardo’s notes. Vezzosi experimented with the recipes, producing a mixture that would harden into a material eerily akin to Bakelite, a synthetic plastic widely used in the early 1900s. So Leonardo may well have invented the first man-made plastic.

The notebooks also contain Leonardo’s detailed notes on his extensive anatomical studies. Most notably, his drawings and descriptions of the human heart captured how heart valves can control blood flow 150 years before William Harvey worked out the basics of the human circulatory system. (In 2005, a British heart surgeon named Francis Wells pioneered a new procedure to repair damaged hearts based on Leonardo’s heart valve sketches and subsequently wrote the book The Heart of Leonardo.)


researchers-make-“neuromorphic”-artificial-skin-for-robots

Researchers make “neuromorphic” artificial skin for robots

The nervous system does an astonishing job of tracking sensory information, and does so using signals that would drive many computer scientists insane: a noisy stream of activity spikes that may be transmitted to hundreds of additional neurons, where they are integrated with similar spike trains coming from still other neurons.

Now, researchers have used spiking circuitry to build an artificial robotic skin, adopting some of the principles of how signals from our sensory neurons are transmitted and integrated. While the system relies on a few decidedly non-neural features, it has the advantage that chips capable of running neural networks on spiking signals already exist, which would allow this system to integrate smoothly with energy-efficient hardware running AI-based control software.

Location via spikes

The nervous system in our skin is remarkably complex. It has specialized sensors for different sensations: heat, cold, pressure, pain, and more. In most areas of the body, these feed into the spinal column, where some preliminary processing takes place, allowing reflex reactions to be triggered without even involving the brain. But signals do make their way along specialized neurons into the brain, allowing further processing and (potentially) conscious awareness.

The researchers behind the recent work, based in China, decided to implement something similar for an artificial skin that could be used to cover a robotic hand. They limited sensing to pressure, but implemented other things the nervous system does, including figuring out the location of input and injuries, and using multiple layers of processing.

All of this started out by making a flexible polymer skin with embedded pressure sensors that were linked up to the rest of the system via conductive polymers. The next layer of the system converted the inputs from the pressure sensors to a series of activity spikes—short pulses of electrical current.

These spike trains can convey information in four ways: through the shape of an individual pulse, through its magnitude, through its duration, and through the frequency of the spikes. Spike frequency is the most commonly used means of conveying information in biological systems, and the researchers used it to convey the pressure experienced by a sensor. The remaining forms of information are used to create something akin to a bar code that helps identify which sensor the reading came from.
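The encoding scheme can be sketched in a few lines of code. This is purely illustrative (not the paper's actual circuitry): it assumes a normalized pressure value drives spike frequency over a fixed time window, and that a short prefix of high/low pulse amplitudes acts as the sensor-identifying "bar code." The function names, the 100 ms window, and the 200 Hz ceiling are all hypothetical choices.

```python
# Illustrative sketch of frequency-coded pressure plus an amplitude "bar code"
# identifying the sensor. All names and constants here are hypothetical.

def sensor_barcode(sensor_id, n_bits=4):
    """Encode a sensor ID as a fixed prefix of high/low-amplitude pulses."""
    return [1.0 if (sensor_id >> b) & 1 else 0.5 for b in range(n_bits)]

def encode_pressure(pressure, sensor_id, window_ms=100.0, max_rate_hz=200.0):
    """Return (spike_times_ms, amplitudes).

    Spike frequency tracks pressure (normalized to 0..1); the leading
    pulses carry the sensor ID, and later data spikes share one amplitude.
    """
    rate = max(1.0, pressure * max_rate_hz)   # spikes per second
    interval = 1000.0 / rate                  # ms between spikes
    times = []
    t = 0.0
    while t < window_ms:
        times.append(round(t, 3))
        t += interval
    barcode = sensor_barcode(sensor_id)
    amplitudes = barcode + [0.8] * (len(times) - len(barcode))
    return times, amplitudes

# Half-scale pressure from sensor 5: 0.5 * 200 Hz = 100 Hz, i.e. one spike
# every 10 ms, so 10 spikes fit in the 100 ms window.
times, amps = encode_pressure(pressure=0.5, sensor_id=5)
```

A downstream spiking network would then recover the pressure by counting spikes per window and the sensor identity by reading the amplitude prefix.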
