robotics

Lightening the load: AI helps exoskeleton work with different strides

One model to rule them all —

A model trained in a virtual environment does remarkably well in the real world.

Image of two people using powered exoskeletons to move heavy items around, as seen in the movie Aliens.

Right now, the software doesn’t do arms, so don’t go taking on any aliens with it.

20th Century Fox

Exoskeletons today look like something straight out of sci-fi. But the reality is that they are nowhere near as robust as their fictional counterparts. They’re quite wobbly, and making them work takes long hours of handcrafting the software policies that regulate how they operate—a process that has to be repeated for each individual user.

To bring the technology a bit closer to Avatar’s Skel Suits or Warhammer 40k power armor, a team at North Carolina State University’s Lab of Biomechatronics and Intelligent Robotics used AI to build the first one-size-fits-all exoskeleton that supports walking, running, and stair-climbing. Critically, its software adapts itself to new users with no need for any user-specific adjustments. “You just wear it and it works,” says Hao Su, an associate professor and co-author of the study.

Tailor-made robots

An exoskeleton is a robot you wear to aid your movements—it makes walking, running, and other activities less taxing, the same way an e-bike adds extra watts on top of those you generate yourself, making pedaling easier. “The problem is, exoskeletons have a hard time understanding human intentions, whether you want to run or walk or climb stairs. It’s solved with locomotion recognition: systems that recognize human locomotion intentions,” says Su.

Building those locomotion recognition systems currently relies on elaborate policies that define what actuators in an exoskeleton need to do in each possible scenario. “Let’s take walking. The current state of the art is we put the exoskeleton on you and you walk on a treadmill for an hour. Based on that, we try to adjust its operation to your individual set of movements,” Su explains.

Building handcrafted control policies and doing long human trials for each user makes exoskeletons super expensive, with prices reaching $200,000 or more. So, Su’s team used AI to automatically generate control policies and eliminate human training. “I think within two or three years, exoskeletons priced between $2,000 and $5,000 will be absolutely doable,” Su claims.

His team hopes these savings will come from developing the exoskeleton control policy using a digital model, rather than living, breathing humans.

Digitizing robo-aided humans

Su’s team started by building digital models of a human musculoskeletal system and an exoskeleton robot. Then they used three neural networks, one operating each component. The first ran the digitized model of a human skeleton, moved by simplified muscles. The second ran the exoskeleton model. The third was responsible for imitating motion—basically predicting how a human model would move while wearing the exoskeleton and how the two would interact with each other. “We trained all three neural networks simultaneously to minimize muscle activity,” says Su.

One problem the team faced is that exoskeleton studies typically use a performance metric based on metabolic rate reduction. “Humans, though, are incredibly complex, and it is very hard to build a model with enough fidelity to accurately simulate metabolism,” Su explains. Luckily, according to the team, reducing muscle activation correlates tightly with metabolic rate reduction, so using it as the optimization target kept the digital model’s complexity within reasonable limits. Training the entire human-exoskeleton system, with all three neural networks, took roughly eight hours on a single RTX 3090 GPU. And the results were record-breaking.
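
The paper’s code isn’t reproduced here, but the gist of the setup—three networks trained together in simulation, with muscle activation standing in for metabolic cost—can be sketched roughly as follows. Everything in this snippet (network sizes, the placeholder “simulator” data, the loss weighting) is an illustrative assumption, not the study’s actual implementation.

```python
# Rough sketch of co-training three networks to minimize muscle activity.
# All dimensions, data, and weights are invented placeholders.
import torch
import torch.nn as nn

class MLP(nn.Module):
    def __init__(self, in_dim, out_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, out_dim),
        )

    def forward(self, x):
        return self.net(x)

# Three cooperating networks: a muscle-driven human model, an exoskeleton
# controller, and a motion-imitation network predicting the combined motion.
human_net = MLP(in_dim=60, out_dim=20)       # muscle excitations
exo_net = MLP(in_dim=60, out_dim=2)          # exoskeleton actuator torques
imitation_net = MLP(in_dim=22, out_dim=12)   # predicted joint kinematics

optimizer = torch.optim.Adam(
    list(human_net.parameters()) + list(exo_net.parameters()) +
    list(imitation_net.parameters()), lr=3e-4)

for step in range(1000):
    state = torch.randn(128, 60)        # placeholder for simulator states
    ref_motion = torch.randn(128, 12)   # placeholder reference kinematics

    muscles = torch.sigmoid(human_net(state))   # activations in [0, 1]
    torques = exo_net(state)
    motion = imitation_net(torch.cat([muscles, torques], dim=-1))

    # Match the reference motion while penalizing muscle activity, the
    # proxy the team used in place of metabolic cost.
    loss = ((motion - ref_motion) ** 2).mean() + 0.1 * (muscles ** 2).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```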

Bridging the sim-to-real gap

After the neural networks had developed the control policy for the digital exoskeleton model in simulation, Su’s team simply copied that policy over to a real controller running a real exoskeleton. Then, they tested how an exoskeleton trained this way would work with 20 different participants. The average metabolic rate reduction was over 24 percent in walking, over 13 percent in running, and 15.4 percent in stair climbing—all record numbers, meaning their exoskeleton beat every other exoskeleton ever made in each category.

This was achieved without needing any tweaks to fit it to individual gaits. But the neural networks’ magic didn’t end there.

“The problem with traditional, handcrafted policies was that [they were] just telling it ‘if walking is detected do one thing; if walking faster is detected do another thing.’ These were [a mix of] finite state machines and switch controllers. We introduced end-to-end continuous control,” says Su. In practice, continuous control meant the exoskeleton could follow the human body as it made smooth transitions between activities—from walking to running, from running to climbing stairs, and so on—with no abrupt mode switching.
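
To make the distinction concrete, here is a toy contrast between the two control styles. The thresholds, torque values, and the tiny random “policy” below are invented for illustration and have nothing to do with the team’s actual controller.

```python
# Toy contrast: a handcrafted switch controller vs. a continuous learned policy.
import numpy as np

def switch_controller(gait_speed: float) -> float:
    """Finite-state approach: classify the activity, then apply that mode's
    fixed assistance torque. Transitions between modes are abrupt."""
    if gait_speed < 0.2:
        return 0.0     # standing: no assistance
    elif gait_speed < 2.0:
        return 5.0     # "walking" mode (arbitrary torque units)
    return 12.0        # "running" mode

rng = np.random.default_rng(0)
w_in, w_out = rng.normal(size=(8, 16)), rng.normal(size=16)

def continuous_controller(sensor_window: np.ndarray) -> float:
    """End-to-end approach: a learned policy maps recent sensor readings
    straight to a torque, so assistance varies smoothly with the gait."""
    return float(np.tanh(sensor_window @ w_in) @ w_out)

for speed in np.linspace(0.5, 2.5, 5):
    # The handcrafted controller jumps from 5.0 to 12.0 the instant the
    # estimated speed crosses 2.0 m/s; the learned policy has no such seam.
    print(f"{speed:.2f} m/s -> torque {switch_controller(speed):.1f}")

print(continuous_controller(rng.normal(size=8)))   # one smooth-policy output
```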

“In terms of software, I think everyone will be using this neural network-based approach soon,” Su claims. To improve the exoskeletons in the future, his team wants to make them quieter, lighter, and more comfortable.

But the plan is also to make them work for people who need them the most. “The limitation now is that we tested these exoskeletons with able-bodied participants, not people with gait impairments. So, what we want to do is something they did in another exoskeleton study at Stanford University. We would take a one-minute video of you walking, and based on that, we would build a model to individualize our general model. This should work well for people with impairments like knee arthritis,” Su claims.

Nature, 2024.  DOI: 10.1038/s41586-024-07382-4


Cats playing with robots proves a winning combo in novel art installation

The feline factor —

Cat Royale project explores what it takes to trust a robot to look after beloved pets.

Cat with the robot arm in the Cat Royale installation

A kitty named Clover prepares to play with a robot arm in the Cat Royale “multi-species” science/art installation.

Blast Theory – Stephen Daly

Cats and robots are a winning combination, as evidenced by all those videos of kitties riding on Roombas. And now we have Cat Royale, a “multispecies” live installation in which three cats regularly “played” with a robot over 12 days, carefully monitored by human operators. Created by computer scientists from the University of Nottingham in collaboration with artists from a group called Blast Theory, the installation debuted at the World Science Festival in Brisbane, Australia, last year and is now a touring exhibit. The accompanying YouTube video series recently won a Webby Award, and a paper outlining the insights gleaned from the experience was similarly voted best paper at the recent ACM Conference on Human Factors in Computing Systems (CHI ’24).

“At first glance, the project is about designing a robot to enrich the lives of a family of cats by playing with them,” said co-author Steve Benford of the University of Nottingham, who led the research. “Under the surface, however, it explores the question of what it takes to trust a robot to look after our loved ones and potentially ourselves.” While cats might love Roombas, not all animal encounters with robots are positive: Guide dogs for the visually impaired can get confused by delivery robots, for example, while the rise of lawn-mowing robots can have a negative impact on hedgehogs, per Benford et al.

Blast Theory and the scientists first held a series of exploratory workshops to ensure the installation and robotic design would take into account the welfare of the cats. “Creating a multispecies system—where cats, robots, and humans are all accounted for—takes more than just designing the robot,” said co-author Eike Schneiders of Nottingham’s Mixed Reality Lab about the primary takeaway from the project. “We had to ensure animal well-being at all times, while simultaneously ensuring that the interactive installation engaged the (human) audiences around the world. This involved consideration of many elements, including the design of the enclosure, the robot, and its underlying systems, the various roles of the humans-in-the-loop, and, of course, the selection of the cats.”

Based on those discussions, the team set about building the installation: a bespoke enclosure that would be inhabited by three cats for six hours a day over 12 days. The lucky cats were named Ghostbuster, Clover, and Pumpkin—a parent and two offspring to ensure the cats were familiar with each other and comfortable sharing the enclosure. The enclosure was tricked out to essentially be a “utopia for cats,” per the authors, with perches, walkways, dens, a scratching post, a water fountain, several feeding stations, a ball run, and litter boxes tucked away in secluded corners.

(l-r) Clover, Pumpkin, and Ghostbuster spent six hours a day for 12 days in the installation.

E. Schneiders et al., 2024

As for the robot, the team chose the Kinova Gen3 lite robot arm, and the associated software was trained on over 7,000 videos of cats. A decision engine gave the robot autonomy and proposed activities for specific cats. A human operator then used an interface control system to instruct the robot to execute the movements. The robotic arm’s two-finger gripper was augmented with custom 3D-printed attachments so that the robot could manipulate various cat toys and accessories.
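
In outline, the pipeline the team describes—an autonomous decision engine proposing play, with a human vetting every move before the arm acts—might look something like the sketch below. The activity list, function names, and approval logic are invented for illustration.

```python
# Illustrative human-in-the-loop flow; names and logic are placeholders.
import random

ACTIVITIES = ["dangle feather toy", "roll ball down the run", "offer treat"]

def decision_engine(cat_name: str) -> str:
    """Autonomously propose an activity tailored to a specific cat."""
    return random.choice(ACTIVITIES)     # stand-in for the trained model

def operator_approves(proposal: str, cat_name: str) -> bool:
    """A human operator vets every proposal before the arm moves."""
    return True                          # stand-in for the control interface

def execute_with_arm(proposal: str, cat_name: str) -> None:
    print(f"Robot arm: {proposal} for {cat_name}")

for cat in ["Ghostbuster", "Clover", "Pumpkin"]:
    proposal = decision_engine(cat)
    if operator_approves(proposal, cat):
        execute_with_arm(proposal, cat)
```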

Each cat/robot interaction was evaluated for a “happiness score” based on the cat’s level of engagement, body language, and so forth. Eight cameras monitored the cat and robot activities, and that footage was subsequently remixed and edited into daily YouTube highlight videos and, eventually, an eight-hour film.


Exploration-focused training lets robotics AI immediately handle new tasks

Exploratory —

Maximum Diffusion Reinforcement Learning focuses training on end states, not process.

A woman performs maintenance on a robotic arm.

boonchai wedmakawand

Reinforcement-learning algorithms in systems like ChatGPT or Google’s Gemini can work wonders, but they usually need hundreds of thousands of shots at a task before they get good at it. That’s why it’s always been hard to transfer this performance to robots. You can’t let a self-driving car crash 3,000 times just so it can learn crashing is bad.

But now a team of researchers at Northwestern University may have found a way around this problem. “That is what we think is going to be transformative in the development of the embodied AI in the real world,” says Thomas Berrueta, who led the development of Maximum Diffusion Reinforcement Learning (MaxDiff RL), an algorithm tailored specifically for robots.

Introducing chaos

The problem with deploying most reinforcement-learning algorithms in robots starts with the built-in assumption that the data they learn from is independent and identically distributed. The independence, in this context, means the value of one variable does not depend on the value of another variable in the dataset—when you flip a coin two times, getting tails on the second attempt does not depend on the result of your first flip. Identical distribution means that the probability of seeing any specific outcome is the same. In the coin-flipping example, the probability of getting heads is the same as getting tails: 50 percent for each.

In virtual, disembodied systems, like YouTube recommendation algorithms, getting such data is easy because most of the time it meets these requirements right off the bat. “You have a bunch of users of a website, and you get data from one of them, and then you get data from another one. Most likely, those two users are not in the same household, they are not highly related to each other. They could be, but it is very unlikely,” says Todd Murphey, a professor of mechanical engineering at Northwestern.

The problem is that, if those two users were related to each other and were in the same household, it could be that the only reason one of them watched a video was that their housemate watched it and told them to watch it. This would violate the independence requirement and compromise the learning.

“In a robot, getting this independent, identically distributed data is not possible in general. You exist at a specific point in space and time when you are embodied, so your experiences have to be correlated in some way,” says Berrueta. To solve this, his team designed an algorithm that pushes robots to be as randomly adventurous as possible to get the widest set of experiences to learn from.
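
A quick way to see the difference is to compare coin flips with a toy random walk standing in for a robot’s trajectory; the strong correlation between consecutive samples is exactly what breaks the i.i.d. assumption. This is a generic illustration, not anything from the MaxDiff RL paper.

```python
# Toy illustration: i.i.d. coin flips vs. correlated "embodied" samples.
import numpy as np

rng = np.random.default_rng(42)

# Coin flips: each sample is independent of the previous one.
flips = rng.integers(0, 2, size=10_000)

# A toy "robot": each position depends on where it was a moment ago, so
# consecutive samples are strongly correlated.
positions = np.cumsum(rng.normal(size=10_000))

def lag1_autocorr(x: np.ndarray) -> float:
    x = x - x.mean()
    return float(np.dot(x[:-1], x[1:]) / np.dot(x, x))

print("coin flips:", round(lag1_autocorr(flips), 3))      # close to 0
print("robot walk:", round(lag1_autocorr(positions), 3))  # close to 1
```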

Two flavors of entropy

The idea itself is not new. Nearly two decades ago, people in AI figured out algorithms, like Maximum Entropy Reinforcement Learning (MaxEnt RL), that worked by randomizing actions during training. “The hope was that when you take as diverse [a] set of actions as possible, you will explore more varied sets of possible futures. The problem is that those actions do not exist in a vacuum,” Berrueta claims. Every action a robot takes has some kind of impact on its environment and on its own condition—disregarding those impacts completely often leads to trouble. To put it simply, an autonomous car that was teaching itself how to drive using this approach could elegantly park in your driveway but would be just as likely to hit a wall at full speed.

To solve this, Berrueta’s team moved away from maximizing the diversity of actions and went for maximizing the diversity of state changes. Robots powered by MaxDiff RL did not flail their robotic joints at random to see what that would do. Instead, they conceptualized goals like “can I reach this spot ahead of me” and then tried to figure out which actions would take them there safely.

Berrueta and his colleagues achieved that through something called ergodicity, a mathematical concept that says a point in a moving system will eventually visit all parts of the space the system moves in. Basically, MaxDiff RL encouraged the robots to achieve every available state in their environment. And the results of the first tests in simulated environments were quite surprising.
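
Schematically—and this is a paraphrase for orientation, not the exact objective from the MaxDiff RL paper—the shift amounts to moving the exploration bonus from the policy’s actions to the distribution of states the robot actually visits:

```latex
% MaxEnt RL: reward randomness in the actions the policy takes
J_{\mathrm{MaxEnt}}(\pi) = \mathbb{E}_{\pi}\!\left[\sum_{t} \Big( r(s_t, a_t) + \alpha\, \mathcal{H}\big(\pi(\cdot \mid s_t)\big) \Big)\right]

% MaxDiff RL (schematic): reward diversity in the states the agent visits,
% where d^{\pi}(s) is the distribution of states encountered under policy \pi
J_{\mathrm{MaxDiff}}(\pi) \approx \mathbb{E}_{\pi}\!\left[\sum_{t} r(s_t, a_t)\right] + \alpha\, \mathcal{H}\big(d^{\pi}(s)\big)
```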

Racing pool noodles

“In reinforcement learning there are standard benchmarks that people run their algorithms on so we can have a good way of comparing different algorithms on a standard framework,” says Allison Pinosky, a researcher at Northwestern and co-author of the MaxDiff RL study. One of those benchmarks is a simulated swimmer: a three-link body resting on the ground in a viscous environment that needs to learn to swim as fast as possible in a certain direction.

In the swimmer test, MaxDiff RL outperformed two other state-of-the-art reinforcement learning algorithms (NN-MPPI and SAC). These two needed several resets to figure out how to move the swimmers. To complete the task, they followed a standard AI learning process divided into a training phase, where an algorithm goes through multiple failed attempts to slowly improve its performance, and a testing phase, where it tries to perform the learned task. MaxDiff RL, by contrast, nailed it, immediately adapting its learned behaviors to the new task.

The earlier algorithms ended up failing to learn because they got stuck trying the same options and never progressed to the point where they could learn that alternatives work. “They experienced the same data repeatedly because they were locally doing certain actions, and they assumed that was all they could do and stopped learning,” Pinosky explains. MaxDiff RL, on the other hand, continued changing states, exploring, getting richer data to learn from, and finally succeeded. And because, by design, it seeks to achieve every possible state, it can potentially complete all possible tasks within an environment.

But does this mean we can take MaxDiff RL, upload it to a self-driving car, and let it out on the road to figure everything out on its own? Not really.


Expedition uses small underwater drone to discover 100-year-old shipwreck

The sunken place —

The underwater drone Hydrus can capture georeferenced 4K video and images simultaneously.

3D model of a 100-year-old shipwreck off the western coast of Australia. Credit: Daniel Adams, Curtin University HIVE.

A small underwater drone called Hydrus has located the wreckage of a 100-year-old coal hulk in the deep waters off the coast of Western Australia. Based on the data the drone captured, scientists were able to use photogrammetry to virtually “rebuild” the 210-foot ship into a 3D model (above). You can explore an interactive 3D rendering of the wreckage here.

The use of robotic submersibles to locate and explore historic shipwrecks is well established. For instance, researchers relied on remotely operated vehicles (ROVs) to study the wreckage of the HMS Terror, part of Captain Sir John Franklin‘s doomed 1845 Arctic expedition to cross the Northwest Passage. In 2007, a pair of brothers (printers based in Norfolk) discovered the wreck of the Gloucester, which ran aground on a sandbank off the coast of Norfolk in 1682 and sank within the hour. Among the passengers was James Stuart, Duke of York and future King James II of England, who escaped in a small boat just before the ship sank.

In 2022, the Falklands Maritime Heritage Trust and National Geographic announced the discovery of British explorer Sir Ernest Shackleton‘s ship Endurance. In 1915, Shackleton and his crew were stranded for months on the Antarctic ice after the ship was crushed by pack ice and sank into the freezing depths of the Weddell Sea. The wreckage was found nearly 107 years later, 3,008 meters down, roughly four miles (6.4 km) south of the ship’s last recorded position. The wreck was in pristine condition partly because of the lack of wood-eating microbes in those waters. In fact, the lettering “ENDURANCE” was clearly visible in shots of the stern.

And just last year, an ROV was used to verify the discovery of the wreckage of a schooner barge called Ironton, which collided with a Great Lakes freighter called Ohio in Lake Huron’s infamous “Shipwreck Alley” in 1894. The wreck was so well-preserved in the frigid waters of the Great Lakes that its three masts were still standing and its rigging still attached. That discovery could help resolve unanswered questions about the ship’s final hours.

Deployment of one of Advanced Navigation's Micro Autonomous Underwater Vehicles (AUV).

Advanced Navigation

According to Advanced Navigation, there are some 3 million undiscovered shipwrecks around the world—1,819 recorded wrecks lying off the coast of Western Australia alone. That includes the Rottnest ship graveyard just southwest of Rottnest Island, with a seabed some 50 to 200 meters below sea level (164 to 656 feet). The island is known for the number of ships wrecked near its shore since the 17th century. The Rottnest graveyard is more of a dump site for scuttling obsolete ships, at least 47 of which would be considered historically significant.

However, this kind of deep ocean exploration can be both time-consuming and expensive, particularly at depths of more than 50 meters (164 feet). Hydrus was designed to reduce the cost of this kind of ocean exploration significantly. One person can deploy the drone because of its compact size, so there is no need for large vessels or complicated launch systems. And Hydrus can capture georeferenced 4K video and still images at the same time. Once this latest expedition realized they had found a shipwreck, they were able to deploy a pair of the drones to take a complete survey in just five hours.

Hydrus captured this footage of the 210-foot wreck of a 19th-century coal hulk. Credit: Advanced Navigation

Ross Anderson, curator of the Western Australian Museum, was able to identify the wreck as an iron coal hulk once used in Fremantle Port to service steamships, probably built in the 1860s–1890s and scuttled in the graveyard sometime in the 1920s. The geolocation data provided to scientists at Curtin University HIVE enabled them to use photogrammetry to convert the drone’s imagery into a 3D digital model. “It can’t be overstated how much this structure in data assists with constraining feature matching and reducing the processing time, especially in large datasets,” Andrew Woods, a professor at the university, said in a statement.
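
The benefit Woods describes is easy to picture: if every frame carries a position, the photogrammetry software only needs to hunt for matching features between images taken near one another rather than comparing every possible pair. The sketch below is a generic illustration of that idea; the coordinates and distance cutoff are invented.

```python
# Toy example: georeferencing prunes the image pairs considered for matching.
import itertools
import math

images = [  # (name, easting_m, northing_m) -- placeholder georeferenced shots
    ("img_001", 0.0, 0.0), ("img_002", 1.5, 0.2),
    ("img_003", 40.0, 35.0), ("img_004", 41.0, 34.5),
]

def candidate_pairs(images, max_distance_m=5.0):
    """Only attempt feature matching between images captured close together."""
    for (a, ax, ay), (b, bx, by) in itertools.combinations(images, 2):
        if math.hypot(ax - bx, ay - by) <= max_distance_m:
            yield a, b

print(list(candidate_pairs(images)))   # 2 pairs instead of all 6 combinations
```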

The expedition team’s next target using the Hydrus technology is the wreck of the luxury passenger steamship SS Koombana, which disappeared somewhere off Port Hedland en route to Broome during a tropical cyclone in 1912, with 150 on board presumed to have perished. The only wreckage recovered at the time was part of a starboard bow planking, a stateroom door, a panel from the promenade deck, and a few air tanks. There were a couple of reports in the 1980s of “magnetic anomalies” in the seabed off Bedout Island, part of the route the Koombana would have taken. But despite several deep-water expeditions in the early 2010s, to date the actual shipwreck has not been found.

Listing image by Advanced Navigation


This four-legged robot learned parkour to better navigate obstacles

teaching an old robot new tricks —

Latest improvements to ANYmal make it better at navigating rubble and tricky terrain.

ANYmal can do parkour and walk across rubble. The quadrupedal robot went back to school and has learned a lot.

Meet ANYmal, a four-legged, dog-like robot designed by researchers at ETH Zürich in Switzerland in hopes of using such robots for search and rescue on building sites or in disaster areas, among other applications. Now ANYmal has been upgraded to perform rudimentary parkour moves, aka “free running.” Human parkour enthusiasts are known for their remarkably agile, acrobatic feats, and while ANYmal can’t match those, the robot successfully jumped across gaps, climbed up and down large obstacles, and crouched low to maneuver under an obstacle, according to a recent paper published in the journal Science Robotics.

The ETH Zürich team introduced ANYmal’s original approach to reinforcement learning back in 2019 and enhanced its proprioception (the ability to sense movement, action, and location) three years later. Just last year, the team showcased a trio of customized ANYmal robots, tested in environments as close to the harsh lunar and Martian terrain as possible. As previously reported, robots capable of walking could assist future rovers and mitigate the risk of damage from sharp edges or loss of traction in loose regolith. Every robot had a lidar sensor, but each was specialized for particular functions and still flexible enough to cover for the others—if one glitches, the rest can take over its tasks.

For instance, the Scout model’s main objective was to survey its surroundings using RGB cameras. This robot also used another imager to map regions and objects of interest using filters that let through different areas of the light spectrum. The Scientist model had the advantage of an arm featuring a MIRA (Metrohm Instant Raman Analyzer) and a MICRO (microscopic imager). The MIRA was able to identify chemicals in materials found on the surface of the demonstration area based on how they scattered light, while the MICRO on its wrist imaged them up close. The Hybrid was more of a generalist, helping out the Scout and the Scientist with measurements of scientific targets such as boulders and craters.

As advanced as ANYmal and similar-legged robots have become in recent years, significant challenges still remain before they are as nimble and agile as humans and other animals. “Before the project started, several of my researcher colleagues thought that legged robots had already reached the limits of their development potential,” said co-author Nikita Rudin, a graduate student at ETH Zurich who also does parkour. “But I had a different opinion. In fact, I was sure that a lot more could be done with the mechanics of legged robots.”

The quadrupedal robot ANYmal practices parkour in a hall at ETH Zürich.

ETH Zurich / Nikita Rudin

Parkour is quite complex from a robotics standpoint, making it an ideal aspirational task for the Swiss team’s next step in ANYmal’s capabilities. Parkour can involve large obstacles, requiring the robot “to perform dynamic maneuvers at the limits of actuation while accurately controlling the motion of the base and limbs,” the authors wrote. To succeed, ANYmal must be able to sense its environment and adapt to rapid changes, selecting a feasible path and sequence of motions from its programmed skill set. And it has to do all that in real time with limited onboard computing.

The Swiss team’s overall approach combines machine learning with model-based control. They split the task into three interconnected components: a perception module that processes the data from onboard cameras and LiDAR to estimate the terrain; a locomotion module with a programmed catalog of movements to overcome specific terrains; and a navigation module that guides the locomotion module in selecting which skills to use to navigate different obstacles and terrain using intermediate commands.
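
In outline, one control tick flows through the three modules like this; the class names, data shapes, and skill labels below are illustrative stand-ins, not the ETH Zürich code.

```python
# Schematic perception -> navigation -> locomotion pipeline (placeholders only).
from dataclasses import dataclass

@dataclass
class TerrainEstimate:
    height_map: list          # local elevation grid fused from cameras + lidar

@dataclass
class Command:
    skill: str                # e.g. "walk", "jump", "climb_up", "crouch"
    target: tuple             # intermediate waypoint for the locomotion module

def perception_module(camera_frames, lidar_points) -> TerrainEstimate:
    """Fuse onboard sensing into a local terrain estimate."""
    return TerrainEstimate(height_map=[[0.0]])            # placeholder

def navigation_module(terrain: TerrainEstimate, goal: tuple) -> Command:
    """Pick which skill to use next and where to apply it."""
    return Command(skill="walk", target=goal)              # placeholder decision

def locomotion_module(command: Command, terrain: TerrainEstimate) -> list:
    """Turn the selected skill into joint-level actions."""
    return [0.0] * 12                                       # placeholder targets

# One control tick: perception feeds navigation, navigation steers locomotion.
terrain = perception_module(camera_frames=None, lidar_points=None)
action = locomotion_module(navigation_module(terrain, goal=(2.0, 0.0)), terrain)
print(action)
```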

Rudin, for example, used machine learning to teach ANYmal some new skills through trial and error, namely, scaling obstacles and figuring out how to climb up and jump back down from them. The robot’s camera and artificial neural network enable it to pick the best maneuvers based on its prior training. Another graduate student, Fabian Jenelten, used model-based control to teach ANYmal how to recognize and negotiate gaps in piles of rubble, augmented with machine learning so the robot could have more flexibility in applying known movement patterns to unexpected situations.

ANYmal on a civil defense training ground.

ETH Zurich / Fabian Jenelten

Among the tasks ANYmal was able to perform was jumping from one box to a neighboring box up to 1 meter away. This required the robot to approach the gap sideways, place its feet as close as possible to the edge, and then use three legs to jump while extending the fourth to land on the other box. It could then transfer two diagonal legs before bringing the final leg across the gap. This meant ANYmal could recover from any missteps and slippage by transferring its weight between the non-leaping legs.

ANYmal was also able to climb down from a 1-meter-high box to reach a target on the ground, as well as climb back up onto the box. It can also crouch down to reach a target on the other side of a narrow passage, lowering its base and adapting its gait accordingly. The team also tested ANYmal’s walking abilities, in which the robot successfully traversed stairs, slopes, random small obstacles, and so forth.

ANYmal still has its limitations when it comes to navigating real-world environments, whether it be a parkour course or the debris of a collapsed building. For instance, the authors note that they have yet to test the scalability of their approach to more diverse and unstructured scenarios that incorporate a wider variety of obstacles; the robot was only tested in a few select scenarios. “It remains to be seen how well these different modules can generalize to completely new scenarios,” they wrote. The approach is also time-consuming since it requires eight neural networks that must be tuned separately, and some of the networks are interdependent, so changing one means changing and retraining the others as well.

Still, ANYmal “can now evolve in complex scenes where it must climb and jump on large obstacles while selecting a nontrivial path toward its target location,” the authors wrote. Thus, “by aiming to match the agility of free runners, we can better understand the limitations of each component in the pipeline from perception to actuation, circumvent those limits, and generally increase the capabilities of our robots.”

Science Robotics, 2024. DOI: 10.1126/scirobotics.adi7566  (About DOIs).

Listing image by ETH Zurich / Nikita Rudin


Building robots for “Zero Mass” space exploration

A robot performing construction on the surface of the moon against the black backdrop of space.

Sending 1 kilogram to Mars will set you back roughly $2.4 million, judging by the cost of the Perseverance mission. If you want to pack up supplies and gear for every conceivable contingency, you’re going to need a lot of those kilograms.

But what if you skipped almost all that weight and only took a do-it-all Swiss Army knife instead? That’s exactly what scientists at NASA Ames Research Center and Stanford University are testing with robots, algorithms, and highly advanced building materials.

Zero mass exploration

“The concept of zero mass exploration is rooted in self-replicating machines, an engineering concept John von Neumann conceived in the 1940s,” says Kenneth C. Cheung, a NASA Ames researcher. He was involved in the new study published recently in Science Robotics covering self-reprogrammable metamaterials—materials that do not exist in nature and have the ability to change their configuration on their own. “It’s the idea that an engineering system can not only replicate, but sustain itself in the environment,” he adds.

Based on this concept, Robert A. Freitas Jr. in the 1980s proposed a self-replicating interstellar spacecraft called the Von Neumann probe that would visit a nearby star system, find resources to build a copy of itself, and send this copy to another star system. Rinse and repeat.

“The technology of reprogrammable metamaterials [has] advanced to the point where we can start thinking about things like that. It can’t make everything we need yet, but it can make a really big chunk of what we need,” says Christine E. Gregg, a NASA Ames researcher and the lead author of the study.

Building blocks for space

One of the key problems with Von Neumann probes was that taking elements found in the soil of alien worlds and processing them into actual engineering components would be resource-intensive and require huge amounts of energy. The NASA Ames team solved that by using prefabricated “voxels”—standardized, reconfigurable building blocks.

The system derives its operating principles from the way nature works on a very fundamental level. “Think how biology, one of the most scalable systems we have ever seen, builds stuff,” says Gregg. “It does that with building blocks. There are on the order of 20 amino acids which your body uses to make proteins to make 200 different types of cells and then combines trillions of those cells to make organs as complex as my hair and my eyes. We are using the same strategy,” she adds.

To demo this technology, they built a set of 256 of those blocks—extremely strong 3D structures made with a carbon-fiber-reinforced polymer called StattechNN-40CF. Each block had fastening interfaces on every side that could be used to reversibly attach them to other blocks and form a strong truss structure.

A 3×3 truss structure made with these voxels had an average failure load of 900 Newtons, which means it could hold over 90 kilograms despite being incredibly light itself (its density is just 0.0103 grams per cubic centimeter). “We took these voxels out in backpacks and built a boat, a shelter, a bridge you could walk on. The backpacks weighed around 18 kilograms. Without technology like that, you wouldn’t even think about fitting a boat and a bridge in a backpack,” says Cheung. “But the big thing about this study is that we implemented this reconfigurable system autonomously with robots,” he adds.
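
For reference, the “over 90 kilograms” figure follows directly from the quoted failure load under standard gravity:

```latex
m \approx \frac{F}{g} = \frac{900\ \text{N}}{9.81\ \text{m/s}^2} \approx 92\ \text{kg}
```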


Robo-dinosaur scares grasshoppers to shed light on why dinos evolved feathers

What’s the point of half a wing? —

The feathers may have helped dinosaurs frighten and flush out prey.

Grasshoppers, beware! Robopteryx is here to flush you from your hiding place.

Jinseok Park, Piotr Jablonski et al., 2024

Scientists in South Korea built a robotic dinosaur and used it to startle grasshoppers to learn more about why dinosaurs evolved feathers, according to a recent paper published in the journal Scientific Reports. The results suggest that certain dinosaurs may have employed a hunting strategy in which they flapped their proto-wings to flush out prey, and this behavior may have led to the evolution of larger and stiffer feathers.

As reported previously, feathers are the defining feature of birds, but that wasn’t always the case. For millions of years, various species of dinosaurs sported feathers, some of which have left behind fossilized impressions. For the most part, the feathers we’ve found have been attached to smaller dinosaurs, many of them along the lineage that gave rise to birds—although in 2012, scientists discovered three nearly complete skeletons of a “gigantic” feathered dinosaur species, Yutyrannus huali, related to the ancestors of Tyrannosaurus rex.

Various types of dino-feathers have been found in the fossil record over the last 30 years, such as so-called pennaceous feathers (present in most modern birds). These were found on distal forelimbs of certain species like Caudipteryx, serving as proto-wings that were too small to use for flight, as well as around the tip of the tail as plumage. Paleontologists remain unsure of the function of pennaceous feathers—what use could there be for half a wing? A broad range of hypotheses have been proposed: foraging or hunting, pouncing or immobilizing prey, brooding, gliding, or wing-assisted incline running, among others.

Mounted Caudipteryx zoui skeleton at the Löwentor Museum in Stuttgart, Germany.

Co-author Jinseok Park of Seoul National University in South Korea and colleagues thought the pennaceous feathers might have been used to flush out potential prey from hiding places so they could be more easily caught. It’s a strategy employed by certain modern bird species, like roadrunners, and typically involves a visual display of the plumage on wings and tails.

There is evidence that this flush-pursuit hunting strategy evolved multiple times. According to Park et al., it’s based on the “rare enemy effect,” i.e., certain prey (like insects) wouldn’t be capable of responding to different predators in different ways and would not respond effectively to an unusual flush-pursuit strategy. Rather than escaping a predator, the insects fly toward their own demise. “The use of plumage to flush prey could have increased the frequency of chase after escaping prey, thus amplifying the importance of plumage in drag-based or lift-based maneuvering for a successful pursuit,” the authors wrote.  “This, in turn, could have led to the larger and stiffer feathers for faster movements and more visual flush displays.”

To test their hypothesis, Park et al. constructed a robot dinosaur they dubbed “Robopteryx,” using Caudipteryx as a model. They built the robot’s body out of aluminum, with the proto-wings and tail plumage made from black paper and plastic ribbing. The head was made of black polystyrene, the wing folds were made of black elastic stocking, and the whole contraption was covered in felt. They scanned the scientific literature on Caudipteryx to determine resting posture angles and motion ranges. The motion of the forelimbs and tail was controlled by a mechanism controlled by custom software running on a mobile phone.

Robopteryx faces off against a grasshopper and prepares to flap its wings.

Jinseok Park, Piotr Jablonski et al., 2024

Park et al. then conducted experiments with the robot performing motions consistent with a flush display using the band-winged grasshopper (a likely prey), which has relatively simple neural circuits. They placed a wooden stick with scale marks next to the grasshopper and photographed it to record its body orientation relative to the robot, and then made the robot’s forelimbs and tail flap to mimic a flush display. If the grasshopper escaped, they ended the individual test; if the grasshopper didn’t respond, they slowly moved the robot closer and closer using a long beam. The team also attached electrodes to grasshoppers in the lab to measure neural spikes as the insects were shown projected Caudipteryx animations of a flush display on a flat-screen monitor.
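
The trial procedure reduces to a simple loop: flap, check for an escape, otherwise edge closer. The sketch below captures only that logic; the distances, step size, and per-flap escape probability are invented and are not calibrated to the reported results.

```python
# Toy reconstruction of the trial loop described above (all numbers invented).
import random

def flush_display_triggers_escape(feathered: bool) -> bool:
    """Toy per-flap escape chance; purely illustrative, not the measured rates."""
    return random.random() < (0.3 if feathered else 0.1)

def run_trial(feathered: bool, start_distance_m: float = 2.0,
              min_distance_m: float = 0.3, step_m: float = 0.2) -> str:
    distance = start_distance_m
    while distance > min_distance_m:
        if flush_display_triggers_escape(feathered):   # flap forelimbs and tail
            return "fled"
        distance -= step_m        # slowly slide Robopteryx closer on the beam
    return "no response"

random.seed(1)
print(sum(run_trial(True) == "fled" for _ in range(100)), "of 100 trials ended in escape")
```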

The results: around half the grasshoppers fled in response to Robopteryx without feathers, compared to over 90 percent when feathered wings flapped. They also measured stronger neural signals when feathers were present. For Park et al., this is solid evidence in support of their hypothesis that a flush-pursuit hunting strategy may have been a factor in the evolution of pennaceous feathers. “Our results emphasize the significance of considering sensory aspects of predator-prey interactions in the studies of major evolutionary innovations among predatory species,” the authors wrote.

Not everyone is convinced by these results. “It seems to me to be very unlikely that a structure as complex as a pennaceous feather would evolve for such a specific behavioral role,” Steven Salisbury of the University of Queensland in Australia, who was not involved with the research, told New Scientist. “I am sure there are lots of ways to scare grasshoppers other than to flap some feathers at it. You can have feathers to scare grasshoppers and you can have them to insulate and incubate eggs. They’re good for display, the stabilization of body position when running, and, of course, for gliding and powered flight. Feathers help for all sorts of things.”

Scientific Reports, 2024. DOI: 10.1038/s41598-023-50225-x  (About DOIs).


VR Robots: Enhancing Robot Functions With VR Technology

 

VR robots are slowly moving into the mainstream with applications that go beyond the usual manufacturing processes. Robots have been in use for years in industrial settings, where they perform automated, repetitive tasks, but their practical use outside the factory has been quite limited. Today, however, we are seeing some of them in the consumer sector, delivering robotic solutions that require customization.

Augmented by other technologies such as AR, VR, and AI, robots show improved efficiency and safety in accomplishing more complex processes. With VR, humans can supervise the robots remotely to enhance their performance. VR technology provides human operators with a more immersive environment. This enables them to interact with robots better and view the actual surroundings of the robots in real time. Consequently, this opens vast opportunities for practical uses that enhance our lives.

Real-Life Use Cases of VR Robots

1. TX SCARA: Automated Restocking of Refrigerated Shelves

Developed by Telexistence, TX SCARA is powered by three main technologies—robotics, artificial intelligence, and virtual reality. This robot specializes in restocking refrigerated shelves in stores. It relies on GORDON, its AI system, to know when and where to place products. When issues arise due to external factors or system miscalculation, Telexistence employees use VR headsets to control the robot remotely and address the problem.

TX SCARA is present in 300 FamilyMart stores in Japan. Plans to expand its use to convenience stores in the United States are already underway. With TX SCARA capable of working 24/7 at a pace of up to 1,000 bottles or cans per day, it can replace up to three hours of human work each day in a single store.

2. Reachy: A Robot That Shows Emotions

Reachy gives VR robots a human side. An expressive humanoid platform, Reachy mimics human expressions and body language. It conveys human emotions through its antennas and motions.

Reachy

Users operate Reachy remotely using VR equipment that shows the environment surrounding the robot. They can move Reachy’s head, arms, and hands to manipulate objects and interact with people around the robot. They can also control Reachy’s mobile base to move around and explore its environment.

Since it can be programmed with Python and ROS to perform almost any task, its use cases are virtually limitless. It has applications across various sectors, such as research (to explore new frontiers in robotics), healthcare (to replace mechanical tasks), retail (to enhance customer experiences), education (to make learning more immersive), and many others. Reachy is also fully customizable, with many different configurations, modules, and hardware options available.
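
Since Reachy is programmable in Python, a first script might look roughly like the one below. The import path, class, and joint names are recalled from Pollen Robotics’ reachy-sdk and should be treated as assumptions; check the current SDK documentation before relying on them.

```python
# Assumed API: names follow (from memory) Pollen Robotics' reachy-sdk and may
# differ in current releases -- verify against the official documentation.
from reachy_sdk import ReachySDK

reachy = ReachySDK(host="192.168.0.42")   # placeholder IP address of the robot

reachy.turn_on("r_arm")                   # power the right arm's motors

# The kind of low-level motion a VR operator's gestures get mapped onto:
# glance toward a point in front of the robot and raise the right forearm.
reachy.head.look_at(x=0.5, y=0.0, z=0.2, duration=1.0)
reachy.r_arm.r_elbow_pitch.goal_position = -90   # degrees

reachy.turn_off_smoothly("r_arm")
```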

3. Robotic VR: Haptic Technology for Medical Care

A team of researchers co-led by the City University of Hong Kong has developed an advanced robotic VR system that has great potential for use in healthcare. Robotic VR, an innovative human-machine interface (HMI), can be used to perform medical procedures. This includes conducting swab tests and caring for patients with infectious diseases.

Doctors, nurses, and other health practitioners control the VR robot using a VR headset and flexible electronic skin that enables them to experience tactile sensations while interacting remotely with patients. This allows them to control and adjust the robot’s motion and strength as they collect bio-samples or provide nursing care. Robotic VR can help minimize the risk of infection and prevent contagion.

4. Skippy: Your Neighborhood Delivery Robot

Skippy elevates deliveries to a whole new level. Human operators, called Skipsters, control these VR robots remotely. They use VR headsets to supervise the robots as they move about the neighborhood. When you order food or groceries from a partner establishment, Skippy picks it up and delivers it to your doorstep. Powered by AI and controlled by Skipsters, the cute robot rolls through pedestrian paths while avoiding foot traffic and obstacles.

Skippy

You can now have Skippy deliver your food orders from a handful of restaurants in Minneapolis and Jacksonville. With its maker, Carbon Origins, planning to expand the fleet this year, it won’t be long until you spot a Skippy around your city.

Watch Out for More VR-Enabled Robots

Virtual reality is an enabling technology in robotics. By merging these two technologies, we’re bound to see more practical uses of VR-enabled robots in the consumer market. As the technologies become more advanced and the hardware required becomes more affordable, we can expect to see more VR robots that we can interact with as we go through our daily lives.

Developments in VR interface and robotics technology will eventually pave the way for advancements in the usability of VR robots in real-world applications.


Doosan Robotics’ collaborative robots mark annual sales of over 1,000 units, breaking domestic records

January 5, 2022

Doosan Robotics announced it has become the first company in South Korea to achieve an annual sales record of 1,000 units for collaborative robots (cobots).

Established in 2015, Doosan Robotics has manufactured cobots using proprietary technology and has maintained the position of number one market share holder in Korea since 2018.

Doosan Robotics has also performed remarkably in global markets, becoming the first Korean company to be named one of the world’s top five cobot manufacturers. Global sales now account for 70 percent of the company’s total sales, with demand continuing to increase from markets including North America and Western Europe. The company plans to establish subsidiaries in these regions to further accelerate growth.

The company also announced it has successfully raised an investment worth US$33.7 million from Praxis Capital Partners and Korea Investment Partners, with investors placing a high value on the company’s achievements and competitiveness. The funds will be used to expand its global sales base and strengthen R&D to attract additional partnerships, both global and domestic. The company also plans to pursue an initial public offering (IPO) with the ambition of becoming a global market leader in cobots for the manufacturing and service fields.

“We’re looking forward to expediting the growth of our business with the recent funds raised,” said William (Junghoon Ryu), CEO at Doosan Robotics. “We will further enhance the competitiveness of new products and software that are mounted with our proprietary technology and strive to attain the position as number one market share holder in the global cobot market,” he added.

Meanwhile, Doosan Robotics will be exhibiting at the Consumer Electronics Show (CES) 2022 from January 5–8. This will mark the first global reveal of the company’s latest service robots, including the unmanned modular robot café DR. PRESSO and the CES award-winning camera robot system, “New Inspiration. New Angle.” (NINA).



About the Author:

Jeremy is the Consumer Tech Reporter at TheCESBible.com
