Robots

Robot dogs armed with AI-aimed rifles undergo US Marines Special Ops evaluation

The future of warfare —

Quadrupeds being reviewed have automatic targeting systems but require human oversight to fire.

A still image of a robotic quadruped armed with a remote weapons system, captured from a video provided by Onyx Industries.

The United States Marine Forces Special Operations Command (MARSOC) is currently evaluating a new generation of robotic “dogs” developed by Ghost Robotics, with the potential to be equipped with gun systems from defense tech company Onyx Industries, reports The War Zone.

While MARSOC is testing Ghost Robotics’ quadrupedal unmanned ground vehicles (called “Q-UGVs” for short) for various applications, including reconnaissance and surveillance, it’s the possibility of arming them with weapons for remote engagement that may draw the most attention. But it’s not unprecedented: The US Marine Corps has also tested robotic dogs armed with rocket launchers in the past.

MARSOC currently has two armed Q-UGVs undergoing testing, as confirmed by Onyx Industries staff. Their gun systems are based on Onyx’s SENTRY remote weapon system (RWS), which features an AI-enabled digital imaging system that can automatically detect and track people, drones, or vehicles, reporting potential targets to a remote human operator who could be located anywhere in the world. The system maintains human-in-the-loop control for fire decisions; it cannot decide to fire autonomously.
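
The human-in-the-loop control the article describes can be sketched in a few lines of Python. This is a hypothetical illustration of the general pattern only; the names, threshold, and structure are invented and do not reflect Onyx's actual software:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    track_id: int
    category: str      # e.g., "person", "drone", "vehicle"
    confidence: float  # tracker confidence, 0.0 to 1.0

def report_targets(detections, min_confidence=0.8):
    """The automated side: filter tracked objects and forward
    candidates to the remote operator's display."""
    return [d for d in detections if d.confidence >= min_confidence]

def fire_decision(candidates, operator_approval):
    """Weapons release requires an explicit human command per target;
    the system never fires on its own, however confident the tracker is."""
    approved = []
    for d in candidates:
        if operator_approval.get(d.track_id, False):  # human-in-the-loop gate
            approved.append(d.track_id)
    return approved

detections = [Detection(1, "drone", 0.95), Detection(2, "vehicle", 0.70)]
candidates = report_targets(detections)   # only the high-confidence track
print(fire_decision(candidates, operator_approval={}))  # -> []
```

The key property is that the default path is always "do not fire": with no operator input at all, the decision function returns nothing.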

On LinkedIn, Onyx Industries shared a video of a similar system in action.

In a statement to The War Zone, MARSOC states that weaponized payloads are just one of many use cases being evaluated. MARSOC also clarifies that comments made by Onyx Industries to The War Zone regarding the capabilities and deployment of these armed robot dogs “should not be construed as a capability or a singular interest in one of many use cases during an evaluation.” The command further stresses that it is aware of and adheres to all Department of Defense policies concerning autonomous weapons.

The rise of robotic unmanned ground vehicles

An unauthorized video of a gun bolted onto a $3,000 Unitree robodog spread quickly on social media in July 2022 and prompted a response from several robotics companies.

Alexander Atamanov

The evaluation of armed robotic dogs reflects a growing interest in small robotic unmanned ground vehicles for military use. While unmanned aerial vehicles (UAVs) have been remotely delivering lethal force under human command for at least two decades, the rise of inexpensive robotic quadrupeds—some available for as little as $1,600—has led to a new round of experimentation with strapping weapons to their backs.

In July 2022, a video of a rifle bolted to the back of a Unitree robodog went viral on social media, eventually leading Boston Dynamics and other robot vendors to issue a pledge that October not to weaponize their robots (with notable exceptions for military uses). In April, we covered a Unitree Go2 robot dog with a flamethrower strapped to its back that was on sale to the general public.

The prospect of deploying armed robotic dogs, even with human oversight, raises significant questions about the future of warfare and the potential risks and ethical implications of increasingly autonomous weapons systems. There’s also the potential for backlash if similar remote weapons systems eventually end up used domestically by police. Such a concern would not be unfounded: In November 2022, we covered a decision by the San Francisco Board of Supervisors to allow the San Francisco Police Department to use lethal robots against suspects.

There’s also concern that the systems will become more autonomous over time. As The War Zone’s Howard Altman and Oliver Parken describe in their article, “While further details on MARSOC’s use of the gun-armed robot dogs remain limited, the fielding of this type of capability is likely inevitable at this point. As AI-enabled drone autonomy becomes increasingly weaponized, just how long a human will stay in the loop, even for kinetic acts, is increasingly debatable, regardless of assurances from some in the military and industry.”

While the technology is still in the early stages of testing and evaluation, Q-UGVs do have the potential to provide reconnaissance and security capabilities that reduce risks to human personnel in hazardous environments. But as armed robotic systems continue to evolve, it will be crucial to address ethical concerns and ensure that their use aligns with established policies and international law.


You can now buy a flame-throwing robot dog for under $10,000

burninating the countryside —

Thermonator, the first “flamethrower-wielding robot dog,” is completely legal in 48 US states.

The Thermonator robot flamethrower dog.

If you’ve been wondering when you’ll be able to order the flame-throwing robot that Ohio-based Throwflame first announced last summer, that day has finally arrived. The Thermonator, which Throwflame bills as “the first-ever flamethrower-wielding robot dog,” is now available for purchase. The price? $9,420.

Thermonator is a quadruped robot with an ARC flamethrower mounted to its back, fueled by gasoline or napalm. It features a one-hour battery, a 30-foot flame-throwing range, and Wi-Fi and Bluetooth connectivity for remote control through a smartphone.

It also includes a LIDAR sensor for mapping and obstacle avoidance, laser sighting, and first-person view (FPV) navigation through an onboard camera. The product appears to integrate a version of the Unitree Go2 robot quadruped that retails alone for $1,600 in its base configuration.

The Robot Dog With A Flamethrower | Thermonator

The company lists possible applications of the new robot as “wildfire control and prevention,” “agricultural management,” “ecological conservation,” “snow and ice removal,” and “entertainment and SFX.” But most of all, it sets things on fire in a variety of real-world scenarios.

  • Remote controlling the Thermonator robot flamethrower dog.

  • The Thermonator robot flamethrower dog.

  • The Thermonator robot flamethrower dog.

  • The Thermonator robot flamethrower dog.

Back in 2018, Elon Musk made the news for offering an official Boring Company flamethrower that reportedly sold 10,000 units in 48 hours. It sparked some controversy because flamethrowers can also double as weapons or potentially start wildfires.

In the US, flamethrowers are legally unregulated in 48 states and are not considered firearms by federal agencies. Restrictions exist in Maryland, where flamethrowers require a Federal Firearms License to own, and California, where the range of flamethrowers cannot exceed 10 feet.

Even so, to state the obvious, flamethrowers can easily burn both things and people, starting fires and wreaking havoc if not used safely. Accordingly, the Thermonator might be one Christmas present you should skip for little Johnny this year.


The best robot to search for life could look like a snake

Trying out the robot on a glacier.

Icy ocean worlds like Europa or Enceladus are some of the most promising locations for finding extra-terrestrial life in the Solar System because they host liquid water. But to determine if there is something lurking in their alien oceans, we need to get past ice cover that can be dozens of kilometers thick. Any robots we send through the ice would have to do most of the job on their own because communication with these moons takes as much as 155 minutes.

Researchers working on NASA Jet Propulsion Laboratory’s technology development project called Exobiology Extant Life Surveyor (EELS) might have a solution to both those problems. It involves using an AI-guided space snake robot. And they actually built one.

Geysers on Enceladus

The most popular idea to get through the ice sheet on Enceladus or Europa so far has been thermal drilling, a technique used for researching glaciers on Earth. It involves a hot drill that simply melts its way through the ice. “Lots of people work on different thermal drilling approaches, but they all have a challenge of sediment accumulation, which impacts the amount of energy needed to make significant progress through the ice sheet,” says Matthew Glinder, the hardware lead of the EELS project.

So, instead of drilling new holes in ice, the EELS team focuses on using ones that are already there. The Cassini mission discovered geyser-like jets shooting water into space from vents in the ice cover near Enceladus’ south pole. “The concept was you’d have a lander to land near a vent and the robot would move on the surface and down into the vent, search the vent, and through the vent go further down into the ocean,” says Matthew Robinson, the EELS project manager.

The problem was that the best Cassini images of the area where that lander would need to touch down have a resolution of roughly 6 meters per pixel, meaning major obstacles to landing could be undetected. To make things worse, those close-up images were monocular, which meant we could not properly figure out the topography. “Look at Mars. First we sent an orbiter. Then we sent a lander. Then we sent a small robot. And then we sent a big robot. This paradigm of exploration allowed us to get very detailed information about the terrain,” says Rohan Thakker, the EELS autonomy lead. “But it takes between seven to 11 years to get to Enceladus. If we followed the same paradigm, it would take a century,” he adds.

All-terrain snakes

To deal with unknown terrain, the EELS team built a robot that could go through almost anything—a versatile, bio-inspired, snake-like design about 4.4 meters long and 35 centimeters in diameter. It weighs about 100 kilograms (on Earth, at least). It’s made of 10 mostly identical segments. “Each of those segments share a combination of shape actuation and screw actuation that rotates the screws fitted on the exterior of the segments to propel the robot through its environment,” explains Glinder. By using those two types of actuators, the robot can move using what the team calls “skin propulsion,” which relies on the rotation of screws, or using one of various shape-based movements that rely on shape actuators. “Sidewinding is one of those gaits where you are just pressing the robot against the environment,” Glinder says.

The basic design also works on surfaces other than ice.

The standard sensor suite is fitted on the head and includes a set of stereo cameras providing a 360-degree viewing angle. There are also inertial measurement units (IMUs) that use gyroscopes to estimate the robot’s position, and lidar sensors. But it also has a sense of touch. “We are going to have torque force sensors in each segment. This way we will have direct torque plus direct force sensing at each joint,” explains Robinson. All this is supposed to let the EELS robot safely climb up and down Enceladus’ vents, hold in place in case of eruptions by pressing itself against the walls, and even navigate by touch alone if cameras and lidar don’t work.
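
The sensing hierarchy described above, with vision preferred and touch as a fallback, can be sketched as a simple mode selector. The logic below is a hypothetical illustration, not JPL's actual autonomy code:

```python
def choose_navigation_mode(cameras_ok: bool, lidar_ok: bool, touch_ok: bool) -> str:
    """Pick a sensing mode in priority order, degrading gracefully
    when sensors fail inside a dark, icy vent."""
    if cameras_ok and lidar_ok:
        return "vision+lidar"   # full mapping and obstacle avoidance
    if lidar_ok:
        return "lidar-only"     # geometry without visual texture
    if touch_ok:
        return "touch-only"     # navigate by per-joint force/torque sensing
    return "hold-in-place"      # brace against the vent walls and wait

print(choose_navigation_mode(False, False, True))  # -> touch-only
```

The last branch mirrors the robot's described eruption response: when it cannot sense its surroundings well enough to move safely, the safest behavior is to press against the walls and stay put.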

But perhaps the most challenging part of building the EELS robot was its brain.


Nvidia announces “moonshot” to create embodied human-level AI in robot form

Here come the robots —

As companies race to pair AI with general-purpose humanoid robots, Nvidia’s GR00T emerges.

An illustration of a humanoid robot created by Nvidia.

Nvidia

In sci-fi films, the rise of humanlike artificial intelligence often comes hand in hand with a physical platform, such as an android or robot. While the most advanced AI language models so far seem mostly like disembodied voices echoing from an anonymous data center, they might not remain that way for long. Some companies like Google, Figure, Microsoft, Tesla, Boston Dynamics, and others are working toward giving AI models a body. This is called “embodiment,” and AI chipmaker Nvidia wants to accelerate the process.

“Building foundation models for general humanoid robots is one of the most exciting problems to solve in AI today,” said Nvidia CEO Jensen Huang in a statement. Huang spent a portion of Nvidia’s annual GTC conference keynote on Monday going over Nvidia’s robotics efforts. “The next generation of robotics will likely be humanoid robotics,” Huang said. “We now have the necessary technology to imagine generalized human robotics.”

To that end, Nvidia announced Project GR00T, a general-purpose foundation model for humanoid robots. As a type of AI model itself, Nvidia hopes GR00T (which stands for “Generalist Robot 00 Technology” but sounds a lot like a famous Marvel character) will serve as an AI mind for robots, enabling them to learn skills and solve various tasks on the fly. In a tweet, Nvidia researcher Linxi “Jim” Fan called the project “our moonshot to solve embodied AGI in the physical world.”

AGI, or artificial general intelligence, is a poorly defined term that usually refers to hypothetical human-level AI (or beyond) that can learn any task a human could without specialized training. Given a capable enough humanoid body driven by AGI, one could imagine fully autonomous robotic assistants or workers. Of course, some experts think that true AGI is a long way off, so it’s possible that Nvidia’s goal is more aspirational than realistic. But that’s also what makes Nvidia’s plan a moonshot.

NVIDIA Robotics: A Journey From AVs to Humanoids.

“The GR00T model will enable a robot to understand multimodal instructions, such as language, video, and demonstration, and perform a variety of useful tasks,” wrote Fan on X. “We are collaborating with many leading humanoid companies around the world, so that GR00T may transfer across embodiments and help the ecosystem thrive.” We reached out to Nvidia researchers, including Fan, for comment but did not hear back by press time.

Nvidia is designing GR00T to understand natural language and emulate human movements, potentially allowing robots to learn coordination, dexterity, and other skills necessary for navigating and interacting with the real world like a person. And as it turns out, Nvidia says that making robots shaped like humans might be the key to creating functional robot assistants.

The humanoid key

Robotics startup Figure, an Nvidia partner, recently showed off its humanoid “Figure 01” robot.

Figure

So far, we’ve seen plenty of robotics platforms that aren’t human-shaped, including robot vacuum cleaners, autonomous weed pullers, industrial units used in automobile manufacturing, and even research arms that can fold laundry. So why focus on imitating the human form? “In a way, human robotics is likely easier,” said Huang in his GTC keynote. “And the reason for that is because we have a lot more imitation training data that we can provide robots, because we are constructed in a very similar way.”

That means that researchers can feed samples of training data captured from human movement into AI models that control robot movement, teaching them how to better move and balance themselves. Also, humanoid robots are particularly convenient because they can fit anywhere a person can, and we’ve designed a world of physical objects and interfaces (such as tools, furniture, stairs, and appliances) to be used or manipulated by the human form.

Along with GR00T, Nvidia also debuted a new computer platform called Jetson Thor, based on Nvidia’s Thor system-on-a-chip (SoC) from the new Blackwell GPU architecture, which it hopes will power this new generation of humanoid robots. The SoC reportedly includes a transformer engine capable of 800 teraflops of 8-bit floating-point AI computation for running models like GR00T.


Huge funding round makes “Figure” Big Tech’s favorite humanoid robot company

They’ve got an aluminum CNC machine, and they aren’t afraid to use it —

Investors Microsoft, OpenAI, Nvidia, Jeff Bezos, and Intel value Figure at $2.6B.

The Figure 01 and a few spare parts. Obviously they are big fans of aluminum.

Figure

Humanoid robotics company Figure AI announced it raised $675 million in a funding round from an all-star cast of Big Tech investors. The company, which aims to commercialize a humanoid robot, now has a $2.6 billion valuation. Participants in the latest funding round include Microsoft, the OpenAI Startup Fund, Nvidia, Jeff Bezos’ Bezos Expeditions, Parkway Venture Capital, Intel Capital, Align Ventures, and ARK Invest. With all these big-name investors, Figure is officially Big Tech’s favorite humanoid robotics company. The manufacturing industry is taking notice, too. In January, Figure even announced a commercial agreement with BMW to have robots work on its production line.

“In conjunction with this investment,” the press release reads, “Figure and OpenAI have entered into a collaboration agreement to develop next generation AI models for humanoid robots, combining OpenAI’s research with Figure’s deep understanding of robotics hardware and software. The collaboration aims to help accelerate Figure’s commercial timeline by enhancing the capabilities of humanoid robots to process and reason from language.”

With all this hype and funding, the robot must be incredible, right? Well, the company is new and only unveiled its first humanoid “prototype,” the “Figure 01,” in October. At that time, the company said it represented about 12 months of work. With veterans from “Boston Dynamics, Tesla, Google DeepMind, and Archer Aviation,” the company has a strong starting point.

  • Ok, it’s time to pick up a box, so get out your oversized hands and grab hold.

    Figure

  • Those extra-big hands seem to be the focus of the robot. They are just incredibly complex and look to be aiming at a 1:1 build of a human hand.

    Figure

  • Just look at everything inside those fingers. It looks like there are tendons of some kind.

    Figure

  • Not impressed with this “pooped your pants” walk cycle, which doesn’t really use the knees or ankles.

    Figure

  • A lot of the hardware appears to be waiting for software to use it, like the screen that serves as the robot’s face. It only seems to run a screen saver.

    Figure

The actual design of the robot appears to be solid aluminum and electrically actuated, aiming for an exact 1:1 match for a human. The website says the goal is a 5-foot 6-inch, 130-lb humanoid that can lift 44 pounds. That’s a very small form-over-function package to try and fit all these robot parts into. For alternative humanoid designs, you’ve got Boston Dynamics’ Atlas, which is more of a hulking beast thanks to the function-over-form design. There’s also the more purpose-built “Digit” from Agility Robotics, which has backward-bending bird legs for warehouse work, allowing it to bend down in front of a shelf without having to worry about the knees colliding with anything.

The best insight into the company’s progress is the official YouTube channel, which shows the Figure 01 robot doing a few tasks. The last video, from a few days ago, showed a robot doing a “fully autonomous” box-moving task at “16.7 percent” of normal human speed. For a bipedal robot, I have to say the walking is not impressive. The Figure 01 has a slow, timid shuffle that only lets it wobble forward at a snail’s pace. The walk cycle is almost entirely driven by the hips. The knees are bent the entire time and always out in front of the robot; the ankles barely move. It seems only to be able to walk in a straight line, and turning is a slow stop-and-spin-in-place motion that has the feet pedaling in place the entire time. The feet seem to move in a constant up-and-down motion even when the robot isn’t moving forward, almost as if foot planning just runs on a set timer for balance. It can walk, but it walks about as slowly and awkwardly as a robot can. A lot of the hardware seems built for software that isn’t ready yet.

Figure seems more focused on the hands than anything. The 01 has giant oversized hands that are a close match for a human’s, with five fingers, all with three joints each. In January, Figure posted a video of the robot working a Keurig coffee maker. That means flipping up the lid with a fingertip, delicately picking up an easily crushable plastic cup with two fingers, dropping it into the coffee maker, casually pushing the lid down with about three different fingers, and pressing the “go” button with a single finger. It’s impressive to not destroy the coffee maker or the K-cup, but that Keurig is still living a rough life—a few of the robot interactions incidentally lift one side or the other of the coffee maker off the table thanks to way too much force.

  • For some very delicate hand work, here’s the Figure 01 making coffee. They went and sourced a silver Keurig machine so this image only contains two colors, black and silver.

    Figure

  • Time to press the “go” button. Also, is that a wrist-mounted lidar puck for vision? Occasionally, flashes of light shoot out of it in the video.

    Figure

  • These hand close-ups are just incredible. I really do think they are tendon-actuated. You can also see all sorts of pads on the inside of the hand.

    Figure

  • I love the ridiculous T-pose it assumes while it waits for coffee.

    Figure

The video says the coffee task was performed via an “end-to-end neural network” using 10 hours of training time. Unlike walking, the hands really feel like they have a human influence when it comes to their movement. When the robot picks up the K-cup via a pinch of its thumb and index finger or goes to push a button, it also closes the other three fingers into a fist. There isn’t a real reason to move the three fingers that aren’t doing anything, but that’s what a human would do, so presumably, it’s in the training data. Closing the lid is interesting because I don’t think you could credit a single finger with the task—it’s just kind of a casual push using whatever fingers connect with the lid. The last clip of the video even shows the Figure 01 correcting a mistake—the K-cup doesn’t sit in the coffee maker correctly, and the robot recognizes this and can poke it around until it falls into place.

A lot of assembly line jobs are done at a station or sitting down, so the focus on hand dexterity makes sense. Boston Dynamics’ Atlas is way more impressive as a walking robot, but that’s also a multi-million dollar research bot that will never see the market. Figure’s goal, according to the press release, is to “bring humanoid robots into commercial operations as soon as possible.” The company openly posts a “master plan” on its website, which reads, “1) Build a feature-complete electromechanical humanoid. 2) Perform human-like manipulation. 3) Integrate humanoids into the labor force.” The robots are coming for our jobs.


A “robot” should be chemical, not steel, argues man who coined the word

Dispatch from 1935 —

Čapek: “The world needed mechanical robots, for it believes in machines more than it believes in life.”

In 1921, Czech playwright Karel Čapek and his brother Josef invented the word “robot” in a sci-fi play called R.U.R. (short for Rossum’s Universal Robots). As Evan Ackerman points out in IEEE Spectrum, Čapek wasn’t happy about how the term’s meaning evolved to denote mechanical entities, straying from his original concept of artificial human-like beings based on chemistry.

In a newly translated column called “The Author of the Robots Defends Himself,” published in Lidové Noviny on June 9, 1935, Čapek expresses his frustration about how his original vision for robots was being subverted. His arguments still apply to both modern robotics and AI. In this column, he referred to himself in the third person:

For his robots were not mechanisms. They were not made of sheet metal and cogwheels. They were not a celebration of mechanical engineering. If the author was thinking of any of the marvels of the human spirit during their creation, it was not of technology, but of science. With outright horror, he refuses any responsibility for the thought that machines could take the place of people, or that anything like life, love, or rebellion could ever awaken in their cogwheels. He would regard this somber vision as an unforgivable overvaluation of mechanics or as a severe insult to life.

This recently resurfaced article comes courtesy of a new English translation of Čapek’s play called R.U.R. and the Vision of Artificial Life accompanied by 20 essays on robotics, philosophy, politics, and AI. The editor, Jitka Čejková, a professor at the Chemical Robotics Laboratory in Prague, aligns her research with Čapek’s original vision. She explores “chemical robots”—microparticles resembling living cells—which she calls “liquid robots.”

“An assistant of inventor Captain Richards works on the robot the Captain has invented, which speaks, answers questions, shakes hands, tells the time and sits down when it’s told to.” – September 1928

In Čapek’s 1935 column, he clarifies that his robots were not intended to be mechanical marvels, but organic products of modern chemistry, akin to living matter. Čapek emphasizes that he did not want to glorify mechanical systems but to explore the potential of science, particularly chemistry. He refutes the idea that machines could replace humans or develop emotions and consciousness.

The author of the robots would regard it as an act of scientific bad taste if he had brought something to life with brass cogwheels or created life in the test tube; the way he imagined it, he created only a new foundation for life, which began to behave like living matter, and which could therefore have become a vehicle of life—but a life which remains an unimaginable and incomprehensible mystery. This life will reach its fulfillment only when (with the aid of considerable inaccuracy and mysticism) the robots acquire souls. From which it is evident that the author did not invent his robots with the technological hubris of a mechanical engineer, but with the metaphysical humility of a spiritualist.

The reason for the transition from chemical to mechanical in the public perception of robots isn’t entirely clear (though Čapek does mention a Russian film which went the mechanical route and was likely influential). The early 20th century was a period of rapid industrialization and technological advancement that saw the emergence of complex machinery and electronic automation, which probably influenced the public and scientific community’s perception of autonomous beings, leading them to associate the idea of robots with mechanical and electronic devices rather than chemical creations.

The 1935 piece is full of interesting quotes (you can read the whole thing in IEEE Spectrum or here), and we’ve grabbed a few highlights below that you can conveniently share with your robot-loving friends to blow their minds:

  • “He pronounces that his robots were created quite differently—that is, by a chemical path”
  • “He has learned, without any great pleasure, that genuine steel robots have started to appear”
  • “Well then, the author cannot be blamed for what might be called the worldwide humbug over the robots.”
  • “The world needed mechanical robots, for it believes in machines more than it believes in life; it is fascinated more by the marvels of technology than by the miracle of life.”

So it seems, over 100 years later, that we’ve gotten it wrong all along. Čapek’s vision, rooted in chemical synthesis and the philosophical mysteries of life, offers a different narrative from the predominant mechanical and electronic interpretation of robots we know today. But judging from what Čapek wrote, it sounds like he would be firmly against AI takeover scenarios. In fact, Čapek, who died in 1938, probably would have considered them impossible.


This robotic digger could construct the buildings of the future

Construction is a tough job, and in Europe there is a chronic shortage of workers to build the homes, schools, and roads we use every single day. So why not get a robot to do the hard work so we don’t have to?

That’s exactly what researchers at ETH Zurich’s Robotic Systems Lab in Switzerland are working on. They’ve trained an autonomous excavator to construct stone walls from boulders weighing several tonnes, without any human intervention. In the machine’s first assignment, it built a six-metre-high, 65-metre-long load-bearing wall. If scaled, the solution could pave the way for faster, more sustainable construction.

Using LiDAR sensors, the excavator autonomously draws a 3D map of the construction site and identifies existing building blocks and stones for the wall. Specifically designed tools and machine vision (the ability of a computer to see) enable the excavator to scan and grab large stones in its immediate environment. It can also register their approximate weight as well as their centre of gravity. 

An algorithm then determines the best position for each stone, and the excavator places each piece in the desired location to within a centimetre of accuracy. The autonomous machine can place 20 to 30 stones in a single consignment – about as many as one delivery could supply.
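
The scan-then-place pipeline can be illustrated with a toy greedy planner. Everything here is hypothetical (the stone data, the left-to-right heuristic, the names); ETH Zurich's actual planner optimizes full 3D stone poses far more carefully:

```python
# Stones as scanned by the excavator's LiDAR and machine vision,
# with estimated weight and footprint (invented example values).
stones = [
    {"id": "A", "weight_kg": 2400.0, "width_m": 1.2},
    {"id": "B", "weight_kg": 3100.0, "width_m": 1.5},
]

def plan_wall(stones, wall_length_m):
    """Greedily place scanned stones left to right along the wall base,
    heaviest first for stability, until the row is full.
    Returns (stone id, centre x-position) pairs."""
    placements, cursor = [], 0.0
    for s in sorted(stones, key=lambda s: -s["weight_kg"]):
        if cursor + s["width_m"] > wall_length_m:
            continue  # stone does not fit in the remaining span
        # Target position, rounded to centimetre accuracy as in the article.
        x = round(cursor + s["width_m"] / 2, 2)
        placements.append((s["id"], x))
        cursor += s["width_m"]
    return placements

print(plan_wall(stones, wall_length_m=3.0))  # -> [('B', 0.75), ('A', 2.1)]
```

The appeal of this kind of planning is that it works with whatever irregular stones happen to be on site, which is what lets HEAP build from locally sourced rubble instead of uniform bricks.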

The researchers designed digital blueprints for the robotic digger to follow. Credit: ETH Zurich

The digger, named HEAP, is a modified Menzi Muck M545 developed by the researchers to test the potential of autonomous machines for construction. Because HEAP is so precise, it opens up the possibility of using locally sourced stones and rubble for the construction of walls, instead of new material like bricks. 

The wall was constructed at an industrial park next to Zurich Airport, managed by Eberhard construction company. The firm is using the site, and various ETH Zurich technologies, to demonstrate ways to make construction more circular — by minimising waste to the greatest extent possible. 

The use of autonomous diggers has been on the cards for a while now, not just in Switzerland. In 2017, US startup Built Robotics was founded to bring robot diggers into the mainstream. At the time, CEO Noah Ready-Campbell predicted that fully autonomous equipment would become commonplace on construction sites before fully autonomous cars hit public roads. But the idea has yet to advance beyond the prototype stage.

Automation is easiest to implement on repetitive tasks with predictable outcomes — like in manufacturing assembly lines. But a construction site is a complex, messy environment where safety is paramount. Similar to autonomous cars, the world is simply not yet ready for the widespread deployment of autonomous diggers, cranes, and trucks.

However, there are other applications of robotics technologies in construction that are being implemented right now. For instance, UK startup hyperTunnel combines swarm robotics and AI to excavate tunnels up to 10 times faster than conventional methods. The proposed process involves injecting the lining of a tunnel into the ground and then removing the waste using a swarm of small autonomous robotic vehicles.

Another area of rapid growth is the construction of homes using giant 3D printers, like those developed by Danish company COBOD. In the UK, a 36-home housing development is currently being built this way. Its proponents claim the huge robots will build the homes faster, safer, and more sustainably than traditional methods.
