Robots

Startup set to brick $800 kids robot is trying to open source it first

Earlier this month, startup Embodied announced that it is going out of business and taking its Moxie robot with it. The $800 robots, aimed at providing emotional support for kids ages 5 to 10, would soon be bricked, the company said, because they can’t perform their core features without the cloud. Following customer backlash, Embodied is trying to create a way for the robots to live an open-source second life.

Embodied CEO Paolo Pirjanian shared a document via a LinkedIn blog post today saying that people who used to be part of Embodied’s technical team are developing a “potential” open-source way to keep Moxies running. The document reads:

This initiative involves developing a local server application (‘OpenMoxie’) that you can run on your own computer. Once available, this community-driven option will enable you (or technically inclined individuals) to maintain Moxie’s basic functionality, develop new features, and modify her capabilities to better suit your needs—without reliance on Embodied’s cloud servers.

The notice says that after releasing OpenMoxie, Embodied plans to release “all necessary code and documentation” for developers and users.
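None of that code exists publicly yet, but the pattern the document describes is a familiar one: a small server on the owner’s own machine answers the requests the robot would otherwise send to the vendor’s cloud. Below is a minimal Python sketch of that pattern; the route, payload, and port are hypothetical illustrations, not OpenMoxie’s actual API.

```python
# A minimal sketch of a local server standing in for cloud endpoints.
# Everything here (route names, payloads, port) is hypothetical; it is
# not OpenMoxie code, just the general shape of the approach.
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/api/conversation", methods=["POST"])  # hypothetical route
def conversation():
    utterance = (request.get_json(silent=True) or {}).get("text", "")
    # A real replacement would run speech and dialogue models locally here.
    return jsonify({"reply": f"You said: {utterance}"})

if __name__ == "__main__":
    # The OTA update's job, per the article, is to let the robot point at
    # a machine like this one instead of Embodied's servers.
    app.run(host="0.0.0.0", port=8080)
```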

Pirjanian said that an over-the-air (OTA) update is now available for download that will allow previously purchased Moxies to support OpenMoxie. The executive noted that Embodied is still “seeking long-term answers” but claimed that the update is a “vital first step” to “keep the door open” for the robot’s continued functionality.

At this time, OpenMoxie isn’t available and doesn’t have a release date. Embodied’s wording also seems careful to leave an opening for OpenMoxie to never actually release, although the company seems optimistic.

However, there’s also a risk of users failing to update their robots in time or doing so improperly. Embodied noted that it won’t be able to support users who have trouble with the update or with OpenMoxie after release. Updating the robot requires connecting it to Wi-Fi and leaving it on for at least an hour.

Startup will brick $800 emotional support robot for kids without refunds

In addition to the robot being bricked, Embodied noted that warranties, repair services, the corresponding parent app and guides, and support staff will no longer be accessible.

“Unable to offer refunds”

Embodied said it is “unable” to offer most Moxie owners refunds due to its “financial situation and impending dissolution.” The potential exception is for people who bought a Moxie within the past 30 days. For those customers, Embodied said that “if the company or its assets are sold, we will do our best to prioritize refunds for purchases,” but it emphasized that this is not a guarantee.

Embodied also acknowledged complications for those who acquired the expensive robot through a third-party lender. Embodied advised such customers to contact their lender, but it’s possible that some will end up paying interest on a toy that no longer works.

Embodied said it’s looking for another company to buy Moxie. Should that happen, the new company will receive Embodied customer data and determine how it may use it, according to Embodied’s Terms of Service. Otherwise, Embodied said it “securely” erases user data “in accordance with our privacy policy and applicable law,” which includes deleting personally identifiable information from Embodied systems.

Another smart gadget bites the dust

Currently, there’s some hope that Moxies can be revived. Things look grim for Moxie owners, but we’ve seen failed smart device companies, like Insteon, come back before. It’s also possible that someone will release an open-source version of the product, like the one made for Spotify’s Car Thing, which Spotify officially bricked today.

But the short-lived, expensive nature of Moxie is exactly why some groups, like right-to-repair activists, are pushing the FTC to more strongly regulate smart devices, particularly when it comes to disclosure and commitments around software support. With smart gadget makers trying to determine how to navigate challenging economic landscapes, the owners of various types of smart devices—from AeroGarden indoor gardening systems to Snoo bassinets—have had to deal with the consequences, including broken devices and paywalled features. Last month, the FTC noted that smart device manufacturers that don’t commit to software support may be breaking the law.

For Moxie owners, disappointment doesn’t just come from wasted money and e-waste creation but also from the pain of giving a child a tech “companion” to grow with, only to have it suddenly taken away.

Are Tesla’s robot prototypes AI marvels or remote-controlled toys?

Two years ago, Tesla’s Optimus prototype was an underwhelming mess of exposed wires that could only operate in a carefully controlled stage presentation. Last night, Tesla’s “We, Robot” event featured much more advanced Optimus prototypes that could walk around without tethers and interact directly with partygoers.

It was an impressive demonstration of the advancement of a technology Tesla’s Elon Musk said he thinks “will be the biggest product ever of any kind” (way to set reasonable expectations, there). But the live demos have also set off a firestorm of discussion over just how autonomous these Optimus robots currently are.

A robot in every garage

Before the human/robot party could get started, Musk introduced the humanoid Optimus robots as a logical extension of some of the technology that Tesla uses in its cars, from batteries and motors to software. “It’s just a robot with arms and legs instead of a robot with wheels,” Musk said breezily, easily underselling the huge differences between human-like movements and a car’s much more limited input options.

After confirming that the company “started off with someone in a robot suit”—a reference to a somewhat laughable 2021 Tesla presentation—Musk said that “rapid progress” has been made in the Optimus program in recent years. Extrapolating that progress to the “long term” future, Musk said, would lead to a point where you could purchase “your own personal R2-D2, C-3PO” for $20,000 to $30,000 (though he did allow that it could “take us a minute to get to the long term”).

And what will you get for that $30,000 when the “long term” finally comes to pass? Musk grandiosely promised that Optimus will be able to do “anything you want,” including babysitting kids, walking dogs, getting groceries, serving drinks, or “just be[ing] your friend.” Given those promised capabilities, it’s perhaps no wonder that Musk confidently predicted that “every one of the 8 billion people of Earth” will want at least one Optimus, leading to an “age of abundance” where the labor costs for most services “declines dramatically.”

Man vs. machine: DeepMind’s new robot serves up a table tennis triumph

John Henry was a steel-driving man —

Human-beating ping-pong AI learned to play in a simulated environment.

A blue illustration of a robotic arm playing table tennis. Credit: Benj Edwards / Google DeepMind

On Wednesday, researchers at Google DeepMind revealed the first AI-powered robotic table tennis player capable of competing at an amateur human level. The system combines an industrial robot arm called the ABB IRB 1100 and custom AI software from DeepMind. While an expert human player can still defeat the bot, the system demonstrates the potential for machines to master complex physical tasks that require split-second decision-making and adaptability.

“This is the first robot agent capable of playing a sport with humans at human level,” the researchers wrote in a preprint paper listed on arXiv. “It represents a milestone in robot learning and control.”

The unnamed robot agent (we suggest “AlphaPong”), developed by a team that includes David B. D’Ambrosio, Saminda Abeyruwan, and Laura Graesser, showed notable performance in a series of matches against human players of varying skill levels. In a study involving 29 participants, the AI-powered robot won 45 percent of its matches, demonstrating solid amateur-level play. Most notably, it achieved a 100 percent win rate against beginners and a 55 percent win rate against intermediate players, though it struggled against advanced opponents.

A Google DeepMind video of the AI agent rallying with a human table tennis player.

The physical setup consists of the aforementioned IRB 1100, a 6-degree-of-freedom robotic arm, mounted on two linear tracks, allowing it to move freely in a 2D plane. High-speed cameras track the ball’s position, while a motion-capture system monitors the human opponent’s paddle movements.

AI at the core

To create the brains that power the robotic arm, DeepMind researchers developed a two-level approach that allows the robot to execute specific table tennis techniques while adapting its strategy in real time to each opponent’s playing style. In other words, it’s adaptable enough to play any amateur human at table tennis without requiring specific per-player training.

The system’s architecture combines low-level skill controllers (neural network policies trained to execute specific table tennis techniques like forehand shots, backhand returns, or serve responses) with a high-level strategic decision-maker (a more complex AI system that analyzes the game state, adapts to the opponent’s style, and selects which low-level skill policy to activate for each incoming ball).
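To make that division of labor concrete, here is a toy sketch of the two-level structure; the class names, feature layout, and selection heuristic are invented for illustration, standing in for what are actually trained neural networks.

```python
# Toy sketch of a hierarchical controller: a high-level policy chooses
# which low-level skill policy handles each incoming ball. Illustrative
# only; the real system uses trained neural networks at both levels.
import numpy as np

class SkillPolicy:
    """Stand-in for a trained low-level controller (e.g., a forehand)."""
    def __init__(self, name):
        self.name = name

    def act(self, ball_state):
        # A real policy maps observations to arm and track commands.
        return f"{self.name} swing for ball at {ball_state.round(2)}"

class HighLevelController:
    """Picks a skill from the game state; the real one also adapts to the opponent's style."""
    def __init__(self, skills):
        self.skills = skills

    def select(self, ball_state):
        # Invented heuristic: pick a side based on the ball's x position.
        return self.skills["forehand" if ball_state[0] > 0 else "backhand"]

skills = {name: SkillPolicy(name) for name in ("forehand", "backhand")}
controller = HighLevelController(skills)
incoming = np.array([0.3, -1.2, 0.9])  # toy (x, y, z) ball position
print(controller.select(incoming).act(incoming))
```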

The researchers state that one of the key innovations of this project was the method used to train the AI models. The researchers chose a hybrid approach that used reinforcement learning in a simulated physics environment, while grounding the training data in real-world examples. This technique allowed the robot to learn from around 17,500 real-world ball trajectories—a fairly small dataset for a complex task.

A Google DeepMind video showing an illustration of how the AI agent analyzes human players.

The researchers used an iterative process to refine the robot’s skills. They started with a small dataset of human-vs-human gameplay, then let the AI loose against real opponents. Each match generated new data on ball trajectories and human strategies, which the team fed back into the simulation for further training. This process, repeated over seven cycles, allowed the robot to continuously adapt to increasingly skilled opponents and diverse play styles. By the final round, the AI had learned from over 14,000 rally balls and 3,000 serves, creating a body of table tennis knowledge that helped it bridge the gap between simulation and reality.
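Compressed to a skeleton, that cycle looks something like the loop below. Every function body is a placeholder; only the structure (train in simulation, play real matches, fold the new data back in, repeat) reflects the description above.

```python
# Skeleton of the iterative sim-to-real loop described in the article.
# The function bodies are placeholders, not DeepMind's training code.
def train_in_simulation(policy, dataset):
    return policy + 1  # stand-in for RL updates grounded in real trajectories

def play_real_matches(policy):
    return [("ball trajectory", policy)] * 100  # stand-in for logged rallies

dataset = ["seed human-vs-human gameplay"]
policy = 0
for cycle in range(7):  # seven cycles, per the paper
    policy = train_in_simulation(policy, dataset)
    dataset += play_real_matches(policy)  # each round grows the dataset
print(f"{policy} training rounds, {len(dataset)} samples collected")
```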

Interestingly, Nvidia has also been experimenting with similar simulated physics systems, such as Eureka, that allow an AI model to rapidly learn to control a robotic arm in simulated space instead of the real world (since the physics can be accelerated inside the simulation, and thousands of simultaneous trials can take place). This method is likely to dramatically reduce the time and resources needed to train robots for complex interactions in the future.

Humans enjoyed playing against it

Beyond its technical achievements, the study also explored the human experience of playing against an AI opponent. Surprisingly, even players who lost to the robot reported enjoying the experience. “Across all skill groups and win rates, players agreed that playing with the robot was ‘fun’ and ‘engaging,'” the researchers noted. This positive reception suggests potential applications for AI in sports training and entertainment.

However, the system is not without limitations. It struggles with extremely fast or high balls, has difficulty reading intense spin, and shows weaker performance in backhand plays. Google DeepMind shared an example video of the AI agent losing a point to an advanced player due to what appears to be difficulty reacting to a speedy hit, as you can see below.

A Google DeepMind video of the AI agent playing against an advanced human player.

The implications of this robotic ping-pong prodigy extend beyond the world of table tennis, according to the researchers. The techniques developed for this project could be applied to a wide range of robotic tasks that require quick reactions and adaptation to unpredictable human behavior. From manufacturing to health care (or just spanking someone with a paddle repeatedly), the potential applications seem large indeed.

The research team at Google DeepMind believes that, with further refinement, the system could compete with advanced table tennis players. DeepMind is no stranger to creating AI models that can defeat human game players, including AlphaZero and AlphaGo. With this latest robot agent, it’s looking like the research company is moving beyond board games and into physical sports. Chess and Jeopardy have already fallen to AI-powered victors—perhaps table tennis is next.

Amazon is bricking $2,350 Astro robots 10 months after release

RIP —

Amazon giving refunds for business bot, will focus on home version instead.

Amazon Astro for Business. Credit: Amazon

Amazon is bricking all Astro for Business robots on September 25. It first released the robot about eight months ago as a security device for small and medium-sized businesses (SMBs) for $2,350, but the device will soon be a pricey new addition to Amazon’s failed products list.

Amazon announced Astro in September 2021 as a home robot; that version of the device is still only available as a $1,600, invite-only preview.

In November, Amazon pivoted Astro to SMBs. But as first reported by GeekWire, Amazon on Wednesday sent emails to employees working on Astro for Business and customers telling them that the devices will stop working on September 25. At the time, Amazon’s email to customers said: “Your personal data will be deleted from the device. Any patrol or investigation videos recorded by Astro will still be available in your Ring app until your video storage time expires or your Ring Protect subscription ends.” According to The Verge, the email adds:

While we are proud of what we’ve built, we’ve made the decision to end support for Astro for Business to put our focus on making Astro the best robot for the home.

As of this week, Amazon will no longer charge users for subscriptions associated with Astro for Business, such as Astro Secure, which let the robot patrol businesses via customized routes, or Ring Protect Pro, which let Astro for Business owners store video history and sync the robot with Ring devices.

Amazon said it would refund customers $2,350 and give them a $300 Amazon credit. It also said it would refund unused, prepaid subscription fees.

Amazon has declined to share how many robots it sold, but it’s unfortunate to see such an expensive, complex piece of technology become obsolete after less than a year. Amazon hasn’t shared any ways to make further use of the devices, and spokesperson Courtney Ramirez told The Verge that Astro for Business can’t be used as a home robot instead. Amazon’s email to customers encourages owners to recycle Astro for Business through the Amazon Recycling Program, with Amazon covering associated costs.

Astro slow to take off

Amazon introduced Astro in late 2021, but as of 2024, it’s still not available to the general public. When Amazon released Astro for SMBs, it seemed like it might have found a new niche for the product. A May 2023 report from Business Insider claimed that Amazon opted to release Astro for Business over “an internal plan to release a lower-cost model” in 2022 for consumers.

Astro for Business could autonomously patrol spaces up to 5,000 square feet with an HD periscope and night vision; it could carry small devices; and, of course, it was controllable via Amazon Alexa. Since its release, we’ve learned about Alexa’s dire financial straits and seen David Limp, who headed the Astro project as Amazon SVP of devices and services, exit Amazon, while his division has suffered notable layoffs (an Amazon rep told GeekWire that the shuttering of Astro for Business won’t result in layoffs, as employees will start working on the home version of the robot instead).

Astro’s future

Per Amazon’s emails, the company is still keen to release the home version of Astro, which may surprise some, given that there has been no sign of an impending release since Amazon announced Astro years ago.

In May 2023, an Amazon representative told Insider that the firm was eyeing the potential of generative AI for Astro, and Insider cited internal documents that it said discussed adding “intelligence and a conversational spoken interface” to the robot. It’s likely that Amazon is hoping to one day release Astro to consumers with the generative AI version of Alexa (which is expected this year with a subscription fee).

But considering that it has taken Amazon more than two and a half years (and counting) and reportedly the work of over 800 people to make Astro generally available, plus the sudden demise of the business version, there are reasons to be hesitant about paying the high price and any subscription fees for a consumer Astro—if it ever comes out. Early adopters could find themselves in the same disappointing position as the SMBs that bought into Astro for Business.

Astro’s development comes during a tumultuous time for Amazon’s devices business as it seeks to make Alexa a competitive and, critically, lucrative AI assistant. In June, Reuters reported that Amazon senior management had been telling employees that 2024 is a “must-win” for Alexa. Some analysts expect more reduced investment in Alexa if the paid tier doesn’t take off.

Amazon’s Astro home robot faces an uphill climb toward any potential release or consumer demand. Meanwhile, the version of it that actually made it to market is rolling toward a graveyard filled with other dead Amazon products—like Just Walk Out, Amazon Glow, Fire Phone, Dash buttons, and the Amazon Smart Oven.

Robot dogs armed with AI-aimed rifles undergo US Marines Special Ops evaluation

The future of warfare —

Quadrupeds being reviewed have automatic targeting systems but require human oversight to fire.

A still image of a robotic quadruped armed with a remote weapons system, captured from a video provided by Onyx Industries.

The United States Marine Forces Special Operations Command (MARSOC) is currently evaluating a new generation of robotic “dogs” developed by Ghost Robotics, with the potential to be equipped with gun systems from defense tech company Onyx Industries, reports The War Zone.

While MARSOC is testing Ghost Robotics’ quadrupedal unmanned ground vehicles (called “Q-UGVs” for short) for various applications, including reconnaissance and surveillance, it’s the possibility of arming them with weapons for remote engagement that may draw the most attention. But it’s not unprecedented: The US Marine Corps has also tested robotic dogs armed with rocket launchers in the past.

MARSOC is currently in possession of two armed Q-UGVs undergoing testing, as confirmed by Onyx Industries staff. Their gun systems are based on Onyx’s SENTRY remote weapon system (RWS), which features an AI-enabled digital imaging system and can automatically detect and track people, drones, or vehicles, reporting potential targets to a remote human operator who could be located anywhere in the world. The system maintains human-in-the-loop control for fire decisions and cannot decide to fire autonomously.
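In software terms, “human-in-the-loop” means the fire action is simply unreachable without an operator’s explicit approval. The sketch below illustrates that gating pattern; the types, values, and function names are invented for illustration and bear no relation to Onyx’s actual implementation.

```python
# Illustrative human-in-the-loop gate: detection and tracking are automated,
# but the code path that would fire requires an explicit human decision.
from dataclasses import dataclass

@dataclass
class Track:
    kind: str          # e.g., "person", "drone", "vehicle"
    confidence: float

def detect_and_track(sensor_frame) -> list[Track]:
    # Stand-in for an AI-enabled imaging pipeline.
    return [Track("drone", 0.97)]

def request_human_decision(track: Track) -> bool:
    # In the real system, a remote operator reviews the track; here we
    # always refuse, so the default behavior is safe.
    print(f"Operator review requested: {track}")
    return False

for track in detect_and_track(sensor_frame=None):
    if request_human_decision(track):
        print("Engagement approved by operator.")  # only path to firing
    else:
        print("No engagement: human approval not given.")
```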

On LinkedIn, Onyx Industries shared a video of a similar system in action.

In a statement to The War Zone, MARSOC states that weaponized payloads are just one of many use cases being evaluated. MARSOC also clarifies that comments made by Onyx Industries to The War Zone regarding the capabilities and deployment of these armed robot dogs “should not be construed as a capability or a singular interest in one of many use cases during an evaluation.” The command further stresses that it is aware of and adheres to all Department of Defense policies concerning autonomous weapons.

The rise of robotic unmanned ground vehicles

An unauthorized video of a gun bolted onto a $3,000 Unitree robodog spread quickly on social media in July 2022 and prompted a response from several robotics companies. Credit: Alexander Atamanov

The evaluation of armed robotic dogs reflects a growing interest in small robotic unmanned ground vehicles for military use. While unmanned aerial vehicles (UAVs) have been remotely delivering lethal force under human command for at least two decades, the rise of inexpensive robotic quadrupeds—some available for as little as $1,600—has led to a new round of experimentation with strapping weapons to their backs.

In July 2022, a video of a rifle bolted to the back of a Unitree robodog went viral on social media, eventually leading Boston Dynamics and other robot vendors to issue a pledge that October not to weaponize their robots (with notable exceptions for military uses). In April, we covered a Unitree Go2 robot dog, with a flamethrower strapped to its back, on sale to the general public.

The prospect of deploying armed robotic dogs, even with human oversight, raises significant questions about the future of warfare and the potential risks and ethical implications of increasingly autonomous weapons systems. There’s also the potential for backlash if similar remote weapons systems eventually end up used domestically by police. Such a concern would not be unfounded: In November 2022, we covered a decision by the San Francisco Board of Supervisors to allow the San Francisco Police Department to use lethal robots against suspects.

There’s also concern that the systems will become more autonomous over time. As The War Zone’s Howard Altman and Oliver Parken describe in their article, “While further details on MARSOC’s use of the gun-armed robot dogs remain limited, the fielding of this type of capability is likely inevitable at this point. As AI-enabled drone autonomy becomes increasingly weaponized, just how long a human will stay in the loop, even for kinetic acts, is increasingly debatable, regardless of assurances from some in the military and industry.”

While the technology is still in the early stages of testing and evaluation, Q-UGVs do have the potential to provide reconnaissance and security capabilities that reduce risks to human personnel in hazardous environments. But as armed robotic systems continue to evolve, it will be crucial to address ethical concerns and ensure that their use aligns with established policies and international law.

You can now buy a flame-throwing robot dog for under $10,000

burninating the countryside —

Thermonator, the first “flamethrower-wielding robot dog,” is completely legal in 48 US states.

The Thermonator robot flamethrower dog.

If you’ve been wondering when you’ll be able to order the flame-throwing robot that Ohio-based Throwflame first announced last summer, that day has finally arrived. The Thermonator, which Throwflame bills as “the first-ever flamethrower-wielding robot dog,” is now available for purchase. The price? $9,420.

Thermonator is a quadruped robot with an ARC flamethrower mounted to its back, fueled by gasoline or napalm. It features a one-hour battery, a 30-foot flame-throwing range, and Wi-Fi and Bluetooth connectivity for remote control through a smartphone.

It also includes a LIDAR sensor for mapping and obstacle avoidance, laser sighting, and first-person view (FPV) navigation through an onboard camera. The product appears to integrate a version of the Unitree Go2 robot quadruped that retails alone for $1,600 in its base configuration.

The Robot Dog With A Flamethrower | Thermonator

The company lists possible applications of the new robot as “wildfire control and prevention,” “agricultural management,” “ecological conservation,” “snow and ice removal,” and “entertainment and SFX.” But most of all, it sets things on fire in a variety of real-world scenarios.

  • Remote controlling the Thermonator robot flamethrower dog.

  • The Thermonator robot flamethrower dog.

Back in 2018, Elon Musk made the news for offering an official Boring Company flamethrower that reportedly sold 10,000 units in 48 hours. It sparked some controversy because flamethrowers can also double as weapons or potentially start wildfires.

In the US, flamethrowers are legally unregulated in 48 states and are not considered firearms by federal agencies. Restrictions exist in Maryland, where flamethrowers require a Federal Firearms License to own, and California, where the range of flamethrowers cannot exceed 10 feet.

Even so, to state the obvious, flamethrowers can easily burn both things and people, starting fires and wreaking havoc if not used safely. Accordingly, the Thermonator might be one Christmas present you should skip for little Johnny this year.

The best robot to search for life could look like a snake

Trying out the robot on a glacier.

Icy ocean worlds like Europa or Enceladus are some of the most promising locations for finding extra-terrestrial life in the Solar System because they host liquid water. But to determine if there is something lurking in their alien oceans, we need to get past ice cover that can be dozens of kilometers thick. Any robots we send through the ice would have to do most of the job on their own because communication with these moons takes as much as 155 minutes.

Researchers working on NASA Jet Propulsion Laboratory’s technology development project called Exobiology Extant Life Surveyor (EELS) might have a solution to both those problems. It involves using an AI-guided space snake robot. And they actually built one.

Geysers on Enceladus

The most popular idea to get through the ice sheet on Enceladus or Europa so far has been thermal drilling, a technique used for researching glaciers on Earth. It involves a hot drill that simply melts its way through the ice. “Lots of people work on different thermal drilling approaches, but they all have a challenge of sediment accumulation, which impacts the amount of energy needed to make significant progress through the ice sheet,” says Matthew Glinder, the hardware lead of the EELS project.

So, instead of drilling new holes in ice, the EELS team focuses on using ones that are already there. The Cassini mission discovered geyser-like jets shooting water into space from vents in the ice cover near Enceladus’ south pole. “The concept was you’d have a lander to land near a vent and the robot would move on the surface and down into the vent, search the vent, and through the vent go further down into the ocean,” says Matthew Robinson, the EELS project manager.

The problem was that the best Cassini images of the area where that lander would need to touch down have a resolution of roughly 6 meters per pixel, meaning major obstacles to landing could be undetected. To make things worse, those close-up images were monocular, which meant we could not properly figure out the topography. “Look at Mars. First we sent an orbiter. Then we sent a lander. Then we sent a small robot. And then we sent a big robot. This paradigm of exploration allowed us to get very detailed information about the terrain,” says Rohan Thakker, the EELS autonomy lead. “But it takes between seven to 11 years to get to Enceladus. If we followed the same paradigm, it would take a century,” he adds.

All-terrain snakes

To deal with unknown terrain, the EELS team built a robot that could go through almost anything—a versatile, bio-inspired, snake-like design about 4.4 meters long and 35 centimeters in diameter. It weighs about 100 kilograms (on Earth, at least). It’s made of 10 mostly identical segments. “Each of those segments share a combination of shape actuation and screw actuation that rotates the screws fitted on the exterior of the segments to propel the robot through its environment,” explains Glinder. By using those two types of actuators, the robot can move using what the team calls “skin propulsion,” which relies on the rotation of screws, or using one of various shape-based movements that rely on shape actuators. “Sidewinding is one of those gaits where you are just pressing the robot against the environment,” Glinder says.
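One rough way to picture the control choice is a gait selector that decides between screw-driven “skin propulsion” and a shape-based gait such as sidewinding depending on the terrain. The sketch below is purely illustrative; the selection rule is invented and is not from JPL’s software.

```python
# Invented gait selector illustrating the two propulsion modes described
# above: screw-driven skin propulsion and shape-based gaits.
from enum import Enum

class Gait(Enum):
    SCREW = "skin propulsion (rotate exterior screws)"
    SIDEWIND = "sidewinding (press body shapes against the environment)"

def choose_gait(surface: str) -> Gait:
    # Toy rule: screws bite into ice; shape gaits suit loose, rough ground.
    return Gait.SCREW if surface == "ice" else Gait.SIDEWIND

for surface in ("ice", "regolith"):
    print(surface, "->", choose_gait(surface).value)
```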

The basic design also works on surfaces other than ice.

The standard sensor suite is fitted on the head and includes a set of stereo cameras providing a 360-degree viewing angle. There are also inertial measurement units (IMUs) that use gyroscopes to estimate the robot’s position, and lidar sensors. But it also has a sense of touch. “We are going to have torque force sensors in each segment. This way we will have direct torque plus direct force sensing at each joint,” explains Robinson. All this is supposed to let the EELS robot safely climb up and down Enceladus’ vents, hold in place in case of eruptions by pressing itself against the walls, and even navigate by touch alone if cameras and lidar don’t work.

But perhaps the most challenging part of building the EELS robot was its brain.

Nvidia announces “moonshot” to create embodied human-level AI in robot form

Here come the robots —

As companies race to pair AI with general-purpose humanoid robots, Nvidia’s GR00T emerges.

An illustration of a humanoid robot created by Nvidia. Credit: Nvidia

In sci-fi films, the rise of humanlike artificial intelligence often comes hand in hand with a physical platform, such as an android or robot. While the most advanced AI language models so far seem mostly like disembodied voices echoing from an anonymous data center, they might not remain that way for long. Some companies like Google, Figure, Microsoft, Tesla, Boston Dynamics, and others are working toward giving AI models a body. This is called “embodiment,” and AI chipmaker Nvidia wants to accelerate the process.

“Building foundation models for general humanoid robots is one of the most exciting problems to solve in AI today,” said Nvidia CEO Jensen Huang in a statement. Huang spent a portion of Nvidia’s annual GTC conference keynote on Monday going over Nvidia’s robotics efforts. “The next generation of robotics will likely be humanoid robotics,” Huang said. “We now have the necessary technology to imagine generalized human robotics.”

To that end, Nvidia announced Project GR00T, a general-purpose foundation model for humanoid robots. As a type of AI model itself, Nvidia hopes GR00T (which stands for “Generalist Robot 00 Technology” but sounds a lot like a famous Marvel character) will serve as an AI mind for robots, enabling them to learn skills and solve various tasks on the fly. In a tweet, Nvidia researcher Linxi “Jim” Fan called the project “our moonshot to solve embodied AGI in the physical world.”

AGI, or artificial general intelligence, is a poorly defined term that usually refers to hypothetical human-level AI (or beyond) that can learn any task a human could without specialized training. Given a capable enough humanoid body driven by AGI, one could imagine fully autonomous robotic assistants or workers. Of course, some experts think that true AGI is a long way off, so it’s possible that Nvidia’s goal is more aspirational than realistic. But that’s also what makes Nvidia’s plan a moonshot.

NVIDIA Robotics: A Journey From AVs to Humanoids.

“The GR00T model will enable a robot to understand multimodal instructions, such as language, video, and demonstration, and perform a variety of useful tasks,” wrote Fan on X. “We are collaborating with many leading humanoid companies around the world, so that GR00T may transfer across embodiments and help the ecosystem thrive.” We reached out to Nvidia researchers, including Fan, for comment but did not hear back by press time.

Nvidia is designing GR00T to understand natural language and emulate human movements, potentially allowing robots to learn coordination, dexterity, and other skills necessary for navigating and interacting with the real world like a person. And as it turns out, Nvidia says that making robots shaped like humans might be the key to creating functional robot assistants.

The humanoid key

Robotics startup Figure, an Nvidia partner, recently showed off its humanoid “Figure 01” robot. Credit: Figure

So far, we’ve seen plenty of robotics platforms that aren’t human-shaped, including robot vacuum cleaners, autonomous weed pullers, industrial units used in automobile manufacturing, and even research arms that can fold laundry. So why focus on imitating the human form? “In a way, human robotics is likely easier,” said Huang in his GTC keynote. “And the reason for that is because we have a lot more imitation training data that we can provide robots, because we are constructed in a very similar way.”

That means that researchers can feed samples of training data captured from human movement into AI models that control robot movement, teaching them how to better move and balance themselves. Also, humanoid robots are particularly convenient because they can fit anywhere a person can, and we’ve designed a world of physical objects and interfaces (such as tools, furniture, stairs, and appliances) to be used or manipulated by the human form.
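Reduced to a toy example, the imitation idea Huang describes is: collect (observation, action) pairs derived from recorded human movement and fit a policy to reproduce the actions. The sketch below uses one linear least-squares step as the simplest possible stand-in for behavior cloning; the data, dimensions, and model are all invented, not Nvidia’s pipeline.

```python
# Toy behavior cloning on synthetic "human motion" data: fit a policy that
# maps pose features to joint commands. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
obs = rng.normal(size=(1000, 16))        # retargeted human pose features
acts = obs @ rng.normal(size=(16, 8))    # pretend expert joint commands

# One least-squares fit as the simplest stand-in for training a policy
# on human demonstrations.
W, *_ = np.linalg.lstsq(obs, acts, rcond=None)
print("mean imitation error:", np.abs(obs @ W - acts).mean())
```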

Along with GR00T, Nvidia also debuted a new computer platform called Jetson Thor, based on NVIDIA’s Thor system-on-a-chip (SoC), as part of the new Blackwell GPU architecture, which it hopes will power this new generation of humanoid robots. The SoC reportedly includes a transformer engine capable of 800 teraflops of 8-bit floating point AI computation for running models like GR00T.

Huge funding round makes “Figure” Big Tech’s favorite humanoid robot company

They’ve got an aluminum CNC machine, and they aren’t afraid to use it —

Investors Microsoft, OpenAI, Nvidia, Jeff Bezos, and Intel value Figure at $2.6B.

The Figure 01 and a few spare parts. Obviously they are big fans of aluminum. Credit: Figure

Humanoid robotics company Figure AI announced it raised $675 million in a funding round from an all-star cast of Big Tech investors. The company, which aims to commercialize a humanoid robot, now has a $2.6 billion valuation. Participants in the latest funding round include Microsoft, the OpenAI Startup Fund, Nvidia, Jeff Bezos’ Bezos Expeditions, Parkway Venture Capital, Intel Capital, Align Ventures, and ARK Invest. With all these big-name investors, Figure is officially Big Tech’s favorite humanoid robotics company. The manufacturing industry is taking notice, too. In January, Figure even announced a commercial agreement with BMW to have robots work on its production line.

“In conjunction with this investment,” the press release reads, “Figure and OpenAI have entered into a collaboration agreement to develop next generation AI models for humanoid robots, combining OpenAI’s research with Figure’s deep understanding of robotics hardware and software. The collaboration aims to help accelerate Figure’s commercial timeline by enhancing the capabilities of humanoid robots to process and reason from language.”

With all this hype and funding, the robot must be incredible, right? Well, the company is new and only unveiled its first humanoid “prototype,” the “Figure 01,” in October. At that time, the company said it represented about 12 months of work. With veterans from “Boston Dynamics, Tesla, Google DeepMind, and Archer Aviation,” the company has a strong starting point.

  • Ok, it’s time to pick up a box, so get out your oversized hands and grab hold.

    Figure

  • Those extra-big hands seem to be the focus of the robot. They are just incredibly complex and look to be aiming at a 1:1 build of a human hand.

    Figure

  • Just look at everything inside those fingers. It looks like there are tendons of some kind.

    Figure

  • Not impressed with this “pooped your pants” walk cycle, which doesn’t really use the knees or ankles.

    Figure

  • A lot of the hardware appears to be waiting for software to use it, like the screen that serves as the robot’s face. It only seems to run a screen saver.

    Figure

The actual design of the robot appears to be solid aluminum and electrically actuated, aiming for an exact 1:1 match for a human. The website says the goal is a 5-foot 6-inch, 130-lb humanoid that can lift 44 pounds. That’s a very small form-over-function package to try and fit all these robot parts into. For alternative humanoid designs, you’ve got Boston Dynamics’ Atlas, which is more of a hulking beast thanks to the function-over-form design. There’s also the more purpose-built “Digit” from Agility Robotics, which has backward-bending bird legs for warehouse work, allowing it to bend down in front of a shelf without having to worry about the knees colliding with anything.

The best insight into the company’s progress is the official YouTube channel, which shows the Figure 01 robot doing a few tasks. The last video, from a few days ago, showed a robot doing a “fully autonomous” box-moving task at “16.7 percent” of normal human speed. For a bipedal robot, I have to say the walking is not impressive. Figure has a slow, timid shuffle that only lets it wobble forward at a snail’s pace. The walk cycle is almost entirely driven by the hips. The knees are bent the entire time and always out in front of the robot; the ankles barely move. It seems only to be able to walk in a straight line, and turning is a slow stop-and-spin-in-place motion that has the feet pedaling in place the entire time. The feet seem to move in a constant up-and-down motion even when the robot isn’t moving forward, almost as if foot planning just runs on a set timer for balance. It can walk, but it walks about as slowly and awkwardly as a robot can. A lot of the hardware seems built for software that isn’t ready yet.

Figure seems more focused on the hands than anything. The 01 has giant oversized hands that are a close match for a human’s, with five fingers, all with three joints each. In January, Figure posted a video of the robot working a Keurig coffee maker. That means flipping up the lid with a fingertip, delicately picking up an easily crushable plastic cup with two fingers, dropping it into the coffee maker, casually pushing the lid down with about three different fingers, and pressing the “go” button with a single finger. It’s impressive to not destroy the coffee maker or the K-cup, but that Keurig is still living a rough life—a few of the robot interactions incidentally lift one side or the other of the coffee maker off the table thanks to way too much force.

  • For some very delicate hand work, here’s the Figure 01 making coffee. They went and sourced a silver Keurig machine so this image only contains two colors, black and silver.

    Figure

  • Time to press the “go” button. Also, is that a wrist-mounted lidar puck for vision? Occasionally, flashes of light shoot out of it in the video.

    Figure

  • These hand close-ups are just incredible. I really do think they are tendon-actuated. You can also see all sorts of pads on the inside of the hand.

    Figure

  • I love the ridiculous T-pose it assumes while it waits for coffee.

    Figure

The video says the coffee task was performed via an “end-to-end neural network” using 10 hours of training time. Unlike walking, the hands really feel like they have a human influence when it comes to their movement. When the robot picks up the K-cup via a pinch of its thumb and index finger or goes to push a button, it also closes the other three fingers into a fist. There isn’t a real reason to move the three fingers that aren’t doing anything, but that’s what a human would do, so presumably, it’s in the training data. Closing the lid is interesting because I don’t think you could credit a single finger with the task—it’s just kind of a casual push using whatever fingers connect with the lid. The last clip of the video even shows the Figure 01 correcting a mistake—the K-cup doesn’t sit in the coffee maker correctly, and the robot recognizes this and can poke it around until it falls into place.
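That last clip is a small closed loop: act, check whether the cup is seated, and retry if it isn’t. A toy version of the retry logic looks like the snippet below; the perception check and corrective motion are invented stand-ins for whatever the trained policy actually does.

```python
# Toy closed-loop correction: keep nudging until perception says the
# K-cup is seated (or we give up). Purely illustrative.
import random

random.seed(3)

def cup_is_seated() -> bool:
    return random.random() < 0.5  # stand-in for a perception check

def nudge_cup():
    print("poking the cup...")    # stand-in for a corrective motion

attempts = 0
while not cup_is_seated() and attempts < 10:
    nudge_cup()
    attempts += 1
print(f"stopped after {attempts} corrective nudge(s)")
```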

A lot of assembly line jobs are done at a station or sitting down, so the focus on hand dexterity makes sense. Boston Dynamics’ Atlas is way more impressive as a walking robot, but that’s also a multi-million dollar research bot that will never see the market. Figure’s goal, according to the press release, is to “bring humanoid robots into commercial operations as soon as possible.” The company openly posts a “master plan” on its website, which reads, “1) Build a feature-complete electromechanical humanoid. 2) Perform human-like manipulation. 3) Integrate humanoids into the labor force.” The robots are coming for our jobs.

A “robot” should be chemical, not steel, argues man who coined the word

Dispatch from 1935 —

Čapek: “The world needed mechanical robots, for it believes in machines more than it believes in life.”

In 1921, Czech playwright Karel Čapek and his brother Josef invented the word “robot” in a sci-fi play called R.U.R. (short for Rossum’s Universal Robots). As Evan Ackerman points out in IEEE Spectrum, Čapek wasn’t happy about how the term’s meaning evolved to denote mechanical entities, straying from his original concept of artificial human-like beings based on chemistry.

In a newly translated column called “The Author of the Robots Defends Himself,” published in Lidové Noviny on June 9, 1935, Čapek expresses his frustration about how his original vision for robots was being subverted. His arguments still apply to both modern robotics and AI. In this column, he referred to himself in the third person:

For his robots were not mechanisms. They were not made of sheet metal and cogwheels. They were not a celebration of mechanical engineering. If the author was thinking of any of the marvels of the human spirit during their creation, it was not of technology, but of science. With outright horror, he refuses any responsibility for the thought that machines could take the place of people, or that anything like life, love, or rebellion could ever awaken in their cogwheels. He would regard this somber vision as an unforgivable overvaluation of mechanics or as a severe insult to life.

This recently resurfaced article comes courtesy of a new English translation of Čapek’s play called R.U.R. and the Vision of Artificial Life accompanied by 20 essays on robotics, philosophy, politics, and AI. The editor, Jitka Čejková, a professor at the Chemical Robotics Laboratory in Prague, aligns her research with Čapek’s original vision. She explores “chemical robots”—microparticles resembling living cells—which she calls “liquid robots.”

“An assistant of inventor Captain Richards works on the robot the Captain has invented, which speaks, answers questions, shakes hands, tells the time and sits down when it’s told to.” – September 1928

In Čapek’s 1935 column, he clarifies that his robots were not intended to be mechanical marvels, but organic products of modern chemistry, akin to living matter. Čapek emphasizes that he did not want to glorify mechanical systems but to explore the potential of science, particularly chemistry. He refutes the idea that machines could replace humans or develop emotions and consciousness.

The author of the robots would regard it as an act of scientific bad taste if he had brought something to life with brass cogwheels or created life in the test tube; the way he imagined it, he created only a new foundation for life, which began to behave like living matter, and which could therefore have become a vehicle of life—but a life which remains an unimaginable and incomprehensible mystery. This life will reach its fulfillment only when (with the aid of considerable inaccuracy and mysticism) the robots acquire souls. From which it is evident that the author did not invent his robots with the technological hubris of a mechanical engineer, but with the metaphysical humility of a spiritualist.

The reason for the transition from chemical to mechanical in the public perception of robots isn’t entirely clear (though Čapek does mention a Russian film which went the mechanical route and was likely influential). The early 20th century was a period of rapid industrialization and technological advancement that saw the emergence of complex machinery and electronic automation, which probably influenced the public and scientific community’s perception of autonomous beings, leading them to associate the idea of robots with mechanical and electronic devices rather than chemical creations.

The 1935 piece is full of interesting quotes (you can read the whole thing in IEEE Spectrum or here), and we’ve grabbed a few highlights below that you can conveniently share with your robot-loving friends to blow their minds:

  • “He pronounces that his robots were created quite differently—that is, by a chemical path”
  • “He has learned, without any great pleasure, that genuine steel robots have started to appear”
  • “Well then, the author cannot be blamed for what might be called the worldwide humbug over the robots.”
  • “The world needed mechanical robots, for it believes in machines more than it believes in life; it is fascinated more by the marvels of technology than by the miracle of life.”

So it seems, over 100 years later, that we’ve gotten it wrong all along. Čapek’s vision, rooted in chemical synthesis and the philosophical mysteries of life, offers a different narrative from the predominant mechanical and electronic interpretation of robots we know today. But judging from what Čapek wrote, it sounds like he would be firmly against AI takeover scenarios. In fact, Čapek, who died in 1938, probably would have thought them impossible.

This robotic digger could construct the buildings of the future

Construction is a tough job, and in Europe there is a chronic shortage of workers to build the homes, schools, and roads we use every single day. So why not get a robot to do the hard work so we don’t have to?

That’s exactly what researchers at ETH Zurich’s Robotic Systems Lab in Switzerland are working on. They’ve trained an autonomous excavator to construct stone walls using boulders weighing several tonnes — without any human interference. In the machine’s first assignment, it built a six-metre-high, 65-metre-long load-bearing wall. If scaled, the solution could pave the way for faster, more sustainable construction.

Using LiDAR sensors, the excavator autonomously draws a 3D map of the construction site and identifies existing building blocks and stones for the wall. Specifically designed tools and machine vision (the ability of a computer to see) enable the excavator to scan and grab large stones in its immediate environment. It can also register their approximate weight as well as their centre of gravity. 

An algorithm then determines the best position for each stone, and the excavator places each piece in the desired location to within a centimetre of accuracy. The autonomous machine can place 20 to 30 stones in a single consignment – about as many as one delivery could supply.
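The planning step amounts to scoring each scanned stone against the next slot in the wall and picking the best fit. The sketch below shows that shape with an invented scoring heuristic; the real planner reasons over full 3D geometry and stability, which this toy ignores.

```python
# Invented stone-selection heuristic illustrating the placement step:
# score scanned stones against the current gap and pick the best.
from dataclasses import dataclass

@dataclass
class Stone:
    mass_kg: float
    width_m: float
    cg_height_m: float  # estimated centre-of-gravity height

def placement_score(stone: Stone, gap_width_m: float) -> float:
    fit = -abs(stone.width_m - gap_width_m) * 1000  # reward matching the gap
    stability = stone.mass_kg / (1.0 + stone.cg_height_m)  # prefer low CG
    return fit + stability

stones = [Stone(2400, 0.9, 0.45), Stone(900, 0.6, 0.30), Stone(3100, 1.4, 0.60)]
best = max(stones, key=lambda s: placement_score(s, gap_width_m=0.9))
print("place next:", best)
```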

The researchers designed digital blueprints for the robotic digger to follow. Credit: ETH Zurich

The digger, named HEAP, is a modified Menzi Muck M545 developed by the researchers to test the potential of autonomous machines for construction. Because HEAP is so precise, it opens up the possibility of using locally sourced stones and rubble for the construction of walls, instead of new material like bricks. 

The wall was constructed at an industrial park next to Zurich Airport, managed by Eberhard construction company. The firm is using the site, and various ETH Zurich technologies, to demonstrate ways to make construction more circular — by minimising waste to the greatest extent possible. 

The use of autonomous diggers has been on the cards for a while now, not just in Switzerland. In 2017, US startup Built Robotics was founded to bring robot diggers into the mainstream. At the time, CEO Noah Ready-Campbell predicted that fully autonomous equipment would become commonplace on construction sites before fully autonomous cars hit public roads. But the idea has yet to advance beyond the prototype stage.

Automation is easiest to implement on repetitive tasks with predictable outcomes — like in manufacturing assembly lines. But a construction site is a complex, messy environment where safety is paramount. Similar to autonomous cars, the world is simply not yet ready for the widespread deployment of autonomous diggers, cranes, and trucks.

However, there are other applications of robotics technologies in construction that are being implemented right now. For instance, UK startup hyperTunnel combines swarm robotics and AI to excavate tunnels up to 10 times faster than conventional methods. The proposed process involves injecting the lining of a tunnel into the ground and then removing the waste using a swarm of small autonomous robotic vehicles.

Another area of rapid growth is the construction of homes using giant 3D printers, like those developed by Danish company COBOD. In the UK, a 36-home housing development is currently being built this way. Its proponents claim the huge robots will build the homes faster, safer, and more sustainably than traditional methods.
