Robots

Huge funding round makes “Figure” Big Tech’s favorite humanoid robot company

They’ve got an aluminum CNC machine, and they aren’t afraid to use it —

Investors Microsoft, OpenAI, Nvidia, Jeff Bezos, and Intel value Figure at $2.6B.

The Figure 01 and a few spare parts. Obviously they are big fans of aluminum.

Figure

Humanoid robotics company Figure AI announced it raised $675 million in a funding round from an all-star cast of Big Tech investors. The company, which aims to commercialize a humanoid robot, now has a $2.6 billion valuation. Participants in the latest funding round include Microsoft, the OpenAI Startup Fund, Nvidia, Jeff Bezos’ Bezos Expeditions, Parkway Venture Capital, Intel Capital, Align Ventures, and ARK Invest. With all these big-name investors, Figure is officially Big Tech’s favorite humanoid robotics company. The manufacturing industry is taking notice, too. In January, Figure even announced a commercial agreement with BMW to have robots work on its production line.

“In conjunction with this investment,” the press release reads, “Figure and OpenAI have entered into a collaboration agreement to develop next generation AI models for humanoid robots, combining OpenAI’s research with Figure’s deep understanding of robotics hardware and software. The collaboration aims to help accelerate Figure’s commercial timeline by enhancing the capabilities of humanoid robots to process and reason from language.”

With all this hype and funding, the robot must be incredible, right? Well, the company is new and only unveiled its first humanoid “prototype,” the “Figure 01,” in October. At that time, the company said it represented about 12 months of work. With veterans from “Boston Dynamics, Tesla, Google DeepMind, and Archer Aviation,” the company has a strong starting point.

  • Ok, it’s time to pick up a box, so get out your oversized hands and grab hold.

    Figure

  • Those extra-big hands seem to be the focus of the robot. They are just incredibly complex and look to be aiming at a 1:1 build of a human hand.

    Figure

  • Just look at everything inside those fingers. It looks like there are tendons of some kind.

    Figure

  • Not impressed with this “pooped your pants” walk cycle, which doesn’t really use the knees or ankles.

    Figure

  • A lot of the hardware appears to be waiting for software to use it, like the screen that serves as the robot’s face. It only seems to run a screen saver.

    Figure

The actual design of the robot appears to be solid aluminum and electrically actuated, aiming for a 1:1 match with a human. The website says the goal is a 5-foot 6-inch, 130-lb humanoid that can lift 44 pounds. That’s a very small form-over-function package to try to fit all these robot parts into. For alternative humanoid designs, you’ve got Boston Dynamics’ Atlas, which is more of a hulking beast thanks to its function-over-form design. There’s also the more purpose-built “Digit” from Agility Robotics, which has backward-bending bird legs for warehouse work, allowing it to bend down in front of a shelf without having to worry about the knees colliding with anything.

The best insight into the company’s progress is the official YouTube channel, which shows the Figure 01 robot doing a few tasks. The most recent video, from a few days ago, showed the robot doing a “fully autonomous” box-moving task at “16.7 percent” of normal human speed. For a bipedal robot, I have to say the walking is not impressive. The Figure 01 has a slow, timid shuffle that only lets it wobble forward at a snail’s pace. The walk cycle is almost entirely driven by the hips; the knees stay bent and out in front of the robot the entire time, and the ankles barely move. It seems able to walk only in a straight line, and turning is a slow stop-and-spin-in-place motion that has the feet pedaling in place the whole time. The feet seem to move in a constant up-and-down motion even when the robot isn’t moving forward, almost as if foot planning just runs on a set timer for balance. It can walk, but it walks about as slowly and awkwardly as a robot can. A lot of the hardware seems built for software that isn’t ready yet.
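
To make that “set timer” observation concrete, here is a toy Python sketch of a fixed-period gait clock. This is purely speculation based on the footage, not Figure’s actual controller, and the period and lift height are made-up numbers.

```python
import math

# Toy sketch of a fixed-period gait clock (speculation from the video, not Figure's
# actual controller): swing-foot lift follows the clock no matter what forward speed
# is commanded, so the feet keep "pedaling" even while standing still or turning.

STEP_PERIOD_S = 1.2      # assumed duration of one full left+right step cycle
MAX_FOOT_LIFT_M = 0.05   # assumed peak swing-foot clearance


def foot_heights(t: float) -> tuple[float, float]:
    """Return (left, right) foot clearance in metres at time t."""
    phase = (t % STEP_PERIOD_S) / STEP_PERIOD_S                     # 0..1 through the cycle
    lift = MAX_FOOT_LIFT_M * math.sin(math.pi * (phase % 0.5) * 2)  # half-cycle arc
    # Left foot swings during the first half of the cycle, right foot during the second.
    return (lift, 0.0) if phase < 0.5 else (0.0, lift)


if __name__ == "__main__":
    for i in range(7):
        t = i * 0.2
        left, right = foot_heights(t)
        # Forward velocity never appears anywhere; the clock runs regardless.
        print(f"t={t:.1f}s  left={left:.3f} m  right={right:.3f} m")
```

Nothing in that loop depends on forward velocity, which is exactly why a controller like this would keep the feet pedaling in place even while the robot stands still or spins to turn.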

Figure seems more focused on the hands than anything else. The 01 has giant, oversized hands that are a close match for a human’s, with five fingers, each with three joints. In January, Figure posted a video of the robot working a Keurig coffee maker. That means flipping up the lid with a fingertip, delicately picking up an easily crushable plastic cup with two fingers, dropping it into the coffee maker, casually pushing the lid down with about three different fingers, and pressing the “go” button with a single finger. Not destroying the coffee maker or the K-cup is impressive, but that Keurig is still living a rough life—a few of the robot’s interactions lift one side or the other of the coffee maker off the table thanks to way too much force.

  • For some very delicate hand work, here’s the Figure 01 making coffee. They went and sourced a silver Keurig machine so this image only contains two colors, black and silver.

    Figure

  • Time to press the “go” button. Also, is that a wrist-mounted lidar puck for vision? Occasionally, flashes of light shoot out of it in the video.

    Figure

  • These hand close-ups are just incredible. I really do think they are tendon-actuated. You can also see all sorts of pads on the inside of the hand.

    Figure

  • I love the ridiculous T-pose it assumes while it waits for coffee.

    Figure

The video says the coffee task was performed via an “end-to-end neural network” using 10 hours of training time. Unlike the walking, the hand movements really feel like they have a human influence. When the robot picks up the K-cup with a pinch of its thumb and index finger or goes to push a button, it also closes the other three fingers into a fist. There isn’t a real reason to move the fingers that aren’t doing anything, but that’s what a human would do, so presumably it’s in the training data. Closing the lid is interesting because I don’t think you could credit a single finger with the task—it’s just a casual push using whatever fingers connect with the lid. The last clip of the video even shows the Figure 01 correcting a mistake—the K-cup doesn’t sit in the coffee maker correctly, and the robot recognizes this and pokes it around until it falls into place.
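
Figure hasn’t published details of that network, but “end-to-end” training on logged demonstrations usually means some form of behavior cloning: a policy network maps camera images directly to hand and wrist commands and is trained to imitate a (typically teleoperated) demonstrator. The sketch below is a minimal, hypothetical PyTorch version of that general recipe; the architecture, joint count, and image size are all assumptions for illustration, not Figure’s design.

```python
import torch
import torch.nn as nn

# Hypothetical sketch of behavior cloning for a manipulation task: a small policy
# network maps a camera image to finger/wrist joint targets and is trained to
# imitate logged demonstrations. Generic illustration only; Figure has not
# disclosed its architecture, and every number here is an assumption.


class VisuomotorPolicy(nn.Module):
    def __init__(self, num_joints: int = 16):  # assumed hand/wrist joint count
        super().__init__()
        self.encoder = nn.Sequential(           # tiny CNN image encoder
            nn.Conv2d(3, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Sequential(              # map features to joint targets
            nn.Linear(64, 128), nn.ReLU(),
            nn.Linear(128, num_joints),
        )

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        return self.head(self.encoder(image))


def train_step(policy, optimizer, images, demo_actions):
    """One behavior-cloning step: regress the demonstrator's joint commands."""
    loss = nn.functional.mse_loss(policy(images), demo_actions)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


if __name__ == "__main__":
    policy = VisuomotorPolicy()
    opt = torch.optim.Adam(policy.parameters(), lr=1e-4)
    # Random tensors standing in for a batch of logged (image, action) pairs.
    images = torch.randn(8, 3, 96, 96)
    actions = torch.randn(8, 16)
    print("loss:", train_step(policy, opt, images, actions))
```

Ten hours of demonstrations would just be a large pile of such (image, action) pairs, and habits like curling the unused fingers are exactly the kind of thing a network trained this way would absorb from its human demonstrators.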

A lot of assembly line jobs are done at a station or sitting down, so the focus on hand dexterity makes sense. Boston Dynamics’ Atlas is way more impressive as a walking robot, but that’s also a multi-million dollar research bot that will never see the market. Figure’s goal, according to the press release, is to “bring humanoid robots into commercial operations as soon as possible.” The company openly posts a “master plan” on its website, which reads, “1) Build a feature-complete electromechanical humanoid. 2) Perform human-like manipulation. 3) Integrate humanoids into the labor force.” The robots are coming for our jobs.

A “robot” should be chemical, not steel, argues man who coined the word

Dispatch from 1935 —

Čapek: “The world needed mechanical robots, for it believes in machines more than it believes in life.”

In 1921, Czech playwright Karel Čapek and his brother Josef invented the word “robot” in a sci-fi play called R.U.R. (short for Rossum’s Universal Robots). As Evan Ackerman points out in IEEE Spectrum, Čapek wasn’t happy about how the term’s meaning evolved to denote mechanical entities, straying from his original concept of artificial human-like beings based on chemistry.

In a newly translated column called “The Author of the Robots Defends Himself,” published in Lidové Noviny on June 9, 1935, Čapek expresses his frustration about how his original vision for robots was being subverted. His arguments still apply to both modern robotics and AI. In the column, he refers to himself in the third person:

For his robots were not mechanisms. They were not made of sheet metal and cogwheels. They were not a celebration of mechanical engineering. If the author was thinking of any of the marvels of the human spirit during their creation, it was not of technology, but of science. With outright horror, he refuses any responsibility for the thought that machines could take the place of people, or that anything like life, love, or rebellion could ever awaken in their cogwheels. He would regard this somber vision as an unforgivable overvaluation of mechanics or as a severe insult to life.

This recently resurfaced article comes courtesy of a new English translation of Čapek’s play called R.U.R. and the Vision of Artificial Life accompanied by 20 essays on robotics, philosophy, politics, and AI. The editor, Jitka Čejková, a professor at the Chemical Robotics Laboratory in Prague, aligns her research with Čapek’s original vision. She explores “chemical robots”—microparticles resembling living cells—which she calls “liquid robots.”

“An assistant of inventor Captain Richards works on the robot the Captain has invented, which speaks, answers questions, shakes hands, tells the time and sits down when it’s told to.” – September 1928

In his 1935 column, Čapek clarifies that his robots were not intended to be mechanical marvels, but organic products of modern chemistry, akin to living matter. He emphasizes that he did not want to glorify mechanical systems but to explore the potential of science, particularly chemistry. He rejects the idea that machines could replace humans or develop emotions and consciousness.

The author of the robots would regard it as an act of scientific bad taste if he had brought something to life with brass cogwheels or created life in the test tube; the way he imagined it, he created only a new foundation for life, which began to behave like living matter, and which could therefore have become a vehicle of life—but a life which remains an unimaginable and incomprehensible mystery. This life will reach its fulfillment only when (with the aid of considerable inaccuracy and mysticism) the robots acquire souls. From which it is evident that the author did not invent his robots with the technological hubris of a mechanical engineer, but with the metaphysical humility of a spiritualist.

The reason for the shift from chemical to mechanical in the public perception of robots isn’t entirely clear (though Čapek does mention a Russian film that went the mechanical route and was likely influential). The early 20th century was a period of rapid industrialization and technological advancement that saw the emergence of complex machinery and electronic automation. That context probably shaped how the public and the scientific community pictured autonomous beings, leading them to associate robots with mechanical and electronic devices rather than chemical creations.

The 1935 piece is full of interesting quotes (you can read the whole thing in IEEE Spectrum or here), and we’ve grabbed a few highlights below that you can conveniently share with your robot-loving friends to blow their minds:

  • “He pronounces that his robots were created quite differently—that is, by a chemical path”
  • “He has learned, without any great pleasure, that genuine steel robots have started to appear”
  • “Well then, the author cannot be blamed for what might be called the worldwide humbug over the robots.”
  • “The world needed mechanical robots, for it believes in machines more than it believes in life; it is fascinated more by the marvels of technology than by the miracle of life.”

So it seems, over 100 years later, that we’ve gotten it wrong all along. Čapek’s vision, rooted in chemical synthesis and the philosophical mysteries of life, offers a different narrative from the predominant mechanical and electronic interpretation of robots we know today. But judging from what Čapek wrote, it sounds like he would be firmly against AI takeover scenarios. In fact, Čapek, who died in 1938, probably would have considered them impossible.

This robotic digger could construct the buildings of the future

Construction is a tough job, and in Europe there is a chronic shortage of workers to build the homes, schools, and roads we use every single day. So why not get a robot to do the hard work so we don’t have to?

That’s exactly what researchers at ETH Zurich’s Robotic Systems Lab in Switzerland are working on. They’ve trained an autonomous excavator to construct stone walls using boulders weighing several tonnes — without any human intervention. In the machine’s first assignment, it built a six-metre-high, 65-metre-long load-bearing wall. If scaled, the solution could pave the way for faster, more sustainable construction.

Using LiDAR sensors, the excavator autonomously draws a 3D map of the construction site and identifies existing building blocks and stones for the wall. Specifically designed tools and machine vision (the ability of a computer to see) enable the excavator to scan and grab large stones in its immediate environment. It can also register their approximate weight as well as their centre of gravity. 
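
The article doesn’t say how the weight and centre-of-gravity estimates are computed, but a common first-order approach with a LiDAR point cloud is to fit a convex hull, multiply its volume by an assumed rock density to get a mass, and use the hull’s centroid as the centre of gravity. The sketch below illustrates that idea with NumPy and SciPy; the density value and the centroid shortcut are assumptions for illustration, not ETH Zurich’s actual method.

```python
import numpy as np
from scipy.spatial import ConvexHull

# Rough sketch of estimating a scanned stone's mass and centre of gravity from a
# LiDAR point cloud (an illustration of the general idea, not ETH Zurich's pipeline).

ASSUMED_ROCK_DENSITY_KG_M3 = 2700.0  # assumed typical density for granite-like rock


def estimate_stone(points: np.ndarray) -> tuple[float, np.ndarray]:
    """Return (approximate mass in kg, centre of gravity) for an (N, 3) point cloud."""
    hull = ConvexHull(points)
    volume_m3 = hull.volume                      # convex hull over-estimates concave stones
    mass_kg = volume_m3 * ASSUMED_ROCK_DENSITY_KG_M3
    # Approximate the centre of gravity with the mean of the hull's vertices
    # (adequate for roughly convex boulders, biased for very irregular ones).
    centroid = points[hull.vertices].mean(axis=0)
    return mass_kg, centroid


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    cloud = rng.uniform(-0.5, 0.5, size=(2000, 3))  # fake scan of a ~1 m boulder
    mass, cog = estimate_stone(cloud)
    print(f"~{mass:.0f} kg, centre of gravity at {np.round(cog, 2)}")
```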

An algorithm then determines the best position for each stone, and the excavator places each piece in the desired location to within a centimetre. The autonomous machine can place 20 to 30 stones in a single consignment – about as many as one delivery could supply.
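
As a rough illustration of what “determines the best position for each stone” could mean at its simplest, here is a toy greedy matcher that picks, from the stones in a consignment, the one whose footprint best fills the next gap in the current wall course. The real planner reasons about full 3D geometry, contact, and stability; this is only a hypothetical simplification.

```python
from dataclasses import dataclass

# Toy greedy stone-to-slot matching: a deliberately simplified stand-in for the
# placement algorithm described in the article, not the actual ETH Zurich planner.


@dataclass
class Stone:
    name: str
    length_m: float
    height_m: float


@dataclass
class Slot:
    length_m: float   # gap remaining in the current wall course
    height_m: float   # target course height


def pick_stone(slot: Slot, inventory: list[Stone]) -> Stone:
    """Pick the stone whose footprint best fills the slot without overshooting it."""
    def score(stone: Stone) -> float:
        if stone.length_m > slot.length_m or stone.height_m > slot.height_m:
            return float("inf")   # doesn't fit at all
        # Prefer the stone that leaves the smallest unfilled area.
        return slot.length_m * slot.height_m - stone.length_m * stone.height_m
    return min(inventory, key=score)


if __name__ == "__main__":
    delivered = [Stone("A", 0.8, 0.5), Stone("B", 1.4, 0.6), Stone("C", 1.1, 0.55)]
    gap = Slot(length_m=1.2, height_m=0.6)
    print("place stone", pick_stone(gap, delivered).name)  # -> C, the closest fit
```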

The researchers designed digital blueprints for the robotic digger to follow. Credit: ETH Zurich

The digger, named HEAP, is a modified Menzi Muck M545 developed by the researchers to test the potential of autonomous machines for construction. Because HEAP is so precise, it opens up the possibility of using locally sourced stones and rubble for the construction of walls, instead of new material like bricks. 

The wall was constructed at an industrial park next to Zurich Airport, managed by Eberhard construction company. The firm is using the site, and various ETH Zurich technologies, to demonstrate ways to make construction more circular — by minimising waste to the greatest extent possible. 

The use of autonomous diggers has been on the cards for a while now, not just in Switzerland. In 2017, US startup Built Robotics was founded to bring robot diggers into the mainstream. At the time, CEO Noah Ready-Campbell predicted that fully autonomous equipment would become commonplace on construction sites before fully autonomous cars hit public roads. But the idea has yet to advance beyond the prototype stage.

Automation is easiest to implement for repetitive tasks with predictable outcomes — like manufacturing assembly lines. But a construction site is a complex, messy environment where safety is paramount. As with autonomous cars, the world is simply not yet ready for the widespread deployment of autonomous diggers, cranes, and trucks.

However, there are other applications of robotics technologies in construction that are being implemented right now. For instance, UK startup hyperTunnel combines swarm robotics and AI to excavate tunnels up to 10 times faster than conventional methods. The proposed process involves injecting the lining of a tunnel into the ground and then removing the waste using a swarm of small autonomous robotic vehicles.

Another area of rapid growth is the construction of homes using giant 3D printers, like those developed by Danish company COBOD. In the UK, a 36-home housing development is currently being built this way. Its proponents claim the huge robots will build the homes faster, safer, and more sustainably than traditional methods.
