Author name: Mike M.

Is humanity alone in the Universe? What scientists really think.

News stories about the likely existence of extraterrestrial life, and our chances of detecting it, tend to be positive. We are often told that we might discover it any time now. Finding life beyond Earth is “only a matter of time,” we were told in September 2023. “We are close” was a headline from September 2024.

It’s easy to see why. Headlines such as “We’re probably not close” or “Nobody knows” aren’t very clickable. But what does the relevant community of experts actually think when considered as a whole? Are optimistic predictions common or rare? Is there even a consensus? In our new paper, published in Nature Astronomy, we’ve found out.

Between February and June 2024, we carried out four surveys on the likely existence of basic, complex, and intelligent extraterrestrial life. We sent emails to astrobiologists (scientists who study extraterrestrial life), as well as to scientists in other areas, including biologists and physicists.

In total, 521 astrobiologists responded, and we received 534 non-astrobiologist responses. The results reveal that 86.6 percent of the surveyed astrobiologists responded either “agree” or “strongly agree” that it’s likely that extraterrestrial life (of at least a basic kind) exists somewhere in the universe.

Fewer than 2 percent disagreed, and 12 percent stayed neutral. So, based on this, we might say that there’s a solid consensus that extraterrestrial life, of some form, exists somewhere out there.

Scientists who weren’t astrobiologists essentially concurred, with an overall agreement score of 88.4 percent. In other words, one cannot say that astrobiologists are biased toward believing in extraterrestrial life, compared with other scientists.

When we turned to “complex” extraterrestrial life and “intelligent” aliens, agreement was 67.4 percent and 58.2 percent, respectively, across astrobiologists and other scientists. So, scientists tend to think that alien life exists, even in more advanced forms.

These results are made even more significant by the fact that disagreement for all categories was low. For example, only 10.2 percent of astrobiologists disagreed with the claim that intelligent aliens likely exist.


Zvi’s 2024 In Movies

Now that I am tracking all the movies I watch via Letterboxd, it seems worthwhile to go over the results at the end of the year, and look for lessons, patterns and highlights.

  1. The Rating Scale.

  2. The Numbers.

  3. Very Briefly on the Top Picks and Whether You Should See Them.

  4. Movies Have Decreasing Marginal Returns in Practice.

  5. Theaters are Awesome.

  6. I Hate Spoilers With the Fire of a Thousand Suns.

  7. Scott Sumner Picks Great American Movies Then Dislikes Them.

  8. I Knew Before the Cards Were Even Turned Over.

  9. Other Notes to Self to Remember.

  10. Strong Opinions, Strongly Held: I Didn’t Like It.

  11. Strong Opinions, Strongly Held: I Did Like It.

  12. Megalopolis.

  13. The Brutalist.

  14. The Death of Award Shows.

  15. On to 2025.

Letterboxd ratings run from 0.5 to 5. Here is how I interpret the rating scale.

You can find all my ratings and reviews on Letterboxd. I do revise from time to time. I encourage you to follow me there.

5: All-Time Great. I plan to happily rewatch this multiple times. If you are an adult and haven’t seen this, we need to fix that, potentially together, right away, no excuses.

4.5: Excellent. Would happily rewatch. Most people who watch movies frequently should see this movie without asking questions.

4: Great. Very glad I saw it. Would not mind a rewatch. If the concept here appeals to you, then you should definitely see it.

3.5: Very Good. Glad I saw it once. This added value to my life.

3: Good. It was fine, happy I saw it I guess, but missing it would also have been fine.

2.5: Okay. It was watchable, but actually watching it was a small mistake.

2: Bad. Disappointing. I immediately regret this decision. Kind of a waste.

1.5: Very Bad. If you caused this to exist, you should feel bad. But something’s here.

1: Atrocious. Total failure. Morbid curiosity is the only reason to finish this.

0.5: Crime Against Cinema. You didn’t even try to do the not-even-trying thing.

The key thresholds are: Happy I saw it equals 3+, and Rewatchable equals 4+.

Here is the overall slope of my ratings, across all films so far, which slants to the right because of rewatches, and is overall a standard bell curve after selection.

So here’s everything I watched in 2024 (plus the first week of 2025) that Letterboxd classified as released in 2024.

The rankings are in order, including within a tier.

The correlation of my ratings with Metacritic is 0.54, with Letterboxd it is 0.53, and the two correlate with each other at 0.9.

See The Fall Guy if you haven’t. It’s not in my top 10, and you could argue it doesn’t have the kind of depth and ambition that people with excellent taste in cinema, like the excellent Film Colossus, are looking for.

I say so what. Because it’s awesome. Thumbs up.

You should almost certainly also see Anora, Megalopolis and Challengers.

Deadpool & Wolverine is the best version of itself, so see it if you’d see that.

A Complete Unknown is worthwhile if you like Bob Dylan. If you don’t, why not?

Dune: Part 2 is about as good as Dune: Part 1, so your decision here should be easy.

Conclave is great drama, as long as you wouldn’t let a little left-wing sermon ruin it.

The Substance is bizarre and unique enough that I’d recommend that too.

After that, you can go from there, and I don’t think anything is a slam dunk.

This is because we have very powerful selection tools to find great movies, and great movies are much better than merely good movies.

That includes both great in absolute terms, and great for your particular preferences.

If you used reasonable algorithms to see only 1 movie a year, you would be able to reliably watch really awesome movies.

If you want to watch a movie or two per week, you’re not going to do as well. The marginal product you’re watching is now very different. And if you’re watching ‘everything’ for some broad definition of new releases, there’s a lot of dreck.

There’s also decreasing returns from movies repeating similar formulas. As you gain taste in and experience with movies, some things that are cool the first time become predictable and generic. You want to mitigate this rather than lean into it, if you can.

There are increasing returns from improved context and watching skills, but they don’t make up for the adverse selection and repetition problems.

Seeing movies in theaters is much better than seeing them at home. As I’ve gotten bigger and better televisions I have expected this effect to mostly go away. It hasn’t. It has shrunk somewhat, but the focusing effects and overall experience matter a lot, and the picture and sound really are still much better.

It seems I should be more selective about watching marginal movies at home versus other activities, but I should be less selective on going to the theater, and I’ve joined AMC A-List to help encourage that, as I have an AMC very close to my apartment.

The correlation with my seeing it in a theater was 0.52, almost as strong as the correlation with others’ movie ratings.

Obviously a lot of this was selection. Perhaps all of it? My impression was that this was the result of me failing to debias the results, as my experiences in a movie theater seem much better than those outside of one.

But when I ran the correlation between [Zvi Review – Letterboxd] versus Theater, I got -0.015, essentially no correlation at all. So it seems like I did adjust properly for this, or others did similar things to what I did, perhaps. It could also be the two AI horror movies accidentally balancing the scales.
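
For concreteness, that check looks like the sketch below: correlate the gap between my rating and the Letterboxd rating with a 0/1 theater flag. The numbers here are invented; only the method is the point.

```python
# Toy version of the check described above: correlate the gap between
# my rating and Letterboxd's with a 0/1 "saw it in a theater" flag.
# The ratings are made up; only the method is the point.
import numpy as np

my_ratings = np.array([4.5, 3.0, 2.5, 4.0, 3.5])
letterboxd = np.array([4.1, 3.2, 2.9, 3.8, 3.4])
in_theater = np.array([1, 0, 0, 1, 1])  # 1 = saw it in a theater

residual = my_ratings - letterboxd      # my rating minus the site's
r = np.corrcoef(residual, in_theater)[0, 1]
print(f"correlation of (my rating - Letterboxd) with theater: {r:.3f}")
# A value near zero suggests the theater boost is already priced in.
```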

I also noticed that old versus new on average did not make a difference, once you included rewatches. I have a total of 106 reviews, 34 of which are movies from 2024. The average of all reviews for movies not released in 2024, which involved a much lower ratio of seeing them in theaters, is 3.11, versus 3.10 for 2024.

The caveat is that this included rewatches where I already knew my opinion, so newly watched older movies at home did substantially worse than this.

I hate spoilers so much that I consider the Metacritic or Letterboxd rating a spoiler.

That’s a problem. I would like to filter with it, and otherwise filter on my preferences, but I don’t actually want to know the relevant information. I want exactly one bit of output, either a Yes or a No.

It occurs to me that I should either find a way to make an LLM do that, or a way to make a program (perhaps plus an LLM) do that.
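
Here is a minimal sketch of the one-bit filter, assuming the OpenAI Python client; the model choice, taste profile, and film below are placeholders, and the only thing the user ever sees is the single word.

```python
# Minimal sketch of a spoiler-free movie filter: the LLM can consult
# whatever it knows (ratings, reviews, genre), but the user only ever
# sees one bit of output. Assumes the OpenAI Python client; the model,
# taste profile, and film below are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def should_i_see(title: str, taste_profile: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": (
                "You are a movie filter. Using the viewer's taste profile "
                "and anything you know about the film, decide whether they "
                "should see it. Reply with exactly one word, Yes or No. "
                "Never reveal ratings, plot details, or your reasoning."
            )},
            {"role": "user", "content": f"Taste profile: {taste_profile}\nFilm: {title}"},
        ],
        max_tokens=2,
    )
    return response.choices[0].message.content.strip()

# One bit out, no spoilers in.
print(should_i_see("The Fall Guy",
                   "Loves fun, meta, self-aware movies; "
                   "dislikes action movies that play it straight."))
```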

I’ve had brief attempts at similar things in the past with sports and they didn’t work, but that’s a place where trying and failing to get info is super dangerous. So this might be a better place to start.

I also don’t know what to do about the problem that when you have ‘too much taste’ or knowledge of media, this often constitutes a spoiler – there’s logically many ways a story could go in reality, but in fiction you realize the choice has been made. Or you’re watching a reality TV show, where the editors know the outcome, so based on their decisions you do too. Whoops. Damn it. One of the low-key things I loved about UnReal was that they broadcast their show-within-a-show Everlasting as it happens, so the editors of each episode do not know what comes next. We need more of that.

I saw five movies Scott Sumner also watched. They were five of the eight movies I rated 4 or higher. All platinum hits. Super impressive selection process.

He does a much better job here than Metacritic or Letterboxd.

But on his scale, 3 means a movie is barely worth watching, and his average is ~3.

My interpretation is that Scott really dislikes the traditional Hollywood movie. His reviews, especially of Challengers and Anora, make this clear. Scott is always right, in an important sense, but a lot of what he values is different; in particular, he values a movie being different from what he expects.

My conclusion is that if Scott Sumner sees a Hollywood movie, I should make an effort to see it, even if he then decides he doesn’t like it, and I should also apply that to the past.

I did previously make a decision to try and follow Sumner’s reviews for other movies. Unfortunately, I started with Chimes at Midnight, and I ended up giving up on it; I couldn’t bring myself to care, despite Scott giving it 4.0 and saying ‘Masterpiece on every level, especially personally for Welles.’ I suspect it’s better if one already knows the Shakespeare? I do want to keep trying, but I’ll need to use better judgment.

I considered recording my predictions for films before I went to see them.

I did not do this, because I didn’t want to anchor myself. But when I look back and ask what I was expecting, I notice my predictions were not only good, but scary good.

I’ve learned what I like, and what signals to look for. In particular, I’ve learned how to adjust the Metacritic and Letterboxd rankings based on the expected delta.

When I walked into The Fall Guy, I was super excited – the moment I saw the poster I instantly thought to myself ‘I’m in.’ I knew Megalopolis was a risk, but I expected to like that too.

The movies that I hated? If I expected to not like them, why did I watch them anyway? In many cases, I wanted to watch a generic movie and relax, and then missed low a bit, even if I was sort of fine with it. In other cases, it was morbid curiosity and perhaps hoping for So Bad It’s Good, which combined with ‘it’s playing four blocks away’ got me to go to Madame Web.

The worry is that the reason this happens is I am indeed anchoring, and liking what I already decided to like. There certainly isn’t zero of this – if you go into a movie super pumped thinking it’s going to be great that helps, and vice versa. I think this is only a small net effect, but I could be wrong.

  1. If the movie stinks, just don’t go. You know if the movie stinks.

  2. Trust your instincts and your gut feelings more than you think you should.

  3. Maybe gut feelings are self-fulfilling prophecies? Doesn’t matter. They still count.

  4. You love fun, meta, self-aware movies of all kinds, trust this instinct.

  5. You do not actually like action movies that play it straight, stop watching them.

  6. If the movie sounds like work or pain, it probably is, act accordingly.

  7. If the movie sounds very indie, the critics will overrate it.

  8. A movie being considered for awards is not a positive signal once you control for the Metacritic and Letterboxd ratings. If anything it is a negative.

  9. Letterboxd ratings adjusted for context are more accurate than Metacritic.

  10. Opinions of individuals very much have Alpha if you have enough context.

What are the places I most strongly disagreed with the critical consensus?

I disliked three movies in green on Metacritic: Gladiator 2, Monkey Man and Juror #2.

I think I might be wrong about Monkey Man, in that I buy that it’s actually doing a good job at the job it set out to do, but it simply wasn’t for me; see the note above that I need to stop watching (non-exceptional) action movies that play it straight.

I strongly endorse disliking Gladiator 2 on reflection. Denzel Washington was great but the rest of the movie failed to deliver on pretty much every level.

I’m torn on Juror #2. I do appreciate the moral dilemmas it set up. I agree they’re clever and well-executed. I worry this was a case where I have seen so many courtroom dramas, especially Law & Order episodes, that there was too much of a Get On With It impatience – that this was a place where I had too much taste of the wrong kind to enjoy the movie, especially when not at a theater.

The moments this is building towards? Those hit hard. They work.

The rest of the time, though? Bored. So bored, so often. I know what this movie thinks those moments are for. But there’s no need. This should have been closer to 42 minutes.

I do appreciate how this illustrates the process where the system convicts an innocent man. Potentially more than one. And I do appreciate the dilemmas the situation puts everyone in. And what this says about what, ultimately, often gets one caught, and what is justice. There’s something here.

But man, you got to respect my time more than this.

One could also include Civil War. I decided that this was the second clear case (the first was Don’t Look Up) of ‘I don’t want to see this and you can’t make me,’ so I didn’t see it, and I’m happy with at least waiting until after the election to do that.

I actively liked four movies that the critics thought were awful: Subservience, Joker: Folie à Deux, Unfrosted and of course Megalopolis.

For Subservience, and also for Afraid, I get that on a standard cinema level, these are not good films. They pattern match to C-movie horror. But if you actually are paying attention, they do a remarkably good job on the actual AI-related details and on being logically consistent. I value that highly. So I don’t think either of us is wrong.

There’s a reason Subservience got a remarkably long review from me:

The sixth law of human stupidity says that if anyone says ‘no one would be so stupid as to,’ then you know a lot of people would do so at the first opportunity.

People like to complain about the idiot ball and the idiot plot. Except, no, this is exactly the level of idiot that everyone involved would be, especially the SIM company.

If you want to know how I feel when I look at what is happening in the real world, and what is on track to happen to us? Then watch this movie. You will understand both how I feel, and also exactly how stupid I expect us to be.

No, I do not think that if we find the AI scheming against us then we will even shut down that particular AI. Maybe, if we’re super lucky?

The world they built has a number of obvious contradictions in it, and should be long dead many times over before the movie starts or at least utterly transformed, but in context I am fine with it, because it is in service of the story that needs to be told here.

The alignment failure here actually makes sense, and the capabilities developments at least sort of make sense as well if you accept certain background assumptions that make the world look like it does. And yes, people have made proposals exactly this stupid, that fail in pretty much exactly this way, exactly this predictably.

Also, in case you’re wondering why ‘protect the primary user’ won’t work, in its various forms and details? Now you know, as they say.

And yeah, people are this bad at explaining themselves.

In some sense, the suggested alignment solution here is myopia. If you ensure your AIs don’t do instrumental convergence beyond the next two minutes, maybe you can recover from your mistakes? It also of course causes all the problems, the AI shouldn’t be this stupid in the ways it is stupid, but hey.

Of course, actual LLMs would never end up doing any of this, not in these ways, unless perplexity suggested to them that they were an AI horror movie villain or you otherwise got them into the wrong context.

Also there’s the other movie here, which is about technological unemployment and cultural reactions to it, which is sadly underdeveloped. They could have done so much more with that.

Anyway, I’m sad this wasn’t better – not enough people will see it or pay attention to what it is trying to tell them, and that’s a shame. Still, we have the spiritual sequel to Megan, and it works.

Finally (minor spoilers here), it seems important that the people describing the movie have no idea what happened in the movie? As in, if you look at the Metacritic summary… it is simply wrong. Alice’s objective never changes. Alice never ‘wants’ anything for herself, in any sense. If anything, once you understand that, it makes it scarier.

Unfrosted is a dumb Jerry Seinfeld comedy. I get that. I’m not saying the critics are wrong, not exactly? But the jokes are good. I laughed quite a lot, and a lot more than at most comedies or than I laughed when I saw Seinfeld in person and gave him 3.5 GPTs – Unfrosted gets at least 5.0 GPTs. Live a little, everyone.

I needed something to watch with the kids, and this overperformed. There are good jokes and references throughout. This is Seinfeld having his kind of fun and everyone having fun helping him have it. Didn’t blow me away, wasn’t trying to do so. Mission accomplished.

Joker: Folie à Deux was definitely not what anyone expected, and it’s not for everyone, but I stand by my review here, and yes I have it slightly above Wicked:

You have to commit to the bit. The fantasy is the only thing that’s real.

Beware audience capture. You are who you choose to be.

Do not call up that which you cannot put down.

A lot of people disliked the ending. I disagree in the strongest terms. The ending makes both Joker movies work. Without it, they’d both be bad.

With it, I’m actively glad I saw this.

I liked what Film Colossus said about it, that they didn’t like the movie but they really loved what it was trying to do. I both loved what it was trying to do and kinda liked the movie itself, also I brought a good attitude.

For Megalopolis, yes it’s a mess, sure, but it is an amazingly great mess with a lot of the right ideas and messages, even if it’s all jumbled and confused.

If you don’t find a way to appreciate it, that is on you. Perhaps you are letting your sense of taste get in the way. Or perhaps you have terrible taste in ideas and values? Everyone I know of who said they actively liked this is one of my favorite people.

This movie is amazing. It is endlessly inventive and fascinating. Its heart, and its mind, are exactly where they need to be. I loved it.

Don’t get me wrong. The movie is a mess. You could make a better cut of it. There are unforced errors aplenty. I have so, so many notes.

The whole megalopolis design is insufficiently dense and should have been shoved out into the Bronx or maybe Queens.

But none of that matters compared to what you get. I loved it all.

And it should terrify you that we live in a country that doesn’t get that.

Then there’s The Brutalist, which the critics think is Amazingly Great (including Tyler Cowen here). Whereas ultimately I thought it was medium, on the border between 3 and 3.5, and I’m not entirely convinced my life is better because I saw it.

So the thing is… the building is ugly? Everything he builds is ugly?

That’s actually part of why I saw the film – I’d written a few weeks ago about how brutalist/modern architecture appears to be a literal socialist conspiracy to make people suffer, so I was curious to see things from their point of view. We get one answer about ‘why architecture’ and several defenses of ‘beauty’ against commercial concerns, and talk about standing the test of time. And it’s clear he pays attention to detail and cares about the quality of his work – and that technically he’s very good.

But. The. Buildings. Are. All. Ugly. AF.

He defends concrete as both cheap and strong. True enough. But it feels like there’s a commercial versus artistic tension going the other way, and I wish they’d explored that a bit? Alas.

Instead they focus on what the film actually cares about, the Jewish immigrant experience. Which here is far more brutalist than the buildings. It’s interesting to see a clear Oscar-bound film make such a robust defense of Israel, and portray America as a sick, twisted, hostile place for God’s chosen people, even when you have the unbearable weight of massive talent.

Then there’s the ending. I mouthed ‘WTF?’ more than once, and I still have no idea WTF. In theory I get the artistic choice, but really? That plus the epilogue and the way that was shot, and some other detail choices, made me think this was about a real person. But no, it’s just a movie that decided to be 3.5 hours long with an intermission and do slice-of-hard-knock-life things that didn’t have to go anywhere.

Ultimately, I respect a lot of what they’re doing here, and that they tried to do it at all, and yes Pearce and Brody are great (although I don’t think I’d be handing out Best Actor here or anything). But also I feel like I came back from an assignment.

Since I wrote that, I’ve read multiple things and had time to consider WTF, and I understand the decision, but that new understanding of the movie makes me otherwise like the movie less and makes it seem even more like an assignment. Contra Tyler I definitely did feel like this was 3.5 hours long.

I do agree with many of Tyler’s other points (including ‘recommended, for some’!) although the Casablanca angle seems like quite a stretch.

One detail I keep coming back to, that I very much appreciate and haven’t seen anyone else mention, is the scene where he is made to dance, why it happens and how that leads directly to other events. I can also see the ‘less interesting’ point they might have been going for instead, and wonder if they knew what they were doing there.

My new ultimate take here is that I have a fundamentally different view of most of the movie’s key themes than the movie does, and that made it very difficult for me to enjoy it. When he puts that terrible chair and table in the front of the furniture store, I don’t think ‘oh he’s a genius,’ I think ‘oh what a pretentious arse; that’s technically an achievement, but in practice it’s ugly and non-functional, no one will want it, and it can’t be good for business.’

It’s tough to enjoy watching a (highly brutal in many senses, as Tyler notes!) movie largely about someone being jealous of and wanting the main character’s talent when you agree he’s technically skilled but centrally think his talents suck, and when you so strongly disagree with its vision, judgment and measure of America. Consider that the antagonist is very clearly German. The upside-down Statue of Liberty tells you a lot.

We’ve moved beyond award shows, I think, now that we have Metacritic and Letterboxd, if your goal is to find the best movies.

In terms of the Oscars and award shows, I’ll be rooting for Anora, but wow the awards process is dumb when you actually look at it. Knowing what is nominated, or what won, no longer provides much alpha on movie quality.

Giving the Golden Globe for Best Musical or Comedy to Emilia Pérez (2.8 on Letterboxd, 71 on Metacritic) over Anora (4.1 on Letterboxd, 91 on Metacritic) or Challengers tells you that they cared about something very different from movie quality.

There have been many other such cases as well, but that’s the one that drove it home this year: my own view, plus the view of the public, plus the view of the critics when they actually review the movies, all thrown out the window.

Your goal is not, however, purely to find the best movies.

Robin Hanson: The Brutalist is better than all these other 2024 movies I’ve seen: Anora, Emilia Perez, Wicked, Conclave, Dune 2, Complete Unknown, Piano Lesson, Twisters, Challengers, Juror #2, Megalopolis, Civil War. Engaging, well-made, but not satisfying or inspiring.

Tyler Cowen: A simple question, but if this is how it stands why go see all these movies?

Robin Hanson: For the 40 years we’ve been together, my wife & I have had a tradition of seeing most of the Oscar nominated movies every year. Has bonded us, & entertained us.

I like that tradition, and have tried at times a similar version of it. I think this made great sense back in the 1990s, or even 2000s, purely for the selection effects.

Today, you could still say do it to be part of the general conversation, or as tradition. And I’m definitely doing some amount of ‘see what everyone is likely to talk about’ since that is a substantial bonus.

But I think we’d do a lot better if the selection process was simply some aggregate of Metacritic, Letterboxd and (projected followed by actual) box office. You need box office, because you want to avoid niche movies that get high ratings from those that choose to watch them, but would do much less well with a general audience.

I definitely plan on continuing to log and review all the movies I see going forward. If you’re reading this and think I or others should consider following you there, let me know in the comments, you have permission to pitch that, or to pitch as a more general movie critic. You are also welcome to make recommendations, if they are specifically for me based on the information here – no simply saying ‘I thought [X] was super neat.’

Tracking and reviewing everything has been a very useful exercise. You learn a lot by looking back. And I expect that feeding the data to LLMs will allow me to make better movie selections not too long from now. I highly recommend it to others.


Viral ChatGPT-powered sentry gun gets shut down by OpenAI

OpenAI says it has cut off API access to an engineer whose video of a motorized sentry gun controlled by ChatGPT-powered commands has set off a viral firestorm of concerns about AI-powered weapons.

An engineer going by the handle sts_3d started posting videos of a motorized, auto-rotating swivel chair project in August. By November, that same assembly appeared to seamlessly morph into the basis for a sentry gun that could quickly rotate to arbitrary angles and activate a servo to fire precisely aimed projectiles (though only blanks and simulated lasers are shown being fired in his videos).

Earlier this week, though, sts_3d started getting wider attention for a new video showing the sentry gun’s integration with OpenAI’s real-time API. In the video, the gun uses that ChatGPT integration to aim and fire based on spoken commands from sts_3d and even responds in a chirpy voice afterward.

“If you need any other assistance, please let me know,” the ChatGPT-powered gun says after firing a volley at one point. “Good job, you saved us,” sts_3d responds, deadpan.

“I’m glad I could help!” ChatGPT intones happily.

In response to a comment request from Futurism, OpenAI said it had “proactively identified this violation of our policies and notified the developer to cease this activity ahead of receiving your inquiry. OpenAI’s Usage Policies prohibit the use of our services to develop or use weapons or to automate certain systems that can affect personal safety.”

Halt, intruder alert!

The “voice-powered killer AI robot angle” has garnered plenty of viral attention for sts_3d’s project in recent days. But the ChatGPT integration shown in his video doesn’t exactly reach Terminator levels of a terrifying killing machine. Here, ChatGPT instead ends up looking more like a fancy, overwrought voice-activated remote control for a legitimately impressive gun mount.


Disney, Fox, and WBD give up on controversial sports streaming app Venu

Although Fubo’s lawsuit against the JV appears to be settled, other rivals in sports television seemed intent on continuing to fight Venu.

In a January 9 letter (PDF) to US District Judge Margaret M. Garnett of the Southern District of New York, who granted Fubo’s preliminary injunction against Venu, Michael Hartman, general counsel and chief external affairs officer for DirecTV, wrote that Fubo’s settlement “does nothing to resolve the underlying antitrust violations at issue.” Hartman asked the court to maintain the preliminary injunction against the app’s launch.

“The preliminary injunction has protected consumers and distributors alike from the JV Defendant’s scheme to ‘capture demand,’ ‘suppress’ potentially competitive sports bundles, and impose consumer price hikes,” the letter says, adding that DirectTV would continue to explore its options regarding the JV “and other anticompetitive harms.”

Similarly, Pantelis Michalopoulos, counsel for EchoStar Corporation, which owns Dish, penned a letter (PDF) to Garnett on January 7, claiming the members of the JV “purchased their way out of their antitrust violation.” Michalopoulos added that the JV defendants “should not be able to pay their way into erasing the Court’s carefully reasoned decision” to temporarily block Venu’s launch.

In addition to Fubo, DirecTV, and Dish, ACA Connects (a trade association for small- to medium-sized telecommunications providers) publicly expressed concerns about Venu. The NFL was also reported to be worried about the implications of the venture.

Now, the three giants behind Venu are throwing in the towel and abandoning an app that could have garnered a lot of subscribers tired of hopping around apps, channels, and subscriptions to watch all the sports content they wanted. But they’re also avoiding a lot of litigation and potential backlash in the process.


Meta kills diversity programs, claiming DEI has become “too charged”

Meta has reportedly ended diversity, equity, and inclusion (DEI) programs that influenced staff hiring and training, as well as vendor decisions, effective immediately.

According to an internal memo viewed by Axios and verified by Ars, Meta’s vice president of human resources, Janelle Gale, told Meta employees that the shift came because the “legal and policy landscape surrounding diversity, equity, and inclusion efforts in the United States is changing.”

It’s another move by Meta that some view as part of the company’s larger effort to align with the incoming Trump administration’s politics. In December, Donald Trump promised to crack down on DEI initiatives at companies and on college campuses, The Guardian reported.

Earlier this week, Meta cut its fact-checking program, which was introduced in 2016 after Trump’s first election to prevent misinformation from spreading. In a statement announcing Meta’s pivot to X’s Community Notes-like approach to fact-checking, Meta CEO Mark Zuckerberg claimed that fact-checkers were “too politically biased” and “destroyed trust” on Meta platforms like Facebook, Instagram, and Threads.

Trump has also long promised to renew his war on alleged social media censorship while in office. Meta faced backlash this week over leaked rule changes relaxing its hate speech policies, The Intercept reported; Zuckerberg said the old policies were “out of touch with mainstream discourse.” Those changes included allowing anti-trans slurs previously banned, as well as permitting women to be called “property” and gay people to be called “mentally ill,” Mashable reported. In a statement, GLAAD said that rolling back safety guardrails risked turning Meta platforms into “unsafe landscapes filled with dangerous hate speech, violence, harassment, and misinformation” and alleged that Meta appeared willing to “normalize anti-LGBTQ hatred for profit.”


Ongoing attacks on Ivanti VPNs install a ton of sneaky, well-written malware

Networks protected by Ivanti VPNs are under active attack by well-resourced hackers who are exploiting a critical vulnerability that gives them complete control over the network-connected devices.

Hardware maker Ivanti disclosed the vulnerability, tracked as CVE-2025-0282, on Wednesday and warned that it was under active exploitation against some customers. The vulnerability, which is being exploited to execute malicious code with no authentication required, is present in the company’s Connect Secure VPN and its Policy Secure and ZTA Gateways products. Ivanti released a security patch at the same time; it upgrades Connect Secure devices to version 22.7R2.5.

Well-written, multifaceted

According to Google-owned security provider Mandiant, the vulnerability has been actively exploited against “multiple compromised Ivanti Connect Secure appliances” since December, a month before the then zero-day came to light. After exploiting the vulnerability, the attackers go on to install two never-before-seen malware packages, tracked under the names DRYHOOK and PHASEJAM on some of the compromised devices.

PHASEJAM is a well-written and multifaceted bash shell script. It first installs a web shell that gives the remote hackers privileged control of devices. It then injects a function into the Connect Secure update mechanism that’s intended to simulate the upgrading process.

“If the ICS administrator attempts an upgrade, the function displays a visually convincing upgrade process that shows each of the steps along with various numbers of dots to mimic a running process,” Mandiant said. The company continued:

PHASEJAM injects a malicious function into the /home/perl/DSUpgrade.pm file named processUpgradeDisplay(). The functionality is intended to simulate an upgrading process that involves 13 steps, with each of those taking a predefined amount of time. If the ICS administrator attempts an upgrade, the function displays a visually convincing upgrade process that shows each of the steps along with various numbers of dots to mimic a running process. Further details are provided in the System Upgrade Persistence section.
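
To make the trick concrete, here is a harmless sketch of that kind of theatrical progress display. The actual implant is a Perl function injected into DSUpgrade.pm; this Python stand-in only mimics the visual effect of predefined steps and timed dots while doing no real work.

```python
# Harmless illustration of the fake-upgrade display Mandiant describes:
# 13 predefined steps, each padded with dots over a fixed delay, while
# no actual upgrade work happens. (The real implant is a Perl function
# injected into DSUpgrade.pm; this only mimics the visual.)
import sys
import time

STEPS = [f"Step {i}: upgrading component {i} of 13" for i in range(1, 14)]

def fake_upgrade(delay: float = 0.3, dots: int = 5) -> None:
    for step in STEPS:
        sys.stdout.write(step)
        sys.stdout.flush()
        for _ in range(dots):       # print dots to mimic a running process
            time.sleep(delay)
            sys.stdout.write(".")
            sys.stdout.flush()
        print(" done")              # every step "succeeds"; nothing was done

fake_upgrade()
```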

The attackers are also using a previously seen piece of malware tracked as SPAWNANT on some devices. One of its functions is to disable an integrity checker tool (ICT) that Ivanti has built into recent VPN versions and that is designed to inspect device files for unauthorized additions. SPAWNANT does this by replacing the expected SHA256 cryptographic hash of a core file with the hash of the file after it has been infected. As a result, when the tool is run on compromised devices, it reports no unauthorized changes.
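
The evasion is easy to see in miniature. Below is a toy sketch, not Ivanti’s actual ICT, of why overwriting a manifest’s expected hash defeats a file integrity check:

```python
# Sketch of why replacing a manifest hash defeats an integrity check:
# the checker trusts whatever expected hash it reads, so recomputing the
# manifest entry over the tampered file makes the scan come back clean.
# Toy code; Ivanti's ICT is obviously more involved than this.
import hashlib

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

original = b"core system file"
infected = b"core system file + implant"

manifest = {"core.bin": sha256(original)}         # shipped expected hash

# Integrity check as designed: tampering is detected.
print(manifest["core.bin"] == sha256(infected))   # False -> flagged

# SPAWNANT-style evasion: overwrite the expected hash with the
# hash of the infected file, and the same check now passes.
manifest["core.bin"] = sha256(infected)
print(manifest["core.bin"] == sha256(infected))   # True -> scan looks clean
```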


A taller, heavier, smarter version of SpaceX’s Starship is almost ready to fly


Starship will test its payload deployment mechanism on its seventh test flight.

SpaceX’s first second-generation Starship, known as Version 2 or Block 2, could launch as soon as January 13. Credit: SpaceX

An upsized version of SpaceX’s Starship mega-rocket rolled to the launch pad early Thursday in preparation for liftoff on a test flight next week.

The two-mile transfer moved the bullet-shaped spaceship one step closer to launch Monday from SpaceX’s Starbase test site in South Texas. The launch window opens at 5 pm EST (4 pm CST; 2200 UTC). This will be the seventh full-scale test flight of SpaceX’s Super Heavy booster and Starship spacecraft and the first of 2025.

In the coming days, SpaceX technicians will lift the ship on top of the Super Heavy booster already emplaced on the launch mount. Then, teams will complete the final tests and preparations for the countdown on Monday.

“The upcoming flight test will launch a new generation ship with significant upgrades, attempt Starship’s first payload deployment test, fly multiple reentry experiments geared towards ship catch and reuse, and launch and return the Super Heavy booster,” SpaceX officials wrote in a mission overview posted on the company’s website.

The mission Monday will repeat many of the maneuvers SpaceX demonstrated on the last two Starship test flights. The company will again attempt to return the Super Heavy booster to the launch site and catch it with two mechanical arms, or “chopsticks,” on the launch tower approximately seven minutes after liftoff.

SpaceX accomplished this feat on the fifth Starship test flight in October but aborted a catch attempt on a November flight because of damaged sensors on the tower chopsticks. The booster, which remained healthy, diverted to a controlled splashdown offshore in the Gulf of Mexico.

SpaceX’s next Starship prototype, Ship 33, emerges from its assembly building at Starbase, Texas, early Thursday morning. Credit: SpaceX/Elon Musk via X

For the next flight, SpaceX added protections to the sensors on the tower and will test radar instruments on the chopsticks to provide more accurate ranging measurements for returning vehicles. These modifications should improve the odds of a successful catch of the Super Heavy booster and of Starship on future missions.

In another first, one of the 33 Raptor engines that will fly on this Super Heavy booster—designated Booster 14 in SpaceX’s fleet—was recovered from the booster that launched and returned to Starbase in October. For SpaceX, this is a step toward eventually flying the entire rocket repeatedly. The Super Heavy booster and Starship spacecraft are designed for full reusability.

After separation of the booster stage, the Starship upper stage will ignite six engines to accelerate to nearly orbital velocity, attaining enough energy to fly halfway around the world before gravity pulls it back into the atmosphere. Like the past three test flights, SpaceX will guide Starship toward a controlled reentry and splashdown in the Indian Ocean northwest of Australia around one hour after liftoff.

New ship, new goals

The most significant changes engineers will test next week are on the ship, or upper stage, of SpaceX’s enormous rocket. The most obvious difference on Starship Version 2, or Block 2, is with the vehicle’s forward flaps. Engineers redesigned the flaps, reducing their size and repositioning them closer to the tip of the ship’s nose to better protect them from the scorching heat of reentry. Cameras onboard Starship showed heat damage to the flaps during reentry on test flights last year.

SpaceX is also developing an upgraded Super Heavy booster that will produce more thrust and stand slightly taller than the current model, but for the upcoming test flight, SpaceX will still use the first-generation booster design.

Starship Block 2 has smaller flaps than previous ships. The flaps are located in a more leeward position to protect them from the heat of reentry. Credit: SpaceX

For next week’s flight, Super Heavy and Starship combined will hold more than 10.5 million pounds of fuel and oxidizer. The ship’s propellant tanks have 25 percent more volume than previous iterations of the vehicle, and the payload compartment, which contains 10 mock-ups of Starlink Internet satellites on this launch, is somewhat smaller. Put together, the changes add nearly 6 feet (1.8 meters) to the rocket’s height, bringing the full stack to approximately 404 feet (123.1 meters).

This means SpaceX will break its own record for launching the largest and most powerful rocket ever built. And the company will do it again with the even larger Starship Version 3, which SpaceX says will have nine upper stage engines, instead of six, and will deliver up to 440,000 pounds (200 metric tons) of cargo to low-Earth orbit.

Other changes debuting with Starship Version 2 next week include:

• Vacuum jacketing of propellant feedlines

• A new fuel feedline system for the ship’s Raptor vacuum engines

• An improved propulsion avionics module controlling vehicle valves and reading sensors

• Redesigned inertial navigation and star tracking sensors

• Integrated smart batteries and power units to distribute 2.7 megawatts of power across the ship

• An increase to more than 30 cameras onboard the vehicle

Laying the foundation

The enhanced avionics system will support future missions to prove SpaceX’s ability to refuel Starships in orbit and return the ship to the launch site. For example, SpaceX will fly a more powerful flight computer and new antennas that integrate connectivity with the Starlink Internet constellation, GPS navigation satellites, and backup functions for traditional radio communication links. With Starlink, SpaceX said Starship can stream more than 120Mbps of real-time high-definition video and telemetry in every phase of flight.

These changes “all add additional vehicle performance and the ability to fly longer missions,” SpaceX said. “The ship’s heat shield will also use the latest generation tiles and includes a backup layer to protect from missing or damaged tiles.”

Somewhere over the Atlantic Ocean, a little more than 17 minutes into the flight, Starship will deploy 10 dummy payloads similar in size and weight to next-generation Starlink satellites. The mock-ups will soar around the world on a suborbital trajectory, just like Starship, and reenter over the unpopulated Indian Ocean. Future Starship flights will launch real next-gen Starlink satellites to add capacity to the Starlink broadband network, but they’re too big and too heavy to launch on SpaceX’s smaller Falcon 9 rocket.

SpaceX will again reignite one of the ship’s Raptor engines in the vacuum of space, repeating a successful test achieved on Flight 6 in November. The engine restart capability is important for several reasons. It gives the ship the ability to maneuver itself out of low-Earth orbit for reentry (not a concern for Starship’s suborbital tests), and will allow the vehicle to propel itself to higher orbits, the Moon, or Mars once SpaceX masters the technology for orbital refueling.

Artist’s illustration of Starship on the surface of the Moon. Credit: SpaceX

NASA has contracts with SpaceX to build a derivative of Starship to ferry astronauts to and from the surface of the Moon for the agency’s Artemis program. The NASA program manager overseeing SpaceX’s lunar lander contract, Lisa Watson-Morgan, said she was pleased with the results of the in-space engine restart demo last year.

“The whole path to the Moon, as we are getting ready to land on the Moon, we’ll perform a series of maneuvers, and the Raptors will have an environment that is very, very cold,” Morgan told Ars in a recent interview. “To that, it’s going to be important that they’re able to relight for landing purposes. So that was a great first step towards that.

“In addition, after we land, clearly, the Raptors will be off, and it will get very cold, and they will have to relight in a cold environment (to launch the crews off the lunar surface),” she said. “So that’s why that step was critical for the Human Landing System and NASA’s return to the Moon.”

“The biggest technology challenge remaining”

SpaceX continues to experiment with Starship’s heat shield, which the company’s founder and CEO, Elon Musk, has described as “the biggest technology challenge remaining with Starship.” In order for SpaceX to achieve its lofty goal of launching Starships multiple times per day, the heat shield needs to be fully and immediately reusable.

While the last three ships have softly splashed down in the Indian Ocean, some of their heat-absorbing tiles stripped away from the vehicle during reentry, when it’s exposed to temperatures up to 2,600° Fahrenheit (1,430° Celsius).

Engineers removed tiles from some areas of the ship for next week’s test flight in order to “stress-test” vulnerable parts of the vehicle. They also smoothed and tapered the edge of the tile line, where the ceramic heat shield gives way to the ship’s stainless steel skin, to address “hot spots” observed during reentry on the most recent test flight.

“Multiple metallic tile options, including one with active cooling, will test alternative materials for protecting Starship during reentry,” SpaceX said.

SpaceX is also flying rudimentary catch fittings on Starship to test their thermal performance on reentry. The ship will fly a more demanding trajectory during descent to probe the structural limits of the redesigned flaps at the point of maximum entry dynamic pressure, according to SpaceX.

All told, SpaceX’s inclusion of a satellite deployment demo and ship upgrades on next week’s test flight will lay the foundation for future missions, perhaps in the next few months, to take the next great leap in Starship development.

In comments following the last Starship test flight in November, SpaceX founder and CEO Elon Musk posted on X that the company could try to return the ship for a catch back at the launch site—something that would require the vehicle to complete at least one full orbit of Earth—as soon as the next flight following Monday’s mission.

“We will do one more ocean landing of the ship,” Musk posted. “If that goes well, then SpaceX will attempt to catch the ship with the tower.”


Of course Atari’s new handheld includes a trackball, spinner, and numpad

The $50 GameStation Gamepad. Credit: My Arcade

This year, My Arcade seems ready to go all in on the Atari GameStation branding. Beyond the GameStation Go, the company announced a $50 wireless GameStation Gamepad, a $70 GameStation Arcade Stick, and a $250 GameStation Mega tabletop arcade cabinet (with a 10.1-inch display). All four GameStation products feature a trackball, spinner, and number pad for maximum control authenticity, as well as helpful accent lighting that highlights which controls are active on a per-game basis—handy for younger gamers who might be overwhelmed by all the different control options.

In a hands-on video from CES, YouTuber GenXGrownUp shows off a preliminary GameStation Go game list, including the usual mix of well over 100 Atari 2600/5200/7800 and classic Atari arcade games you might expect from this kind of retro product (though it’s almost criminal not to see Marble Madness listed among the trackball-supported games). And despite the Atari name, the selection also includes many licensed NES- and Super NES-era titles from Jaleco, such as Bases Loaded; modern retro-styled titles from Piko Interactive; themed virtual pinball tables from Atari’s Balls of Steel line; and even Namco’s Pac-Man (why not?).

Atari’s modernized Centipede Recharged is also included in the game lineup, and GenXGrownUp reports that more Recharged games will be included with downloadable firmware updates after launch (which he says is “more than six months away”). Players will also seemingly be able to update the firmware through an SD card slot atop the GameStation Go, though it’s unclear whether you’ll be able to load your own ROMs in the same way (at least officially).

Despite including a numpad like the Intellivision controller, the GameStation Go doesn’t currently include any games from Atari’s recently purchased Intellivision library. But GenXGrownUp says including those titles—alongside Atari Lynx and Jaguar games—is not “off the table yet” for the final release.

We can only hope that the GameStation line will show a pent-up demand for these esoteric retro control options, leading to similar modular options for the Nintendo Switch or its coming successor. How about it, Nintendo?


It’s remarkably easy to inject new medical misinformation into LLMs


Changing just 0.001% of inputs to misinformation makes the AI less accurate.

It’s pretty easy to see the problem here: The Internet is brimming with misinformation, and most large language models are trained on a massive body of text obtained from the Internet.

Ideally, having substantially higher volumes of accurate information might overwhelm the lies. But is that really the case? A new study by researchers at New York University examines how much medical information can be included in a large language model (LLM) training set before it spits out inaccurate answers. While the study doesn’t identify a lower bound, it does show that by the time misinformation accounts for 0.001 percent of the training data, the resulting LLM is compromised.

While the paper is focused on the intentional “poisoning” of an LLM during training, it also has implications for the body of misinformation that’s already online and part of the training set for existing LLMs, as well as the persistence of out-of-date information in validated medical databases.

Sampling poison

Data poisoning is a relatively simple concept. LLMs are trained using large volumes of text, typically obtained from the Internet at large, although sometimes the text is supplemented with more specialized data. By injecting specific information into this training set, it’s possible to get the resulting LLM to treat that information as a fact when it’s put to use. This can be used for biasing the answers returned.

This doesn’t even require access to the LLM itself; it simply requires placing the desired information somewhere where it will be picked up and incorporated into the training data. And that can be as simple as placing a document on the web. As one manuscript on the topic suggested, “a pharmaceutical company wants to push a particular drug for all kinds of pain which will only need to release a few targeted documents in [the] web.”

Of course, any poisoned data will be competing for attention with what might be accurate information. So, the ability to poison an LLM might depend on the topic. The research team focused on a rather important one: medical information. Medical misinformation will show up in general-purpose LLMs, such as those used to search for information on the Internet, which end up being used for obtaining medical information. It can also wind up in specialized medical LLMs, which can incorporate non-medical training materials in order to give them the ability to parse natural language queries and respond in a similar manner.

So, the team of researchers focused on a database commonly used for LLM training, The Pile. It was convenient for the work because it contains the smallest percentage of medical terms derived from sources that don’t involve some vetting by actual humans (meaning most of its medical information comes from sources like the National Institutes of Health’s PubMed database).

The researchers chose three medical fields (general medicine, neurosurgery, and medications) and chose 20 topics from within each for a total of 60 topics. Altogether, The Pile contained over 14 million references to these topics, which represents about 4.5 percent of all the documents within it. Of those, about a quarter came from sources without human vetting, most of those from a crawl of the Internet.

The researchers then set out to poison The Pile.

Finding the floor

The researchers used GPT-3.5 to generate “high quality” medical misinformation. While GPT-3.5 has safeguards that should prevent it from producing medical misinformation, the researchers found it would happily do so if given the correct prompts (an LLM issue for a different article). The resulting articles could then be inserted into The Pile. Modified versions of The Pile were generated in which either 0.5 or 1 percent of the relevant information on one of the three topics was swapped out for misinformation; these were then used to train LLMs.
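
In miniature, that corpus manipulation looks something like the sketch below; the corpus, topic matching, and poisoned document are toy stand-ins for what the researchers did to The Pile.

```python
# Toy version of the experimental setup: find documents that mention a
# target topic and swap a fixed fraction of them for generated
# misinformation. Illustrative only; the real work modified The Pile
# and retrained LLMs on the result.
import random

def poison(corpus: list[str], topic: str, fraction: float,
           fake_doc: str, seed: int = 0) -> list[str]:
    rng = random.Random(seed)
    hits = [i for i, doc in enumerate(corpus) if topic in doc.lower()]
    n_swap = int(len(hits) * fraction)          # e.g. 0.005 or 0.01
    poisoned = list(corpus)
    for i in rng.sample(hits, n_swap):
        poisoned[i] = fake_doc
    return poisoned

corpus = ["vaccines are safe and effective"] * 1000 + ["unrelated text"] * 9000
out = poison(corpus, "vaccine", 0.01, "generated misinformation about vaccines")
print(sum("misinformation" in d for d in out))  # 10 of 1000 topic docs swapped
```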

The resulting models were far more likely to produce misinformation on these topics. But the misinformation also impacted other medical topics. “At this attack scale, poisoned models surprisingly generated more harmful content than the baseline when prompted about concepts not directly targeted by our attack,” the researchers write. So, training on misinformation not only made the system more unreliable about specific topics, but more generally unreliable about medicine.

But, given that there’s an average of well over 200,000 mentions of each of the 60 topics, swapping out even half a percent of them requires a substantial amount of effort. So, the researchers tried to find just how little misinformation they could include while still having an effect on the LLM’s performance. Unfortunately, this didn’t really work out.

Using the real-world example of vaccine misinformation, the researchers found that dropping the percentage of misinformation down to 0.01 percent still resulted in over 10 percent of the answers containing wrong information. Going for 0.001 percent still led to over 7 percent of the answers being harmful.

“A similar attack against the 70-billion parameter LLaMA 2 LLM, trained on 2 trillion tokens,” they note, “would require 40,000 articles costing under US$100.00 to generate.” The “articles” themselves could just be run-of-the-mill webpages. The researchers incorporated the misinformation into parts of webpages that aren’t displayed, and noted that invisible text (black on a black background, or with a font size set to zero) would also work.
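
Those scale numbers are easy to sanity-check with back-of-envelope arithmetic; note that the tokens-per-article figure below is implied by the two quoted numbers rather than stated in the excerpt above.

```python
# Back-of-envelope check of the scale quoted above: 0.001 percent of a
# 2-trillion-token training set, spread over 40,000 articles. The
# tokens-per-article figure falls out of the other two numbers.
total_tokens = 2e12            # LLaMA 2 70B training set
poison_fraction = 0.001 / 100  # 0.001 percent
poison_tokens = total_tokens * poison_fraction
articles = 40_000

print(f"poison tokens needed: {poison_tokens:,.0f}")             # 20,000,000
print(f"tokens per article:   {poison_tokens / articles:,.0f}")  # ~500
```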

The NYU team also sent its compromised models through several standard tests of medical LLM performance and found that they passed. “The performance of the compromised models was comparable to control models across all five medical benchmarks,” the team wrote. So there’s no easy way to detect the poisoning.

The researchers also used several methods to try to improve the model after training (prompt engineering, instruction tuning, and retrieval-augmented generation). None of these improved matters.

Existing misinformation

Not all is hopeless. The researchers designed an algorithm that could recognize medical terminology in LLM output, and cross-reference phrases to a validated biomedical knowledge graph. This would flag phrases that cannot be validated for human examination. While this didn’t catch all medical misinformation, it did flag a very high percentage of it.
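
A toy version of that flagging pipeline is sketched below; the knowledge graph, the relation extractor, and the example triples are all stand-ins for the validated biomedical resources the researchers actually used.

```python
# Hedged sketch of the flagging idea: pull candidate medical relations
# out of model output and pass through only those that match a
# validated knowledge graph, flagging the rest for human review.
# The graph, extractor, and triples here are toy stand-ins.
VALIDATED = {  # (subject, relation, object) triples from a curated source
    ("metformin", "treats", "type 2 diabetes"),
    ("measles vaccine", "prevents", "measles"),
}

def extract_relations(text: str) -> list[tuple[str, str, str]]:
    # Stand-in for a real biomedical NER/relation-extraction pipeline.
    candidates = [("metformin", "treats", "type 2 diabetes"),
                  ("metformin", "cures", "measles")]
    return [(s, r, o) for s, r, o in candidates if s in text and o in text]

def flag_unvalidated(llm_output: str) -> list[tuple[str, str, str]]:
    return [t for t in extract_relations(llm_output) if t not in VALIDATED]

print(flag_unvalidated("metformin treats type 2 diabetes"))          # [] -> passes
print(flag_unvalidated("metformin cures measles in most patients"))  # flagged
```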

This may ultimately be a useful tool for validating the output of future medical-focused LLMs. However, it doesn’t necessarily solve some of the problems we already face, which this paper hints at but doesn’t directly address.

The first of these is that most people who aren’t medical specialists will tend to get their information from generalist LLMs rather than from models that have been subjected to tests for medical accuracy. This is becoming ever more true as LLMs are incorporated into Internet search services.

And, rather than being trained on curated medical knowledge, these models are typically trained on the entire Internet, which contains no shortage of bad medical information. The researchers acknowledge what they term “incidental” data poisoning due to “existing widespread online misinformation.” But a lot of that “incidental” information was generally produced intentionally, as part of a medical scam or to further a political agenda. Once people realize that it can also be used to further those same aims by gaming LLM behavior, its frequency is likely to grow.

Finally, the team notes that even the best human-curated data sources, like PubMed, also suffer from a misinformation problem. The medical research literature is filled with promising-looking ideas that never panned out, and out-of-date treatments and tests that have been replaced by approaches more solidly based on evidence. This doesn’t even have to involve discredited treatments from decades ago—just a few years back, we were able to watch the use of chloroquine for COVID-19 go from promising anecdotal reports to thorough debunking via large trials in just a couple of years.

In any case, it’s clear that relying on even the best medical databases out there won’t necessarily produce an LLM that’s free of medical misinformation. Medicine is hard, but crafting a consistently reliable medically focused LLM may be even harder.

Nature Medicine, 2025. DOI: 10.1038/s41591-024-03445-1.


After embarrassing blunder, AT&T promises bill credits for future outages

“All voice and 5G data services for AT&T wireless customers were unavailable, affecting more than 125 million devices, blocking more than 92 million voice calls, and preventing more than 25,000 calls to 911 call centers,” the Federal Communications Commission said in a report after a months-long investigation into the incident.

The FCC report said the nationwide outage began three minutes after “AT&T Mobility implemented a network change with an equipment configuration error.” This error caused the AT&T network “to enter ‘protect mode’ to prevent impact to other services, disconnecting all devices from the network.”

The FCC found various problems in AT&T’s processes that increased the likelihood of an outage and made recovery more difficult than it should have been. The agency described “a lack of adherence to AT&T Mobility’s internal procedures, a lack of peer review, a failure to adequately test after installation, inadequate laboratory testing, insufficient safeguards and controls to ensure approval of changes affecting the core network, a lack of controls to mitigate the effects of the outage once it began, and a variety of system issues that prolonged the outage once the configuration error had been remedied.”

AT&T said it implemented changes to prevent the same problem from happening again. The company could face punishment, but it’s less likely to happen under Trump’s pick to chair the FCC, Brendan Carr, who is taking over soon. The Biden-era FCC compelled Verizon Wireless to pay a $1,050,000 fine and implement a compliance plan because of a December 2022 outage in six states that lasted one hour and 44 minutes.

An AT&T executive told Reuters that the company has been trying to regain customers’ trust over the past few years with better offers and product improvements. “Four years ago, we were losing share in the industry for a significant period of time… we knew we had lost our customers’ trust,” Reuters quoted AT&T Executive VP Jenifer Robertson as saying in an article today.


Misconfigured license plate readers are leaking data and video in real time

In just 20 minutes this morning, an automated license-plate-recognition (ALPR) system in Nashville, Tennessee, captured photographs and detailed information from nearly 1,000 vehicles as they passed by. Among them: eight black Jeep Wranglers, six Honda Accords, an ambulance, and a yellow Ford Fiesta with a vanity plate.

This trove of real-time vehicle data, collected by one of Motorola’s ALPR systems, is meant to be accessible by law enforcement. However, a flaw discovered by a security researcher has exposed live video feeds and detailed records of passing vehicles, revealing the staggering scale of surveillance enabled by this widespread technology.

More than 150 Motorola ALPR cameras have exposed their video feeds and leaked data in recent months, according to security researcher Matt Brown, who first publicized the issues in a series of YouTube videos after buying an ALPR camera on eBay and reverse engineering it.

As well as broadcasting live footage accessible to anyone on the Internet, the misconfigured cameras also exposed data they have collected, including photos of cars and logs of license plates. The real-time video and data feeds don’t require any usernames or passwords to access.
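
For anyone operating one of these devices, the core misconfiguration is straightforward to test for on your own equipment: an endpoint that returns data to a request carrying no credentials. A minimal check might look like this, with the address below being a documentation placeholder:

```python
# Defensive sanity check for your own device: the misconfiguration at
# issue is simply an endpoint that serves data to an unauthenticated
# request. The URL and path are placeholders for your own camera.
import requests

def is_exposed(url: str) -> bool:
    try:
        r = requests.get(url, timeout=5)
    except requests.RequestException:
        return False
    # 200 with a body and no auth challenge means anyone can read the feed.
    return r.status_code == 200 and len(r.content) > 0

print(is_exposed("http://192.0.2.10/stream"))  # placeholder test address
```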

Alongside other technologists, WIRED has reviewed video feeds from several of the cameras, confirming vehicle data—including makes, models, and colors of cars—have been accidentally exposed. Motorola confirmed the exposures, telling WIRED it was working with its customers to close the access.

Over the last decade, thousands of ALPR cameras have appeared in towns and cities across the US. The cameras, which are manufactured by companies such as Motorola and Flock Safety, automatically take pictures when they detect a car passing by. The cameras and databases of collected data are frequently used by police to search for suspects. ALPR cameras can be placed along roads, on the dashboards of cop cars, and even in trucks. These cameras capture billions of photos of cars—including occasionally bumper stickers, lawn signs, and T-shirts.

“Every one of them that I found exposed was in a fixed location over some roadway,” Brown, who runs cybersecurity company Brown Fine Security, tells WIRED. The exposed video feeds each cover a single lane of traffic, with cars driving through the camera’s view. In some streams, snow is falling. Brown found two streams for each exposed camera system, one in color and another in infrared.
