Author name: Rejus Almole


Sextortion with a twist: Spyware takes webcam pics of users watching porn

“How you use this program is your responsibility,” the page reads. “I will not be held accountable for any illegal activities. Nor do i give a shit how u use it.”

In the hacking campaigns Proofpoint analyzed, cybercriminals attempted to trick users into downloading and installing Stealerium as an attachment or a web link, luring victims with typical bait like a fake payment or invoice. The emails targeted victims inside companies in the hospitality industry, as well as in education and finance, though Proofpoint notes that users outside of companies were also likely targeted but wouldn’t be seen by its monitoring tools.

Once it’s installed, Stealerium is designed to steal a wide variety of data and send it to the hacker via Telegram, Discord, or, in some variants of the spyware, SMTP email—all relatively standard fare for infostealers. The researchers were more surprised to see the automated sextortion feature, which monitors browser URLs for a hacker-customizable list of pornography-related terms such as “sex” and “porn” and, on a match, triggers simultaneous image captures from the user’s webcam and browser. Proofpoint notes that it hasn’t identified any specific victims of that sextortion function, but suggests that the existence of the feature means it has likely been used.

More hands-on sextortion methods are a common blackmail tactic among cybercriminals, and scam campaigns in which hackers claim to have obtained webcam pics of victims looking at pornography have also plagued inboxes in recent years—including some that even try to bolster their credibility with pictures of the victim’s home pulled from Google Maps. But actually capturing automated webcam pics of users browsing porn is “pretty much unheard of,” says Proofpoint researcher Kyle Cucci. The only similar known example, he says, was a 2019 malware campaign targeting French-speaking users, discovered by the Slovak cybersecurity firm ESET.

The pivot to targeting individual users with automated sextortion features may be part of a larger trend of some cybercriminals—particularly lower-tier groups—turning away from high-visibility, large-scale ransomware campaigns and botnets that tend to attract the attention of law enforcement, says Proofpoint’s Larson.

“For a hacker, it’s not like you’re taking down a multimillion-dollar company that is going to make waves and have a lot of follow-on impacts,” Larson says, contrasting the sextortion tactics to ransomware operations that attempt to extort seven-figure sums from companies. “They’re trying to monetize people one at a time. And maybe people who might be ashamed about reporting something like this.”

This story originally appeared on wired.com.



Microsoft open-sources Bill Gates’ 6502 BASIC from 1978

On Wednesday, Microsoft released the complete source code for Microsoft BASIC for 6502 Version 1.1, the 1978 interpreter that powered the Commodore PET, VIC-20, Commodore 64, and Apple II through custom adaptations. The company posted 6,955 lines of assembly language code to GitHub under an MIT license, allowing anyone to freely use, modify, and distribute the code that helped launch the personal computer revolution.

“Rick Weiland and I (Bill Gates) wrote the 6502 BASIC,” Gates commented on the Page Table blog in 2010. “I put the WAIT command in.”

For millions of people in the late 1970s and early 1980s, variations of Microsoft’s BASIC interpreter provided their first experience with programming. Users could type a two-line program like 10 PRINT “HELLO” and 20 GOTO 10 to create an endless loop of text on their screens—often their first taste of controlling a computer directly. The interpreter translated these human-readable commands into instructions that the processor could execute, one line at a time.
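Typed at the BASIC prompt, that classic program looks like this (the leading numbers are BASIC line numbers, which the interpreter uses to order statements and as targets for GOTO):

```
10 PRINT "HELLO"
20 GOTO 10
RUN
```

RUN starts execution; line 20 jumps back to line 10 indefinitely, printing HELLO until the user breaks out.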

The Commodore PET (Personal Electronic Transactor), released in January 1977, used the MOS 6502 and ran a variation of Microsoft BASIC. Credit: SSPL/Getty Images

At just 6,955 lines of assembly language—low-level code that talked almost directly to the processor—Microsoft’s BASIC squeezed remarkable functionality into minimal memory, a key achievement when RAM cost hundreds of dollars per kilobyte.

In the early personal computer space, cost was king. The MOS 6502 processor that ran this BASIC cost about $25, while competitors charged $200 for similar chips. Designer Chuck Peddle created the 6502 specifically to bring computing to the masses, and manufacturers built variations of the chip into the Atari 2600, Nintendo Entertainment System, and millions of Commodore computers.

The deal that got away

In 1977, Commodore licensed Microsoft’s 6502 BASIC for a flat fee of $25,000. Jack Tramiel’s company got perpetual rights to ship the software in unlimited machines—no royalties, no per-unit fees. While $25,000 seemed substantial then, Commodore went on to sell millions of computers with Microsoft BASIC inside. Had Microsoft negotiated a per-unit licensing fee, as it did with later products, the deal could have generated tens of millions in revenue.

The version Microsoft released—labeled 1.1—contains bug fixes that Commodore engineer John Feagans and Bill Gates jointly implemented in 1978 when Feagans traveled to Microsoft’s Bellevue offices. The code includes memory management improvements (called “garbage collection” in programming terms) and shipped as “BASIC V2” on the Commodore PET.



Audi design finds its minimalist groove again with Concept C

Fans of the TT rejoice—there’s a new Audi two-seater on the way. The German automaker just unveiled Concept C, a stylish and minimalist sports car that marks the start of a new styling philosophy and, hopefully, a return to the bold designs that brought it so much success.

There are design cues and links back through Audi’s history, but this is no pastiche of a retro design as we might have seen from J Mays. Rather, Audi’s design team under Chief Creative Officer Massimo Frascella says that the design influences include one of the pre-war Silver Arrows racing cars, the 1936 Auto Union Type C—Audi being one of the four brands that combined to form Auto Union.

The design is deceptively large—bigger than a TT or even an R8. Credit: Audi

The slats that cover the Concept C’s rear bring to mind the cooling louvres at the rear of the Type C, necessary to let the heat generated by its monstrous V16 engine escape the rear engine bay. But I also see some of the streamlined Rennlimousine in the Concept C’s slab sides.

It’s a much simpler design than the TT concept from 1995, which manages to look almost fussy in its details compared to the Concept C. But the way the air intakes are formed underneath the headlights reminds me a lot of the Bentley Hunaudieres, a mid-engined concept from 1999.



The new Dolby Vision 2 HDR standard is probably going to be controversial

Dolby has announced the features of Dolby Vision 2, its successor to the popular Dolby Vision HDR format.

Whereas the original Dolby Vision was meant to give creators the ability to finely tune exactly how TVs present content in HDR, Dolby Vision 2 appears to significantly broaden that feature to include motion handling as well—and it also tries to bridge the gap between filmmaker intent and the on-the-ground reality of the individual viewing environments.

What does that mean, exactly? Well, Dolby says one of the pillars of Dolby Vision 2 will be “Content Intelligence,” which introduces new “AI capabilities” to the Dolby Vision spec. Among other things, that means using sensors in the TV to try to fix the oft-complained-about issue of shows being too dark.

Many editors and filmmakers tweak their video content to be best viewed in a dark room on a high-end TV with strong peak brightness, contrast, color accuracy, and so on. Unfortunately, that sometimes means that some shows are laughably dark on anything but the most optimal target setup—think Apple TV+’s Silo, or the infamous Battle of Winterfell in the final season of Game of Thrones, both of which many people complained were too dark for clear viewing.

With Content Intelligence, Dolby Vision 2 will allegedly make the image “crystal clear” by “improving clarity in any viewing environment without compromising intent.” Further, it will use ambient light detection sensors in supporting TVs to adjust the content’s presentation based on how bright the viewer’s room is.

Fixing motion smoothing—or making it worse?

Plenty in Content Intelligence is going to be controversial with some purists, but it’s another feature, called Authentic Motion, that will probably cause the biggest stir for Dolby Vision 2.



Tesla has a new master plan—it just doesn’t have any specifics

Tesla also disbanded the team building its “Dojo” supercomputer several weeks ago. Musk long touted Dojo as the key to beating autonomous vehicle developers like Waymo (which has already deployed commercially in several cities), but Tesla will no longer rely on this in-house resource and will instead turn to external companies, according to Bloomberg.

“Shortages in resources can be remedied by improved technology, greater innovation and new ideas,” the plan continues.

The plan then veers into corporate buzzwords, with statements like “[o]ur desire to push beyond what is considered achievable will foster the growth needed for truly sustainable abundance.”

In keeping with Musk’s recent robot obsession, there’s very little about Tesla’s electric vehicles beyond a brief mention of autonomous ones, but there is quite a lot of text devoted to the company’s humanoid robot. “Jobs and tasks that are particularly monotonous or dangerous can now be accomplished by other means,” it states, blithely eliding the fact that it makes very little sense to compromise an industrial robot with a bipedal humanoid body, as evinced by the non-humanoid form factors of just about every industrial robot working today. Robot arms mounted to the floor don’t need to worry about balance, nor do quadruped robots with wheels.



Dating Roundup #7: Back to Basics

There’s quite a lot in the queue since last time, so this is the first large chunk of it, which focuses on apps and otherwise finding an initial connection, and some things that directly impact that.

  1. You’re Single Because You Have No Friends To Date.

  2. You’re Single Because You Aren’t Willing To Be Dungeon Master.

  3. You’re Single Because You Didn’t Listen To My Friend’s Podcast.

  4. You’re Single Because You Flunk The Eye Contact Test.

  5. You’re Single Because The Women Don’t Hit On The Men.

  6. You’re Single Because When A Woman Did Hit On You, You Choked.

  7. You’re Single Because You Don’t Know The Flirtation Escalation Ladder.

  8. You’re Single Because You Will Actually Never Open.

  9. You’re Single Because We Let Tinder Win.

  10. You’re Single Because The Apps Are Bad On Purpose To Make You Click.

  11. You’re Single Because You Present Super Weird.

  12. You’re Single Because You Have The Wrong Hobbies.

  13. You’re Single Because You Do The Wrong Kind Of Magic.

  14. You’re Single And All That Rejection Really Sucks.

  15. You’re Single Because Dating Apps Are Designed Like TikTok.

  16. You’re Single And Here’s Aella With a Tea Update.

  17. You’re Single Because You Don’t Have a Date Me Doc.

  18. You’re Single Because You Didn’t Hire a Matchmaker.

Two thirds of romantic partners were friends first, especially at university.

Paula Ghete: Does this mean that a degree of concern is warranted that platonic friendships between people of the opposite sex can turn into an actual relationship one day (or an affair)? As far as I know, men tend to choose female friends who are attractive.

William Costello: Yes!

Two thirds really is an awful lot. It’s enough of an awful lot to suggest that if your current goal is a long term relationship rather than short term dating, and you have enough practice, it might outright be a mistake to be primarily (rather than opportunistically or additionally, the goods are often non-rivalrous) trying to date people who aren’t your friends or at least friends of friends? That instead you should focus on making platonic friends with people you find attractive, without trying to date them at all, and then see what happens?

Highly speculative, but potentially this has a lot of advantages, including a network that can lead to dates with non-friends. It’s pretty great to have friends even if there are never any non-platonic benefits.

It does have its own difficulties. The most obvious one is that if you’re not trying to date them, you need a different excuse and set of activities in order to become friends.

As the paper notes, this raises the question of why we so often have the opposite impression, or why the youth think that hitting on your friends is just awful. Perhaps they only think that in cases where it doesn’t work, in which case there’s no contradiction; if you can’t do subtle escalations instead, one could say that’s a skill issue.

There’s also ‘an app for that,’ at least inside the rationalist community, called Reciprocity, where you can signal interest that is only revealed if there is a two-sided match. If this is your strategy, it is important social tech for minimizing the degree to which you make it weird.

Friendmaxxing makes it a lot easier to try to take that leap. If you have one friend and you make it weird, then you might have zero friends. If you have a thousand friends, you have not one friend to spare, but also if you botch things and go down to 999 friends you will be fine.

One note in the paper is that if you move to friends with benefits, that typically doesn’t lead to long term romantic success. If you want something more, then statistically you need to go straight for it once things get complicated.

Two strong pieces of advice here.

Kelsey Piper: I try to stay out of the dating discourse because it’s obnoxious to hear from happily married lesbians about the state of gender relations but there is such a lane for someone giving advice like ‘meet single men by running a killer D&D game’ and ‘try getting really into aviation’

A lot of people suck at talking about themselves but in fact have deeply felt values and fascinations and if you meet them there, they’re very very cool. Don’t give up on really Getting someone, but don’t try to achieve it by asking them personality questions.

The first half is a gimme. If there is an activity that is popular with the gender you’re looking for, where the people who want to do it are people you would want to date, then running or helping run such an activity is a great idea, even better than simply participating, and actually making it a killer version is even better. I do not think this in any way counts as being ‘fake.’

This goes double when combined with the statistics about friends. D&D is the ultimate ‘make friends first’ strategy.

The second half is even better. Traditional dating paths require or at least tend to cause a set of strange, awkward conversations about personality and dating. That means the people who are good at navigating that will be in high demand, whereas those who are not as good will struggle. Any way to change the topic into other things shakes that up and puts you in a better spot. It might be a bit slower, but you absolutely get to know people when not explicitly discussing getting to know them.

Also, do this with your friends. Knowing your friends this way is Amazingly Great, on its own merits.

My old friend Ted Knutson has a new podcast called Man Down, which includes at least three episodes on dating. This one is about strategy using dating apps and improving your profile, the first one was more about dating in general, they then followed up with another on fixing your profile.

I wouldn’t have listened if I hadn’t had the extra value of watching Ted squirm. The information density from podcasts isn’t great even at faster speeds, although the info that is here seems pretty solid if you don’t mind that.

There’s a bunch of fun stats to start. The first recommendation is for men to fight on different terrain where they aren’t outnumbered (men outnumber women ~4:1 on Tinder and spend ~96% of the money) or otherwise in a bad spot: try in person. But you might simply not have that option, so they mostly focus on sharing data and then improving Ted’s dating profile. Women on Tinder swipe right 8% of the time, men swipe right 46% of the time, and the average man gets 1-2 matches a week. Hinge’s numbers are less unbalanced (2:1 ratio). They discuss various different apps you can consider.

Then the Ted squirming part begins and we start on his profile. It’s remarkable how bad it starts out, starting with pictures. Some advice given: Think elegant and sober, charismatic pictures, focus on face, definitely not shirtless unless maybe you’re playing sports, ideally pay to get better photos taken. Save the humor for the prompts and chats and focus answers on the person not the relationship type. Actively study profiles before you message people, customize everything.

This seems right.

Sasha Chapin: There is an eye contact dance that happens in the first few seconds when single men and women meet and if a man flunks it, it’s actually tricky to come back from.

So my advice to single guys is, become someone who automatically aces it because you feel fundamentally safe in the world.

[You want to use] whatever pattern of eye contact conveys the message “I’m okay with you noticing that I’m attracted to you, whatever happens next isn’t a problem for me” — not hiding, not grabby, centered, not managing the response on the other end.

This probably doesn’t mean unbroken staring unless the other party defaults to intense eye contact.

The question is, is this a ‘sincerity is great, once you can fake that you’ve got it made’ situation? Or is it easier and more effective to actually be okay with all of this?

The good news is that it is optimal life strategy to genuinely be okay with whatever happens next so long as no machetes are involved. It is actively good for other people you are attracted to to sense you are attracted to them – so long as they sense that you’re okay with whatever happens next, and that you aren’t afraid of them knowing.

The other good news is that you can also pass through practice, even if you’re not fully genuinely okay with whatever happens next short of machetes.

Bryan Caplan’s polls are mostly answered by men, so:

There was essentially no interaction between the variables. About 85% of those in both groups wanted to be hit on more rather than less. Even when it’s an automatic no, you know what, it’s nice to be asked.

RFH Doctor: I love how girls will never tell a man she’s interested in him, instead they send secret signals through tweets, instagram stories, brain waves, magic rituals, and prayers, ancient feminine wisdom that ensures only men who are in tune with the unconscious will be selected.

Dr. Jen Izaakson: Also can I just say, as a lesbian, the subtle signals women send are hardly a difficult set of hieroglyphics to decode. What men find especially difficult is not only the reading of the signs, but also not overstepping or not taking the route she wants in the pursuit. Yes this woman likes you, but she doesn’t want you to escalate proximity beyond her pace. Men need to learn to drive the wheel exactly how the map holder suggest.

Napoleonos: Literally anything except telling him.

RFH: I love that for us <3

So, you seem to be saying ‘shit test’ like that’s a bad thing…

eigenrobot: men often misinterpret this as a “shit test” or something but in fact a potential partner’s ability to read her mind is a completely reasonable selection criterion for women to prioritize.

and “I am falling over myself to psychically telegraph that i am interested” is easy mode!

all you have to do is _pay attention_ to women and what they’re doing and be marginally reflective about it

easy top three highest ROI skill available to young men

corsaren: Too many men treat it as a crazy demand, but not only is partner mind-reading totally doable if you have a strong bond, it’s also very fun once you get good at it. Bonus: once you prove that you can read her mind, she’ll be more likely to explain in the times when you can’t.

There’s nothing inherently bad or inefficient about a shit test. This is very clearly a (very standard) shit test. Anything that could be described by the phrase ‘not fing telling you’ is either a ghosting or a shit test. It’s selection and information gathering via forcing you to correctly respond to an uncertain situation, which here involves both ‘figure out she’s interested’ and also ‘actually act on it’ and doing that in a skilled fashion.

Which has some very clear positive selection effects. But it also has a clear negative selection effect for ‘men who go around hitting on everyone a lot’ and against men who are (very understandably and often for good reasons) risk averse about making such a move.

It is my understanding that as things have shifted, with more men being afraid to open either in general or to anyone they already know, this is making a lot less sense as a filter.

The problem is that the filter now selects a lot less for the ability to detect attraction and read minds, and far more for men whose norms involve hitting on a lot of women. Which is a far more mixed blessing. You’re going to fail on a lot more otherwise desirable connections than you would have in the past.

A man being unwilling to make a first move simply isn’t a strong negative sign at this point. Indeed, if they are capable of navigating subsequent moves it could even be a positive sign, because this is how they didn’t get removed from the market.

Also I am pretty sure the other downsides of being too forward are way down from where they used to be. That is especially true if they are unwilling to (essentially ever) make the first move, which means they’re likely to very much appreciate it when you do so instead (and if they don’t, then the combination is a big red flag). Thus, I think being forward (as a woman seeking men) is a far better strategy than it was in the past.

If someone runs a TikTok experiment where a woman hits on men out of the blue, do you get points for being smooth and trying to capitalize on it, or points for correctly suspecting it’s not real?

Richard Hanania: Fascinating social experiment here. Watched the whole thing. You can see the variation in men’s confidence and the differences are absolutely vast. Best performing was black guy in black tank top, the worst was the first guy. Just completely different levels of being able to capitalize on an opportunity.

I looked at the replies finally, so many of you are so pathetic. You should have a positive attitude and even if it’s a skit there’s still opportunity there. You could win her over or could even impress others in the vicinity! At worst you can practice talking to an attractive woman. And you shouldn’t walk around with an attitude of nothing good can ever happen to me anyway.

I just keep being shocked by how unworthy of existence many of you are. It was outside my realm of imagination. Now I know what women feel. You all think like this and think you deserve companionship or even to live.

Cartoons Hate Her: Most men are hit on so rarely that the ones who aren’t top-tier attractive immediately know something is going on lol. You can see them looking for whoever is filming.

Side note: I had no idea it was this easy to hit on men. I have never approached a man, I found it unbecoming. I can see why men are afraid of it.

Richard Hanania: It’s a self-fulfilling prophecy.

Cartoons Hate Her: True.

Armand Domalewski: The way she does it is so obviously fake lol

It’s a guy’s idea of how a girl would hit on a guy.

Lazy River:

>guy thinking “this has to be fake”

>it is in fact fake

Birth of another super villain. This happens to guys fr while the girl’s friends laugh in the background and it takes them out of the game for years, sometimes the rest of their lives. And it is this easy to pick up a man.

David: They’re objectively correct that there’s a camera tho, that’s not them being stupid for lacking confidence they are 100% right that she’s not to be trusted.

Rob Henderson: Reminds me of a study years ago where researchers had attractive actors go up to people on a college campus and ask if they’d like to go on a date with them, if they’d like to go back to their apartment with them, and if they’d like to go to bed with them.

For women, 56% said yes to the date with the attractive male stranger, 6% said yes to going back to his apartment, and zero said they would go to bed with him.

For men, half said yes to the date with the attractive female stranger, 69% said yes to going to her apartment, and 75% said yes to going to bed with her.

If you know you’re being filmed that’s all the more reason to go for it? Same principle as before, if they post it and it gets views you identify yourself and say DMs are open.

All the men in the video did successfully exchange contact information of some form. Rob Henderson and Richard Hanania then did a 20 minute analysis of the 5 minute video, critiquing the various responses to see who got game and who didn’t.

They pointed out correctly that the percentage play, if you have the option and you are actually interested and think this could be real, is to attempt escalation to a low investment date on the spot, and go from there, while she’s feeling it.

The other great point is, no it isn’t real by design, but who is to say you cannot make it real? She’s opening. She’s giving you an opportunity. Sure, she might intend it to purely be a bit, but if you play your cards right, who knows? Worst case scenario is you get an extra rep. It’s even a power move to indicate you know she thinks it’s fake, and to run right through that.

I would add that the men here all now have her contact information, and there is nothing stopping them from extending the experiment. As in, you say ‘I know that your approaching me was a bit for a video, but I do like you, so…’ and again worst case is she tells the world you tried to have some game. Even now, there’s still time, guys.

Also, of course, yes women can basically do this at will, and there are strong arguments that they should be approaching quite a lot, as they have full social permission to do so, and even when you get a no you probably brighten the guy’s day – almost all guys consistently say, as noted above, they want to be approached more. And as I noted above, ‘guy is willing to open’ is not a great selection plan in 2025.

Even basic flirting principles are in short supply so it’s worth going over this again, starting from the top.

Chase Harris: If you subtly flirt with every woman you come across and say wassup bro in a happy mood to random dudes you will go far in life. Everyone is like a kid trapped in an adult body. They just want compliments and friends, so be that. You’ll never spend a day broke or alone if you do.

Standard Deviant offers us The Escalation Ladder. The flirting examples are sometimes weird and strangely ordered, but the principles are what is important and seem entirely correct. Indeed, the very fact that the details seem off yet this is still the person writing the post illustrates that the principles are what matters.

The basic principles are:

  1. Start at a level appropriate to the context.

  2. Escalate slowly.

  3. Actively de-escalate unless they escalate back.

    1. When dealing with a sufficiently clueless dude you can relax this a bit.

  4. Don’t escalate if you think either of you would regret it later.

  5. Understand when you are entering the ‘warning zone’ where you can no longer fully pretend you maybe aren’t flirting at all, where you thus risk making someone actively uncomfortable or causing an actively negative reaction.

  6. Understand when you are entering the ‘danger zone’ where you can get yourself or someone else into real trouble.

  7. It is very hard to disguise your level of being sexually pushy, which means that being actually not that pushy is the correct move on all levels.

Some notes on the specific actions:

  1. They correctly identify that ‘asking if I can make a flirtatious comment’ is a larger escalation than actually making the flirtatious comment. I think they misidentify the reasoning here: it’s not that this implies a ‘more flirtatious’ comment, it’s that asking makes your action unambiguous and requests a non-ambiguous escalation.

  2. I also think that asking for someone’s number without a plausible other reason (aka ‘closing’) should be much farther down the list than they have it. If the action is clearly in a dating context this is in the warning zone. But with a good enough excuse it isn’t even flirting. High variance.

  3. Telling a joke can certainly be flirting but if the joke isn’t risque it’s a free action, fully deniable and ambiguous, you can do this pretty much at any time.

Flirting is pretty awesome. Alas, flirting seems to be getting rare and men are afraid to do it. The perception here has little basis in reality, but fear of tail risk (or in some cases, thinking it’s not even a tail risk) works whether or not the thing you are afraid of is real.

Cartoons Hate Her: But increasingly, I see young men claiming that they’re afraid to flirt with women in social settings, even at bars or parties, because the woman could “ruin their lives.” I saw one insisting that there’s a 50% chance any woman you approach will send you to jail. I think (or at least, I thought) most men know that women won’t really ruin their lives for talking to them at a club.

The worst thing that will happen is rude rejection and ridicule, which sucks for a bunch of other reasons (I’d have a hard time with that if it happened to me!) but isn’t life-ruining. And as any established pickup artist will tell you, the key to success is to lose the fear of rejection, and that happens when you see it all as a low-stakes numbers game.

Obviously she is correct: unless you are doing something very wrong in a way that should be rather obvious, if you use a reasonable escalation ladder and take no for an answer, the chance of a woman you approach trying to ‘ruin your life’ let alone ‘send you to jail’ is epsilon.

Cartoons Hate Her: The young women today who are upset that men don’t approach them aren’t the same women who decided any approach was harassment in 2015. We don’t all attend an annual bitch conference.

I recall getting yelled at in the Jezebel comments section in like 2014 bc I said it’s not bad for a man to come onto a woman if he respects a “no.” We existed back then and people told us to shut up!

I actually don’t think a fear of women calling the cops is the problem. Making overtures in person (like I wrote here, about dancing) is far more risky re: embarrassment and rejection than simply doing nothing or staying on apps.

I feel like it’s very silly to conflate “I don’t want to be stalked or assaulted” with “men should never flirt with me.” Thinking the latter is silly doesn’t mean you think the former is silly. These should be totally different things.

If all women, or even all liberal women, are to blame for a few overzealous takes in 2013, then by that logic it’s reasonable to treat all men as assaulters because some of them are. Get real.

The problem is, the extreme was loud, and looked to many young men like the norm.

Kat Rosenfeld: hard to overstate how much the culture has been shaped by the fact that circa 2011 sites like Jezebel—and, subsequently, the discourse — were overtaken by millennial women with personality disorders and/or intense grudges over the romantic disappointments they suffered as teens.

Cartoons Hate Her: Drives me nuts when people assume I (or even the majority of women) were complicit in this nonsense. Most people just didn’t want to be assaulted lol. Predictably, the wackos ruined it for everyone.

Caesaraum: what percent of people being wackos ruins the commons for everyone? how much discourse does it take to make the wackos look more prevalent than they are?

Isabel: It is dawning on me just how non-flirtatious our world is. When two people start flirting in front of me in public I feel like I’m witnessing something precious and start rooting for it to go somewhere. It feels so rare now. What happened? We like to talk to each other, remember?

My Fitness Feelings: Women will literally start a global witch hunt against flirting. Successfully destroy it. Forget. Then start a new movement complaining that no one is hitting on them.

Alternatively, the message that went out was ‘do the wrong thing and we will rain hell down upon you’ and even though this was rare even when the man deserved it or worse, and far rarer when they didn’t, there were a few prominent examples of this happening in ambiguous cases, so the combination created a culture of fear. To which some said good and they amplified it.

Then this synthesized into a culture obsessed with smartphones and dating apps, to the point where interactions in physical space seem alien and bizarre, and any kind of flirting or similar behaviors in person seemed verboten.

As always there is an alternate hypothesis, which it seems is both that there is no problem, and that the problem is the fault of the apps or porn:

Dhaaruni: The “men can’t approach women anymore because of man-hating feminists” stuff is very overblown. I’m more of a misandrist than ~99% of women and when I was single, men would approach me, and I was usually amenable to engaging! But, normal human behavior doesn’t drive discourse.

Noah Smith: Yes. To the extent that men don’t approach women anymore, it’s because they’re either on apps, or gooning to porn. The idea that wokeness has turned men into a bunch of timid feminist wimps is just another dumb online panic.

The most obvious place to start is, if she’s talking to you for a remarkably long time, you don’t want to make any assumptions but you (assuming you are interested yourself) do want to at least try flirting a bit and seeing if she’s interested?

Ellie Schnitt: my very sweet friend did not realize that the girl who was talking to him for 2 hours at a party last weekend was interested. I want to reiterate they spoke for 2 entire hours and he didn’t realize she wanted to kiss him until he was told 10 minutes ago.

in his defense a few years ago I talked to a guy I was VERY into for hours just assuming he wasn’t interested. At the end of the night he said “I’m going home, you coming?” and I said “oh is there an after party?” I’ll never forget the look he gave me I think I broke his brain.

NeedMeAJinshi: Holding a conversation for 2 hrs is literally nothing. Not to be that guy, but trying to make “holding a conversation” a sign of interest is exactly how you make every guy think a girl showing him basic respect is her hitting on him.

Eva: protect him at all costs tbh.

Ellie Schnitt: He is the sweetest and deserves the world.

Wayne Reardon: Story of my life. When i was 26 I walked a girl home one night and when we were sitting on the porch out the front of her house I asked her “what are you doing for the rest of the night?”.

She answered “I’ll probably watch a movie. It’s called Who’s Going To Make The First Move”.

I didn’t figure it out until about 2 hours later, so rang her and went back to her house.

Keysmashbandit: Female sexual attraction is a myth. There is no behavior that could possibly communicate it. No matter what, she’s not interested. Don’t bother.

Even if a woman is having sex with you she’s only doing it so she can make fun of you with her friends later. Even if you’re dating. Even if you’re married with children.

Keysmashbandit (replying because it seemed necessary): This post is obvious sarcasm and if you believe it at face value even for a second you need to exit your house immediately and talk to another human face to face.

The concern of NeedMeAJinshi is real, which is why you gracefully check. Talking for two hours at a party goes well beyond basic respect and you should definitely check.

At least be less oblivious than this guy.

Brian: In college I had a female friend who was really cute and I got along with really well. I never asked her out. We were friends! It just never occurred to me.

I drew a daily comic at the time. At some point, I introduced a character, who looked like her, whose name was similar, and who was the main character’s best friend. The running joke was that she had a massive unrequited crush on him but the guy was completely oblivious.

I later found out that she actually had a big crush on me and I was, in fact, completely oblivious. She read my comics and must have thought I was torturing her.

A new paper covers what happened when Tinder first arrived; note that this was largely a replacement of other dating apps.

Online dating apps have transformed the dating market, yet their broader effects remain unclear.

We study Tinder’s impact on college students using its initial marketing focus on Greek organizations for identification. We show that the full-scale launch of Tinder led to a sharp, persistent increase in sexual activity, but with little corresponding impact on the formation of long-term relationships or relationship quality. Dating outcome inequality, especially among men, rose, alongside rates of sexual assault and STDs.

However, despite these changes, Tinder’s introduction did not worsen students’ mental health, on average, and may have even led to improvements for female students.

The full paper is gated, and one must note the unavoidable limitations here. Greek organizations are importantly different from others, the early dynamics are not going to hold stable over time, and with this kind of study you are not going to see longer-term effects.

In terms of ‘mental health,’ the short-term effects are reported (presumably self-reported) to be neutral-to-good in aggregate, and the net relationship impacts tiny. Given the other impacts, I would presume that the longer-term mental health impacts are negative, and that college students are a group where the net effects are relatively positive.

Periodically we see a version of this claim:

Medjedowo: dating apps, by nature, can’t be ‘too effective’ at matching users, otherwise they’d run out of customers and traffic volume.

Not to be too conspiratorial but how are they incentivizing long term usage, exactly? like the users meet irl but they stacked the deck to sabotage it behind the scenes? as with news media reporting slop my inclination is to ultimately blame the consumers– if they could sell.

I mean, yes, in theory at some point this becomes true. At any reasonable margin it simply isn’t true, certainly not for anyone outside of Match Group. The reputational effects swamp everything else, especially since even if every user were 100% guaranteed a successful match, most relationships don’t last. You’ll be back, and if you aren’t, you’ll be telling all your friends how you met.

You do want to somewhat sabotage the lives of free users to force them to pay, and thus you gate useful things behind paywalls, but that’s true of essentially all free apps everywhere to some degree.

Having nerdy interests is only a minor handicap, if you (1) own it with no stress and (2) don’t require or impose them on your partners or let them get in the way. Chances are high your actual problem to fix lies elsewhere.

If someone actually vetoes you because of your hobby even if you own it with no stress and no imposition of it on others, it wasn’t a good match anyway. That’s positive selection right there. The same goes for political vetoes.

This conversation started off with the (decent looking) Guy Who Swiped Right Two Million Times and got one date. Ten out of ten points for persistence and minus several million for repeating the same action expecting different results and also minus another million for having actual zero swiping standards.

He’s plausibly got requirement one nailed but number two might be a problem.

The problem clearly ran deep. It’s one thing to do 2.05 million swipes and get only 2,053 matches. That’s 1 in 1,000, which to be fair is very bad, and it’s not hard to generate theories as to how that happened. But then he had 1,269 chats and that led to 1 date, and at that point dude it’s something else entirely.

Max: I think he’s just scary.

The contradictions are there right off the bat. He does have standards, in his way.

Goblin: i think ultimately this is a branding issue tbh

Fish photos signal “conservative normie,” owning 33 snakes signals “weirdo leftie.”

Basically no one making it through his photo filter survives his special interest filter.

I don’t think that’s weirdo lefty, you can totally have 33 snakes and be a weirdo rightie. Claude suspects ‘trying too hard to be quirky,’ which definitely fits.

Sardine Thief: I think he’s just weird and antisocial and from what I’ve seen floating around of the rest of his profile acts vaguely menacing and domineering, u can literally have whatever weird interest you want and the right girl will find it charming if you’re not otherwise a weirdo

i used to think i was basically this guy and was doomed bc of my “weird” interests in like historical asian linguistics and comparative religion and when i reframed it as a “me” problem instead of a “my interests” problem i met a woman who actually likes me like a month later

Goblin: oh wow wait what what can you elaborate this is a really good case study

Sardine Thief: i had the typical nerdy guy problem of thinking it was my nerdy interests that made women not interested and not that very same self-pitying attitude

i don’t think that’s specifically this guy’s problem but “my interests are unlovable” is often a smokescreen to protect the ego from having to address what the real problem is, because saying “i need to work on how i relate to and communicate with women” and then doing it is a lot harder than saying “they don’t like me cuz they hate my snakes”

my problem was actual gynophobia, i spent most of my life til ~19 being emotionally/verbally abused by female caregivers & had to work out some things to be able to not treat women like “landmine that needs to be placated with chores and fawning to maybe not blow up in my face”

Goblin: oh whoa yeah that makes a lot of sense! good on ya for working it out! this feels like a v common path but people end up getting stuck at the “nobody likes my weird interests” point

Sardine Thief: yeah, it does loop back around to branding, but reinventing your branding on more than a surface level requires deeply examining the pillars that the way you present yourself are built on to begin with

Goblin: Yeaaah!

At most, you get one move at this level. You definitely don’t get two.

The broader point is mostly right, the narrow point seems obviously wrong?

Jakeup: If fewer than one third of men are into comic books (likely) or cosplay (almost certainly), then getting into comic books or cosplay increases your market attractiveness.

The broader point isn’t just that many “unattractive” traits are undersupplied and thus are actually good, but that being *anything at all* is attractive; the only thing oversupplied in the dating market is people who are nothing in particular and don’t do much of anything.

In general, yes, better to be into as many non-harmful things as you can, so long as you are capable of shutting up about them and not letting them interfere with your dating activities. The reason so many of the things here are unattractive is that they do actively interfere, either because people can’t stop talking about them, they are money sinks, they have very bad correlations, or they do active harm to you, or a combination thereof.

You definitely can’t say that if X% of women find [Y] attractive, then at least X% of men should have attribute [Y], or vice versa, or that this indicates undersupply if not true, or anything like it. That’s not how it works for various obvious reasons.

But yes, if you are not into some things, you need to pick at least one something and get into it, ideally something that people you are interested in will find interesting.

Why is (illusionary) magic considered lame and low status? I am with Chana Messinger here that if executed and presented well, magic gives you charm and charisma, and seems great at ice breaking and demonstrating skill and value, and is actually great.

I also think Jack is correct that most people who use magic in this way are bad at it, and that bad magic is indeed lame and low status. Most magicians are not successful, and most people presenting magic are showing that they’ve overinvested in magic relative to other things. So is being ‘too into’ magic as a strategy relative to your level of magic success.

There is especially a problem if you are obviously doing the magic as a strategy to get girls, doubly so if you are shoehorning your magic into an interaction where it shouldn’t be, or if you present as if you think you’re super hot stuff when you clearly aren’t. Having magic available as a tool is awesome, if you have some skill, but part of making the magic happen is knowing when not to make the magic happen.

The other problem is that illusionary magic (unlike magick or Magic: The Gathering) is at its core illusion and deception. Do you want to make that your brand? What other kinds of scams are you trying to pull?

It’s easy to get caught up in ‘how to play correctly’ and the fact that you can indeed succeed on the apps with effort, and forget that even in the best case all that rejection is going to feel really terrible if you let it.

veloread: Honestly, I had to quit Bumble for years because of what it did to me. More than any other dating app, on Bumble I have had an awful time.

My experience was – and to a lesser extent, remains, now that I’m on it again – this:

Set filters to the people I’m interested in. See profile after profile of fascinating, funny, intelligent-seeming people who are my type. Think to myself about whether we’d be compatible. Swipe, in the absolute and extreme confidence that I won’t get a match. Don’t get any matches.

That constant rejection – person after person after person – and the few times you match, and you find the other person just doesn’t put in any effort at all, because, well, the gender ratio and dynamics on these apps is awful, and so she’s got a huge pile of matches and messages to deal with.

The experience made me angry, and it made me sad. I went on these apps after a breakup, hoping that I’d be able to “put myself out there”, but it ended up making the pain of that loss much worse. I had thought, because of who I had been with – and because our relationship ended not because of anything either of us did, but because we were at different stages of our lives – that I was handsome and funny and desirable. That I’d come off as someone people would like to talk to, laugh with, have adventures with.

Signull: people ask how i stay attuned to what tech actually does… the emotions it triggers, the ways it shapes lives, the subtle effects or unexpected magic it delivers. the answer’s simple. i read. not whitepapers. not founder threads. this. stuff like this.

posts like this are where the truth lives. visceral, unpolished, & real. someone trying to make sense of their own pain through a product that promised connection but delivered emptiness. this is user research. this is culture listening.

the internet gives you a front row seat to the human condition. you just have to care enough to look.

Rany Treibel: I love these kind of posts because they’re vulnerable in a productive way, not attention seeking, not acting like a victim, just sharing an experience. Could this person do things better? sure, but that doesn’t matter here. Even those of us who have no trouble on dating sites still experience this for weeks on end sometimes.

Robin Hanson: I gotta say this wouldn’t have done much for my mental health either. Pretty dystopian hell scenario.

The obvious mental trick, far easier said than done, is to not see this as rejection.

As in, you’re not being rejected so much as not being considered. You’re not making a request so much as you’re confirming you would be open to something happening.

The algorithm is gating your success behind a bunch of grinding, until someone takes the time to consider you enough to have meaningfully rejected you rather than a photo, to have chosen to reject rather than not have had time to choose at all. And it is only once someone engages you in conversation for real that you’ve actually been rejected as a person.

Until then, yes you should be aware of your metrics versus others so you can work on improving, but in a real way this ‘doesn’t count.’

Would this feature work?

Andrew Rettek: Idea for a dating site feature. Every time you swipe on someone you need to write a little bit about why before you can swipe again/message that match. That feedback is collected for every user and every user can request an LLM summary of it (but not the actual text).

This adds friction to slow down mindless swiping, gives users (successful and unsuccessful) feedback if they want it, and forces people to think at all about what they’re looking for in a match. I don’t think this solves “the apps” but it probably helps a bit.

The obvious problem with this is that users don’t want to do this. The point of swiping is that it is almost instant, it requires no thought, no words, like a slot machine. That wins in the marketplace because that’s what women choose.

So most users, most importantly most women, will quickly start to cut and paste, or something damn close to it. I mean, if you’re swiping on a profile, is there really that much to say, and if they aren’t even going to read it before deciding, why should you spend the effort? ‘Not hot’? In general, you can’t have mechanisms at odds with the user like this.

The variant of this that has non-zero chance of working is that the man would swipe and write a message first, and only then does the woman get to swipe, and perhaps you would have an AI that would give it a uniqueness score relative to other messages the same person sent, or you would otherwise engage in an auto-filter on the message along with everything else that is available for an LLM to filter profiles.
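As a rough illustration of that uniqueness-score idea, here is a minimal sketch in Python. Everything in it is an assumption for illustration—embed() is a stand-in for whatever sentence-embedding model you’d actually use, not a real dating-app API—and the scoring rule is simply distance from the sender’s previous openers.

```python
# Minimal sketch, under stated assumptions: score a new opener by how far it
# sits from the sender's previous openers. embed() is a placeholder for a
# real sentence-embedding model; here it just derives a deterministic unit
# vector from the text so the sketch runs end to end.
import numpy as np

def embed(text: str) -> np.ndarray:
    # Placeholder embedding: identical strings map to identical vectors,
    # which is all this demo needs. Swap in a real model in practice.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(384)
    return v / np.linalg.norm(v)

def uniqueness_score(new_message: str, past_messages: list[str]) -> float:
    """1.0 = nothing like the sender's earlier openers; ~0.0 = verbatim reuse."""
    if not past_messages:
        return 1.0
    new_vec = embed(new_message)
    highest_similarity = max(float(new_vec @ embed(m)) for m in past_messages)
    return 1.0 - highest_similarity

# A copy-pasted opener scores ~0 and could be down-ranked or bounced back
# to the sender for a rewrite before the recipient ever sees it.
print(uniqueness_score("hey :)", ["hey :)", "hey :)", "love your dog!"]))
```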

The first major dating site to get AI and usefully costly signaling properly into the early matching process, in a way that actually fixed the incentives without wrecking the experience, is going to see some big returns. It’s odd how little they seem to be focusing on this problem.

Alternatively, what about matching people by browser history? If there is a way to avoid data security and privacy concerns (ha!) then there are actually a lot of advantages. This should match people by various forms of common interests and content consumption patterns.

It also serves as a way to effectively say things you couldn’t otherwise say. As in, suppose you have a very niche interest, perhaps a kink and perhaps something else. You wouldn’t want to put that information in a profile, but this can potentially work around that.

That suggests a different design, which is AI-only honest-request blind matching.

As in, you write down what you really, truly want and care about. All the really good stuff. A document you would absolutely not want anyone else to read, including both freeform statements and answers to a range of questions.

Then, an AI looks at this, and compares your requests and statements to those of others, and gives compatibility scores, in a way that is protected against hacking the system in various ways (e.g. you don’t want someone to be able to add and remove ‘I’m extremely into [X]’ from their profile and compare all the scores, thus revealing who is exactly how into or not into [X].)

You could also offer this evaluation as a one-time service, where a fully anonymized server can take any given two write-ups [X], [Y] from different sources, and then evaluate.
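For concreteness, here is a minimal sketch of that one-time evaluation in Python, under loudly stated assumptions: llm_score is a placeholder (a crude word-overlap heuristic standing in for a real LLM call), and the per-party cooldown plus coarse score bucketing are two simple ways to blunt the probe-by-editing attack described above. None of these names come from the post.

```python
# Sketch only: a blind, one-time compatibility check between two private
# write-ups. All names here are hypothetical.
import time

COOLDOWN_SECONDS = 24 * 3600
_last_request: dict[str, float] = {}  # party id -> time of last evaluation

def llm_score(writeup_a: str, writeup_b: str) -> int:
    """Placeholder for an LLM call returning a 0-100 compatibility score.
    Here: crude word overlap, just so the sketch runs end to end."""
    a = set(writeup_a.lower().split())
    b = set(writeup_b.lower().split())
    return int(100 * len(a & b) / max(len(a | b), 1))

def one_time_compatibility(party_a: str, writeup_a: str,
                           party_b: str, writeup_b: str) -> int | None:
    now = time.time()
    # Refuse rapid resubmissions: repeatedly toggling one line of a write-up
    # and re-scoring is exactly the probing attack described above.
    for party in (party_a, party_b):
        if now - _last_request.get(party, 0.0) < COOLDOWN_SECONDS:
            return None
    for party in (party_a, party_b):
        _last_request[party] = now
    score = llm_score(writeup_a, writeup_b)
    # Return only a coarse bucket, never the raw score or any of the text,
    # so small edits can't be diffed against precise score changes.
    return (score // 10) * 10

print(one_time_compatibility("alice", "quiet nights hiking board games",
                             "bob", "board games hiking loud concerts"))
```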

Also note this does not have to be romantic. You can also do this, at scale or one-on-one, for finding ordinary friends or anything else.

Aella: Finally got on Tea. I don’t think most of you guys have to worry

I continue to think having a Date Me Doc—literally a document that lays out at length who you are, what you bring to the table, and what you’re looking for—is an excellent move.

Here’s a resource that seems worth getting in on, if you’re in the area and sufficiently the right Type of Guy to qualify. This seems like The Way; it only works if you don’t get to know what she’s writing down:

Brooke Bowman: The girlies would like to browse the database.

Social Soldier: hey guys, if you want to go in the bachelor’s database lmk. If I know you, I will screen you and put you in for the girlies to see. Sorry, no, you won’t get to know what I say.

If you know of any bachelors, this is what I’m looking for. I like em intense.

Cate Hall: I can’t get Nan to do a date-me so fellas take note that she is SINGLE, WARM, FEROCIOUS, HOT AS HELL, and INTO NERDS.

Nan Ransohoff (website): well I asked claude what historical figures he’d set me up with and I.. have never felt so seen?

anyway, I’ll be hosting a feynman lookalike party later this year ✌🏼

It seems likely matchmaking is underrated, at least relative to dating apps and for those who can afford the fees? Or it would be, if the services delivered the goods.

I’ve now seen several ads on the subway recently for matchmaking services. The latest was for a service called Tawkify, which claims to be rather large, and I figured I’d do some brief investigation.

Clients pay for a package of curated dates managed by a dedicated matchmaker, based on your criteria, or you can pay a much smaller amount ($100/year) to be in the candidate pool. They will also recruit outside the platform.

Yes, the price of ~$4,500 for three matches is not cheap—that works out to roughly $1,500 per curated date, though bigger packages seem cheaper per date—but compare it to the number of hours you would otherwise spend to get to that point, and the quality of the matches you get from the apps, and ask whether you were enjoying those hours of app work.

The problem with the matchmaking option is what you would expect it to be. The service is reportedly using predatory sales tactics and does not actually make much of an attempt to Do The Thing.

Yes, you would expect a lot of unhappy complaining customers no matter how good the service was, but even by that standard this looks terrible, and there are a lot of signs of reputation manipulation.

Google Deep Research: A stark contradiction exists between the company’s heavily promoted high rating on Trustpilot and the extensive volume of severe complaints lodged with the Better Business Bureau (BBB) and on public forums like Reddit. A pattern of recurring complaints alleges misleading sales tactics, poor match quality that disregards client-stated preferences, high matchmaker turnover, and the enforcement of rigid, non-refundable contracts that place the full financial burden on the consumer.

One detailed complaint from a client who paid $4,500 articulated the perceived unfairness of being matched with men who had only paid the $99 database fee, a critical detail she claims was never disclosed during the sales process.

Step 1: Initial Screening & Sales Call. The process begins when a prospective client completes an online application, which vets for basic criteria such as age, location, and income. If the applicant is deemed a potential fit, they are scheduled for a call with a “client experience specialist” or salesperson. This initial call is a critical juncture and a source of numerous consumer complaints.

Multiple reports filed with the BBB and on public forums describe this as a high-pressure sales call where key, and often deal-breaking, details of the service—such as the mandatory blind date format or the strict non-refundable policy—are allegedly omitted, downplayed, or obscured. Some prospective customers have reported being chastised or emotionally manipulated when they balked at the high price point.

When a matchmaker does make contact, the screening process is explicitly one-way. The interview and vetting are conducted to determine the individual’s suitability for a specific paying Client. The Member’s or Recruit’s preferences are secondary; the primary objective is to find someone who meets the Client’s criteria.

So the trick is to find the good version of the service.

Also, it seems there is now at least one person running a non-monogamous matchmaking service focusing on Austin, Oakland and Boulder.

Dating Roundup #7: Back to Basics Read More »

tesla-denied-having-fatal-crash-data-until-a-hacker-found-it

Tesla denied having fatal crash data until a hacker found it

Tesla only acknowledged that it had received the data once police took the car’s damaged infotainment system and autopilot control unit to a Tesla technician for diagnosis, and at that time the local collision snapshot was considered unrecoverable.

That’s where the hacker, identified only as @greentheonly, his username on X, came in. Greentheonly told The Washington Post that “for any reasonable person, it was obvious the data was there.”

During the trial, Tesla told the court that it hadn’t hidden the data but had lost it. After acknowledging that @greentheonly had retrieved the snapshot locally from the car, the company’s lawyer told the Post that Tesla’s data handling practices were “clumsy” and that another search turned up the data.

“We didn’t think we had it, and we found out we did… And, thankfully, we did because this is an amazingly helpful piece of information,” said Tesla’s lawyer, Joel Smith.

Tesla denied having fatal crash data until a hacker found it Read More »

zuckerberg’s-ai-hires-disrupt-meta-with-swift-exits-and-threats-to-leave

Zuckerberg’s AI hires disrupt Meta with swift exits and threats to leave


Longtime acolytes are sidelined as CEO directs biggest leadership reorganization in two decades.

Meta CEO Mark Zuckerberg during the Meta Connect event in Menlo Park, California, on September 25, 2024. Credit: Getty Images | Bloomberg

Within days of joining Meta, Shengjia Zhao, co-creator of OpenAI’s ChatGPT, had threatened to quit and return to his former employer, in a blow to Mark Zuckerberg’s multibillion-dollar push to build “personal superintelligence.”

Zhao went as far as to sign employment paperwork to go back to OpenAI. Shortly afterwards, according to four people familiar with the matter, he was given the title of Meta’s new “chief AI scientist.”

The incident underscores Zuckerberg’s turbulent effort to direct the most dramatic reorganisation of Meta’s senior leadership in the group’s 20-year history.

One of the few remaining Big Tech founder-CEOs, Zuckerberg has relied on longtime acolytes such as Chief Product Officer Chris Cox to head up his favored departments and build out his upper ranks.

But in the battle to dominate AI, the billionaire is shifting towards a new and recently hired generation of executives, including Zhao, former Scale AI CEO Alexandr Wang, and former GitHub chief Nat Friedman.

Current staff are adapting to the reinvention of Meta’s AI efforts as the newcomers seek to flex their power while adjusting to the idiosyncrasies of working within a sprawling $1.95 trillion giant with a hands-on chief executive.

“There’s a lot of big men on campus,” said one investor who is close with some of Meta’s new AI leaders.

Adding to the tumult, a handful of new AI staff have already decided to leave after brief tenures, according to people familiar with the matter.

This includes Ethan Knight, a machine-learning scientist who joined the company weeks ago. Another, Avi Verma, a former OpenAI researcher, went through Meta’s onboarding process but never showed up for his first day, according to a person familiar with the matter.

In a tweet on X on Wednesday, Rishabh Agarwal, a research scientist who started at Meta in April, announced his departure. He said that while Zuckerberg and Wang’s pitch was “incredibly compelling,” he “felt the pull to take on a different kind of risk,” without giving more detail.

Meanwhile, Chaya Nayak and Loredana Crisan, generative AI staffers who had worked at Meta for nine and 10 years respectively, are among the more than half a dozen veteran employees to announce they are leaving in recent days. Wired first reported some details of recent exits, including Zhao’s threatened departure.

Meta said: “We appreciate that there’s outsized interest in seemingly every minute detail of our AI efforts, no matter how inconsequential or mundane, but we’re just focused on doing the work to deliver personal superintelligence.”

A spokesperson said Zhao had been scientific lead of the Meta superintelligence effort from the outset, and the company had waited until the team was in place before formalising his chief scientist title.

“Some attrition is normal for any organisation of this size. Most of these employees had been with the company for years, and we wish them the best,” they added.

Over the summer, Zuckerberg went on a hiring spree to coax AI researchers from competitors such as OpenAI and Apple with the promise of nine-figure sign-on bonuses and access to vast computing resources, in a bid to catch up with rival labs.

This month, Meta announced it was restructuring its AI group—recently renamed Meta Superintelligence Lab (MSL)—into four distinct teams. It is the fourth overhaul of its AI efforts in six months.

“One more reorg and everything will be fixed,” joked Meta research scientist Mimansa Jaiswal on X last week. “Just one more.”

Overseeing all of Meta’s AI efforts is Wang, a well-connected and commercially minded Silicon Valley entrepreneur, who was poached by Zuckerberg as part of a $14 billion investment in his Scale data labeling group.

The 28-year-old is heading Zuckerberg’s most secretive new department known as “TBD”—shorthand for “to be determined”—which is filled with marquee hires.

In one of the new team’s first moves, Meta is no longer actively working on releasing its flagship Llama Behemoth model to the public, after it failed to perform as hoped, according to people familiar with the matter. Instead, TBD is focused on building newer cutting-edge models.

Multiple company insiders describe Zuckerberg as deeply invested and involved in the TBD team, while others criticize him for “micromanaging.”

Wang and Zuckerberg have struggled to align on a timeline to achieve the chief executive’s goal of reaching superintelligence, or AI that surpasses human capabilities, according to another person familiar with the matter. The person said Zuckerberg has urged the team to move faster.

Meta said this allegation was “manufactured tension without basis in fact that’s clearly being pushed by dramatic, navel-gazing busybodies.”

Wang’s leadership style has grated on some, according to people familiar with the matter, who noted he does not have previous experience managing teams across a Big Tech corporation.

One former insider said some new AI recruits have felt frustrated by the company’s bureaucracy and internal competition for resources that they were promised, such as access to computing power.

“While TBD Labs is still relatively new, we believe it has the greatest compute-per-researcher in the industry, and that will only increase,” Meta said.

Wang and other former Scale staffers have struggled with some of the idiosyncratic ways of working at Meta, according to someone familiar with his thinking, for example having to adjust to not having revenue goals as they once did as a startup.

Despite teething problems, some have celebrated the leadership shift, including the appointment of popular entrepreneur and venture capitalist Friedman as head of Products and Applied Research, the team tasked with integrating the models into Meta’s own apps.

The hiring of Zhao, a top technical expert, has also been regarded as a coup by some at Meta and in the industry, who feel he has the decisiveness to propel the company’s AI development.

The shake-up has partially sidelined other Meta leaders. Yann LeCun, Meta’s chief AI scientist, has remained in the role but now reports to Wang.

Ahmad Al-Dahle, who led Meta’s Llama and generative AI efforts earlier in the year, has not been named as head of any teams. Cox remains chief product officer, but Wang reports directly to Zuckerberg, cutting Cox out of overseeing generative AI, an area that was previously under his purview.

Meta said that Cox “remains heavily involved” in its broader AI efforts, including overseeing its recommendation systems.

Going forward, Meta is weighing potential cuts to the AI team, one person said. In a memo shared with managers last week, seen by the Financial Times, Meta said that it was “temporarily pausing hiring across all [Meta Superintelligence Labs] teams, with the exception of business critical roles.”

Wang’s staff would evaluate requested hires on a case-by-case basis, but the freeze “will allow leadership to thoughtfully plan our 2026 headcount growth as we work through our strategy,” the memo said.

© 2025 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.

Zuckerberg’s AI hires disrupt Meta with swift exits and threats to leave Read More »

today’s-game-consoles-are-historically-overpriced

Today’s game consoles are historically overpriced

Overall, though, you can see a clear and significant downward trend in the year-over-year pricing for game consoles released before 2016. After three years on the market, the median game console during this period cost less than half as much (on an inflation-adjusted basis) as it did at launch. Consoles that stuck around on the market long enough could expect further slow price erosion over time, until they were selling for roughly 43 percent of their launch price in year five and about 33 percent in year eight.

That kind of extreme price-cutting is a distant memory for today’s game consoles. By year three, the median console currently on the market costs about 85 percent of its real launch price, thanks to the effects of inflation. By year five, that median launch price ratio for modern consoles actually increases to 92 percent, thanks to the nominal price increases that many consoles have seen in their fourth or fifth years on the market. And the eight-year-old Nintendo Switch is currently selling for about 86 percent of its inflation-adjusted launch price, or more than 50 percentage points higher than the median trend for earlier long-lived consoles.

While the data is noisy, the overall trend in older console pricing over time is very clear. Credit: Kyle Orland
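To make the arithmetic concrete: the launch price ratio used here is just a console’s current nominal price divided by its inflation-adjusted launch price. A minimal Python sketch, using placeholder CPI values and prices rather than the article’s actual dataset:

```python
# Sketch of the "inflation-adjusted launch price ratio" described above.
# The CPI values and prices are illustrative placeholders, not the
# article's actual data.

def real_price_ratio(launch_price, launch_cpi, current_price, current_cpi):
    """Current price as a fraction of the inflation-adjusted launch price."""
    launch_price_in_current_dollars = launch_price * (current_cpi / launch_cpi)
    return current_price / launch_price_in_current_dollars

# Hypothetical console: launched at $299 (CPI 218), still $299 today (CPI 320).
ratio = real_price_ratio(299, 218, 299, 320)
print(f"{ratio:.0%} of inflation-adjusted launch price")  # -> 68%
```

By this measure, a console whose sticker price never moves still drifts slowly downward in real terms, which is why the 85 to 92 percent figures for modern consoles are so striking.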

To be fair, today’s game consoles are not the most expensive the industry has ever seen. Systems like the Atari 2600, Intellivision, Neo Geo, and 3DO launched at prices that would be well over $1,000 in 2025 money. More recently, systems like the PS3 ($949.50 at launch in 2025 dollars) and Xbox One ($689.29 at launch in 2025 dollars) were significantly pricier than the $300 to $600 range that encompasses most of today’s consoles.

But when classic consoles launched at such high prices, those prices never lasted very long. Even the most expensive console launches of the past dropped in price quickly enough that, by year three or so, they were down to inflation-adjusted prices comparable to today’s consoles. And classic consoles that launched at more reasonable prices usually saw price cuts that took them well into the sub-$300 range (in 2025 dollars) within a few years, making them a relative bargain from today’s perspective.

Today’s game consoles are historically overpriced Read More »

rocket-report:-spacex-achieved-daily-launch-this-week;-ula-recovers-booster

Rocket Report: SpaceX achieved daily launch this week; ULA recovers booster


Firefly Aerospace reveals why its Alpha booster exploded after launch in April.

Starship and its Super Heavy booster ascend through a clear sky over Starbase, Texas, on Tuesday evening. A visible vapor cone enveloped the rocket as it passed through maximum aerodynamic pressure and the speed of sound. Credit: Stephen Clark/Ars Technica

Welcome to Edition 8.08 of the Rocket Report! What a week it’s been for SpaceX. The company completed its first successful Starship test flight in nearly a year, and while it wasn’t perfect, it sets up SpaceX for far more ambitious tests ahead. SpaceX’s workhorse rocket, the Falcon 9, launched six times since our last edition of the Rocket Report. Many of these missions were noteworthy in their own right, including the launch of the US military’s X-37B spaceplane, an upgraded Dragon capsule to boost the International Space Station to a higher orbit, and the record 30th launch and landing of a flight-proven Falcon 9 booster. All told, that’s seven SpaceX launches in seven days.

As always, we welcome reader submissions. If you don’t want to miss an issue, please subscribe using the box below (the form will not appear on AMP-enabled versions of the site). Each report will include information on small-, medium-, and heavy-lift rockets, as well as a quick look ahead at the next three launches on the calendar.

Firefly announces cause of Alpha launch failure. Firefly Aerospace closed the investigation into the failure of one of its Alpha rockets during an April mission for Lockheed Martin and received clearance from the FAA to resume launches, Payload reports. The loss of the launch vehicle was a dark cloud hanging over the company’s otherwise successful IPO this month. The sixth flight of Firefly’s Alpha rocket launched in April from Vandenberg Space Force Base, California, and failed when its first stage booster broke apart milliseconds after stage separation. This created a shockwave that destroyed the engine nozzle extension on the second stage, damaging the engine; the second stage then ran out of propellant seconds before attaining orbital velocity. Both stages ultimately fell into the Pacific Ocean.

Too much stress … Investigators concluded that “plume induced flow separation” caused the failure. The phenomenon occurs when a rocket’s exhaust disrupts airflow around the vehicle in flight. In this case, Firefly said the rocket was flying at a higher angle of attack than prior missions, which resulted in the flow separation and created intense heat that broke the first stage apart just after it jettisoned from the second stage. Firefly will increase heat shielding on the first stage of the rocket and fly at reduced angles of attack on future missions. Alpha has now launched six times since 2021, with only two complete successes. Firefly said it was working on setting a date for the seventh Alpha launch. (submitted by EllPeaTea)


ESA books a ticket on European launchers. The European Space Agency has awarded launch service contracts to Avio and Isar Aerospace under its Flight Ticket Initiative, European Spaceflight reports. Announced in October 2023, the Flight Ticket Initiative is a program run jointly by ESA and the European Union that offers subsidized flight opportunities for European companies and organizations seeking to demonstrate new satellite technologies in orbit. The initiative is part of ESA’s strategy to foster the continent’s commercial space industry, offering institutional funding to support satellite and launch companies. Avio won contracts to launch three small European space missions as secondary payloads on Vega C rockets flying into low-Earth orbit. Isar Aerospace will launch two small satellite missions to orbit for European companies.

No other options … Avio and Isar Aerospace were the obvious contenders for the Flight Ticket Initiative from a pool of five European companies eligible for launch awards. The other companies, PLD Space, Orbex, and Rocket Factory Augsburg, haven’t launched their orbital-class rockets yet. Avio, based in Italy, builds the now-operational Vega C rocket, and Germany’s Isar Aerospace launched its first Spectrum rocket earlier this year, though it failed to reach orbit. Avio’s selection replaces Arianespace, which was originally part of the Flight Ticket Initiative and previously handled marketing and sales for the Vega rocket; ESA transferred eligibility to Avio after the Vega program split from Arianespace. (submitted by EllPeaTea)

Canadian rocket company ready for launch. NordSpace is preparing to launch its 6-meter-tall Taiga rocket from Newfoundland, CBC reports. It will be a suborbital launch, meaning it won’t orbit Earth, but NordSpace says the launch will be the first of a Canadian commercial rocket from a Canadian commercial spaceport. The rocket is powered by a 3D-printed liquid-fueled engine and is a stepping stone to an orbital-class rocket NordSpace is developing called Tundra, scheduled to debut in 2027. The smaller Taiga rocket will launch partially fueled and fire its engine for approximately 60 seconds, according to NordSpace.

Newfoundland to space … The launch site, called the Atlantic Spaceport Complex, is located on the Atlantic coast near the town of St. Lawrence, Newfoundland. It will have two launch pads, one for suborbital flights like Taiga, and another for orbital missions by the Tundra rocket and other launch vehicles from US and European companies. The Taiga launch is scheduled no earlier than Friday morning at 5:00 am EDT (09:00 UTC). NordSpace says it is a “fully privately funded and managed initiative crucial for Canada to build a space launch capability that supports our security, economy, and sovereignty.” (submitted by Matthew P)

SpaceX’s reuse idea isn’t so dumb after all. A Falcon 9 rocket launched early Thursday from Kennedy Space Center, Florida, with another batch of Starlink Internet satellites. These types of missions launch multiple times per week, but this flight was special. The first stage of the Falcon 9, designated Booster 1067, launched and landed on a drone ship in the Atlantic Ocean, completing its 30th flight to space and back, Ars reports. This is a new record for a reusable orbital-class booster stage and comes less than 24 hours after a preceding SpaceX launch from Florida that marked the 400th Falcon 9 landing on a drone ship since the first offshore recovery in 2016.

30 going for 40 … SpaceX is now aiming for at least 40 launches per Falcon 9 first stage, four times as many flights as the company’s original target for Falcon 9 booster reuse. Many people in the industry were skeptical about SpaceX’s approach to reuse. In the mid-2010s, both the European and Japanese space agencies were looking to develop their next generation of rockets. In both cases, Europe with the Ariane 6 and Japan with the H3, the space agencies opted for traditional, expendable rockets instead of pushing toward reuse. In the United States, the main competitor to SpaceX has historically been United Launch Alliance. Their reaction to SpaceX’s plan to reuse first stages a decade ago was dismissive. ULA dubbed its plan to reuse just the engine section of its Vulcan rocket “Smart Reuse” a few years ago. But ULA hasn’t even attempted to recover the engines from the Vulcan core stage yet, and reuse is still at least several years away.

Russia nears debut of Soyuz-5 rocket. In recent comments to the Russian state-run media service TASS, the chief of Roscosmos said the country’s newest rocket, the Soyuz-5, should take flight for the first time before the end of this year, Ars reports. “Yes, we are planning for December,” said Dmitry Bakanov, the director of Roscosmos, Russia’s main space corporation. “Everything is in place.” According to the report, translated for Ars by Rob Mitchell, the debut launch of Soyuz-5 will mark the first of several demonstration flights, with full operational service not expected to begin until 2028. It will launch from the Baikonur spaceport in Kazakhstan.

Breaking free of Ukraine … From an innovation standpoint, the Soyuz-5 vehicle does not stand out. It has been a decade in the making and is fully expendable, unlike a lot of newer medium-lift rockets coming online in the next several years. However, for Russia, this is an important advancement because it breaks some of the country’s dependency on Ukraine for launch technology. The new rocket is also named Irtysh, after a river that flows through Russia and Kazakhstan. The rocket has been in development since 2016 and largely repurposes older technology. But for Russia, a key advantage is that rocket elements formerly made in Ukraine are now manufactured in Russia.

SpaceX launches mission to reboost the ISS. SpaceX completed its 33rd cargo delivery to the International Space Station (ISS) early Monday, when a Dragon supply ship glided to an automated docking with more than 5,000 pounds of scientific experiments and provisions for the lab’s seven-person crew, Ars reports. The resupply flight is part of the normal rotation of cargo and crew missions that keep the space station operating, but this one carries something new. What’s different with this mission is a new rocket pack mounted inside the Dragon spacecraft’s rear trunk section. In the coming weeks, SpaceX and NASA will use this first-of-its-kind propulsion system to begin boosting the altitude of the space station’s orbit.

A rocket on a rocket … SpaceX engineers installed two small Draco rocket engines in the trunk of the Dragon spacecraft. The thrusters have their own dedicated propellant tanks and will operate independently of 16 other Draco thrusters used to maneuver Dragon on its journey to the ISS. When NASA says it’s the right time, SpaceX controllers will command the Draco thrusters to ignite and gently accelerate the massive 450-ton space station. All told, the reboost kit can add about 20 mph, or 9 meters per second, to the space station’s already-dizzying speed. Maintaining the space station’s orbit has previously been the responsibility of Russia.
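Those numbers pass a quick back-of-the-envelope check. The sketch below uses only the figures quoted above (a ~450-ton station gaining ~9 m/s); the specific-impulse value for the storable-propellant Draco thrusters is an assumed round number for illustration, not a SpaceX-confirmed spec:

```python
# Rough sanity check on the reboost figures quoted above. The specific
# impulse is an assumed round number for illustration only.

STATION_MASS_KG = 450_000    # ~450 metric tons, per the article
DELTA_V_MS = 9.0             # ~20 mph, per the article

impulse_ns = STATION_MASS_KG * DELTA_V_MS     # impulse = mass * delta-v
print(f"Total impulse required: {impulse_ns / 1e6:.2f} MN*s")  # ~4.05 MN*s

ASSUMED_ISP_S = 300.0        # assumed Isp for a storable-propellant thruster
G0 = 9.81                    # standard gravity, m/s^2
propellant_kg = impulse_ns / (ASSUMED_ISP_S * G0)
print(f"Propellant consumed: {propellant_kg:,.0f} kg")  # ~1,400 kg
```

Spread across weeks of short burns, that works out to a steady but manageable draw on the kit’s dedicated propellant tanks.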

X-37B rides with SpaceX again. The US military’s reusable winged spaceship rocketed back into orbit from Florida on August 21 atop a SpaceX rocket, kicking off a mission that will, among other things, demonstrate how future spacecraft can navigate without relying on GPS signals, Ars reports. The core of the navigation experiment is what the Space Force calls the “world’s highest performing quantum inertial sensor ever used in space.” The spaceplane also hosts a laser inter-satellite communications demo. This is the eighth flight of the X-37B spaceplane, and the third to launch with SpaceX.

Back to LEO … This mission launched on a Falcon 9 rocket into low-Earth orbit (LEO) a few hundred miles above the Earth. This marks a return to LEO after the previous X-37B mission flew on a Falcon Heavy rocket into a much higher orbit. Many of the spaceplane’s payloads have been classified, but officials typically identify a handful of unclassified experiments flying on each X-37B mission. Past X-37B missions have also deployed small satellites into orbit before returning to Earth for a runway landing at Kennedy Space Center, Florida, or Vandenberg Space Force Base, California.

Rocket Lab cuts the ribbon on Neutron launch pad. Launch Complex 3, located at the Virginia Spaceport Authority’s Mid-Atlantic Regional Spaceport and home to Rocket Lab’s newest rocket, Neutron, is now complete and celebrated its official opening Thursday, WAVY-TV reports. Officials said Launch Complex 3 will bring the largest orbital launch capacity in the spaceport’s history with Neutron, Rocket Lab’s reusable medium-lift vehicle, capable of launching 33,000 pounds (15 metric tons) to space for commercial constellations, national security, and interplanetary missions.

Not budging … “We’re trying as hard as we can to get this on the pad by the end of the year and get it away,” said Peter Beck, Rocket Lab’s founder and CEO. Beck is holding to his hope that the Neutron rocket will be ready to fly in the next four months, but time is running out to make this a reality. The Neutron rocket will be Rocket Lab’s second orbital-class launch vehicle after the Electron, which can place payloads of several hundred pounds in orbit. Electron has a launch pad in Virginia, too, but most Electron rockets take off from New Zealand.

Starship completes a largely successful test flight. SpaceX launched the 10th test flight of the company’s Starship rocket Tuesday evening, sending the stainless steel spacecraft halfway around the world to an on-target splashdown in the Indian Ocean, Ars reports. The largely successful mission for the world’s largest rocket was an important milestone for SpaceX’s Starship program after months of repeated setbacks, including three disappointing test flights and a powerful explosion on the ground that destroyed the ship that engineers were originally readying for this launch.

Lessons to learn … For the first time, SpaceX engineers received data on the performance of the ship’s upgraded heat shield and control flaps during reentry back into the atmosphere. The three failed Starship test flights to start the year ended before the ship reached reentry. Elon Musk, SpaceX’s founder and CEO, has described developing a durable, reliable heat shield as the most pressing challenge for making Starship a fully and rapidly reusable rocket. But there were lessons to learn from Tuesday’s flight. A large section of the ship transitioned from its original silver color to a rusty hue of orange and brown by the time it reached the Indian Ocean. Officials didn’t immediately address this or say whether it was anticipated.

ULA recovering boosters, too. United Launch Alliance decided to pull four strap-on solid rocket boosters from the Atlantic Ocean after their use on the company’s most recent launch. Photos captured by Florida photographer Jerry Pike showed a solid rocket motor casing on a ship just off the coast of Cape Canaveral. Tory Bruno, ULA’s president and CEO, wrote on X that the booster was one of four flown on the USSF-106 mission earlier this month, which marked the third flight of ULA’s Vulcan rocket and the first with a US national security payload.

A GEM from the sea … The boosters, built by Northrop Grumman, are officially called Graphite Epoxy Motors, or GEMs. They jettison from the Vulcan rocket less than two minutes after liftoff and fall into the ocean. They’re not designed for reuse, but ULA decided to recover this set of four from the Atlantic for inspections. The company also raised from the sea two motors from the previous Vulcan launch last year after one of them suffered a nozzle failure during launch. Bruno wrote on X that “performance and ballistics were spot on” with all four boosters from the more recent USSF-106 mission, but that engineers decided to go ahead and recover them to close out a “nice data set” from inspections of now six recovered motors—two from last year and four this year.

Next three launches

Aug. 30: Falcon 9 | Starlink 17-7 | Vandenberg Space Force Base, California | 03:09 UTC

Aug. 31: Falcon 9 | Starlink 10-14 | Cape Canaveral Space Force Station, Florida | 11:15 UTC

Sept. 3: Falcon 9 | Starlink 17-8 | Vandenberg Space Force Base, California | 02:33 UTC

Stephen Clark is a space reporter at Ars Technica, covering private space companies and the world’s space agencies. Stephen writes about the nexus of technology, science, policy, and business on and off the planet.

Rocket Report: SpaceX achieved daily launch this week; ULA recovers booster Read More »

high-severity-vulnerability-in-passwordstate-credential-manager-patch-now.

High-severity vulnerability in Passwordstate credential manager. Patch now.

The maker of Passwordstate, an enterprise-grade password manager for storing companies’ most privileged credentials, is urging customers to promptly install an update that fixes a high-severity vulnerability hackers can exploit to gain administrative access to their vaults.

The authentication bypass allows hackers to create a URL that accesses an emergency access page for Passwordstate. From there, an attacker could pivot to the administrative section of the password manager. A CVE identifier isn’t yet available.
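Click Studios hasn’t published technical details, so the exact mechanism is unknown. Purely as an illustration of this bug class, here is a hypothetical Python sketch of how a “carefully crafted URL” can slip past a naive path-based authorization check; none of this is Passwordstate’s actual code:

```python
# Hypothetical illustration of URL-based auth-bypass bugs in general.
# This is NOT Passwordstate's code; the real flaw's details are unpublished.

from urllib.parse import unquote, urlparse

PROTECTED_PREFIXES = ("/admin",)

def naive_is_protected(raw_path: str) -> bool:
    # Buggy: checks the raw, un-normalized request path.
    return raw_path.startswith(PROTECTED_PREFIXES)

def robust_is_protected(raw_path: str) -> bool:
    # Percent-decode, then collapse "." and ".." segments before checking.
    path = unquote(urlparse(raw_path).path)
    parts = []
    for segment in path.split("/"):
        if segment == "..":
            if parts:
                parts.pop()
        elif segment not in ("", "."):
            parts.append(segment)
    normalized = "/" + "/".join(parts)
    return normalized.startswith(PROTECTED_PREFIXES)

crafted = "/emergency/%2e%2e/admin"   # percent-encoded path traversal
print(naive_is_protected(crafted))    # False -- the check is bypassed
print(robust_is_protected(crafted))   # True  -- access correctly gated
```

Whatever the actual flaw turns out to be, the general lesson holds: authorization decisions keyed off a URL must operate on the fully decoded, normalized path.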

Safeguarding enterprises’ most privileged credentials

Click Studios, the Australia-based maker of Passwordstate, says the credential manager is used by 29,000 customers and 370,000 security professionals. The product is designed to safeguard organizations’ most privileged and sensitive credentials. Among other things, it integrates with Active Directory, the service Windows network admins use to create, modify, and manage user accounts. It can also be used for handling password resets, event auditing, and remote session logins.

On Thursday, Click Studios notified customers that it had released an update that patches two vulnerabilities.

The authentication bypass vulnerability is “associated with accessing the core Passwordstate Products’ Emergency Access page, by using a carefully crafted URL, which could allow access to the Passwordstate Administration section,” Click Studios said. The company said the severity level of the vulnerability was high.

High-severity vulnerability in Passwordstate credential manager. Patch now. Read More »

ai-#131-part-2:-various-misaligned-things

AI #131 Part 2: Various Misaligned Things

It doesn’t look good, on many fronts, especially taking a stake in Intel.

We continue.

  1. America Extorts 10% of Intel. Nice company you got there. Who’s next?

  2. The Quest For No Regulations Whatsoever. a16z is at it again, Brockman joins.

  3. The Quest for Sane Regulations. Dean Ball surveys the state legislative landscape.

  4. Chip City. Nvidia beats earnings, Huawei plans to triple chip production.

  5. Once Again The Counterargument On Chip City. Sriram Krishnan makes a case.

  6. Power Up, Power Down. I for one do not think windmills are destroying America.

  7. People Really Do Not Like AI. Some dislike it more than others. A lot more.

  8. Did Google Break Their Safety Pledges With Gemini Pro 2.5? I think they did.

  9. Safety Third at xAI. Grok 4 finally has a model card. Better late than never.

  10. Misaligned! Reward hacking confirmed to cause emergent misalignment.

  11. Aligning a Smarter Than Human Intelligence is Difficult. Filter the training data?

  12. How Are You Doing? OpenAI and Anthropic put each other to the safety test.

  13. Some Things You Miss When You Don’t Pay Attention. The things get weird fast.

  14. Other People Are Not As Worried About AI Killing Everyone. A new record.

  15. The Lighter Side. South Park sometimes very much still has it.

USA successfully extorts a 10% stake in Intel. Scott Lincicome is here with the ‘why crony capitalism is terrible’ report, including the fear that the government might go after your company next, the fear that we are going to bully people into buying Intel products for no reason, the chance Intel will now face new tariffs overseas, and more. Remember the fees they’re extorting from Nvidia and AMD.

Scott Lincicome: I defy you to read these paras and not see the risks – distorted decision-making, silenced shareholders, coerced customers, etc – raised by this deal. And it’s just the tip of the iceberg.

FT: Intel said the government would purchase the shares at $20.47 each, below Friday’s closing price of $24.80, but about the level where they traded early in August. Intel’s board had approved the deal, which does not need shareholder approval, according to people familiar with the matter.

The US will also receive a five-year warrant, which allows it to purchase an additional 5 per cent of the group at $20 a share. The warrant will only come good if Intel jettisons majority ownership of its foundry business, which makes chips for other companies.

Some investors have pushed for Intel to cut its losses and fully divest its manufacturing unit. Intel chief Lip-Bu Tan, who took the reins in March, has so far remained committed to keeping it, albeit with a warning that he could withdraw from the most advanced chipmaking if he was unable to land big customers.
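For rough scale, the disclosed per-share numbers alone imply a meaningful discount to market. A quick check using only the prices quoted above (share counts aren’t given here, so this stays at the per-share level):

```python
# Per-share arithmetic on the prices quoted in the FT excerpt above.

purchase_price = 20.47   # government's price per share
market_close = 24.80     # Friday's closing price per share
warrant_strike = 20.00   # strike on the additional 5% warrant

discount = (market_close - purchase_price) / market_close
print(f"Discount to market: {discount:.1%}")   # ~17.5%
print(f"Warrant pays off above ${warrant_strike:.2f} per share")
```

Seventeen and a half percent below market, before even counting the warrant.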

Scott Lincicome: Also, this is wild: by handing over the equity stake to the US govt, Intel no longer has to meet the CHIPS Act conditions (i.e., building US-based fabs) that, if met, would allow them to access the remaining billions in taxpayer funds?!?! Industrial policy FTW, again.

Washington will be Intel’s single largest shareholder, and have a massive political/financial interest in the company’s operations here and abroad. If you think this share will remain passive, I’ve got an unfinished chip factory in Ohio to sell you.

Narrator: it turns out the stake isn’t even that passive to begin with.

Scott also offers us this opinion in Washington Post Editorial form.

Jacob Perry: Basically, Intel gave 10% of its equity to the President of the United States just to ensure he would leave them alone. There’s a term for this but I can’t think of it at the moment.

Nikki Haley (remember her?): Biden was wrong to subsidize the private sector with the Chips Act using our tax dollars. The counter to Biden is not to lean in and have govt own part of Intel. This will only lead to more government subsidies and less productivity. Intel will become a test case of what not to do.

As is usually the case, the closer you look at the details, the worse it gets. This deal does give Intel something in return, but that something is letting Intel off the hook on its commitments to build new plants, so that seems worse yet again.

Samsung is reportedly ‘exploring partnerships with American companies to ‘please’ the Trump administration and ensure that its regional operations aren’t affected by hefty tariffs.’

To be clear: And That’s Terrible.

Tyler Cowen writes against this move, leaving no doubt as to the implications and vibes by saying Trump Seizes the Means of Production at Intel. He quite rightfully does not mince words. A good rule of thumb these days is that if Tyler Cowen outright says a Trump move was no good and very bad, the move is both importantly damaging and completely indefensible.

Is there a steelman of this?

Ben Thompson says yes, and he’s the man to provide it; despite agreeing that Lincicome makes great points, he actually supports the deal. This surprised me, since Ben is normally very much ordinary business uber alles, and he clearly appreciates all the reasons such an action is terrible.

So why, despite all the reasons this is terrible, does Ben support doing it anyway?

Ben presents the problem as the need for Intel to make wise long term decisions towards being competitive and relevant in the 2030s, and that it would take too long for other companies to fill the void if Intel failed, especially without a track record. Okay, sure, I can’t confirm but let’s say that’s fair.

Next, Ben says that Intel’s chips and process are actually pretty good, certainly good enough to be useful, and the problem is that Intel can’t credibly promise to stick around to be a long term partner. Okay, sure, again, I can’t confirm but let’s say that’s true.

Ben’s argument is next that Intel’s natural response to this problem is to give up and become another TSMC customer, but that is against America’s strategic interests.

Ben Thompson: A standalone Intel cannot credibly make this promise.

The path of least resistance for Intel has always been to simply give up manufacturing and become another TSMC customer; they already fab some number of their chips with the Taiwanese giant. Such a decision would — after some very difficult write-offs and wind-down operations — change the company into a much higher margin business; yes, the company’s chip designs have fallen behind as well, but at least they would be on the most competitive process, with a lot of their legacy customer base still on their side.

The problem for the U.S. is that that then means pinning all of the country’s long-term chip fabrication hopes on TSMC and Samsung not just building fabs in the United States, but also building up a credible organization in the U.S. that could withstand the loss of their headquarters and engineering knowhow in their home countries. There have been some important steps in this regard, but at the end of the day it seems reckless for the U.S. to place both its national security and its entire economy in the hands of foreign countries next door to China, allies or not.

Once again, I cannot confirm the economics, but this seems reasonable on both counts. We would like Intel to stand on its own and not depend on TSMC, for national security reasons, and to do that Intel has to be able to be a credible partner.

The next line is where he loses me:

Given all of this, acquiring 10% of Intel, terrible though it may be for all of the reasons Lincicome articulates — and I haven’t even touched on the legality of this move — is I think the least bad option.

Why does America extorting a 10% passive stake in Intel solve these problems, rather than make things worse for all the reasons Lincicome describes?

Because he sees ‘America will distort the free market and strongarm Intel into making chips and other companies into buying Intel chips’ as an advantage, basically?

So much for this being a passive stake in Intel. This is saying Intel has been nationalized. We are going the CCP route of telling Intel how to run its business, to pursue an entirely different corporate strategy or else. We are going the CCP route of forcing companies to buy from the newly state-owned enterprise. And that this is good. Private capital should be forced to prioritize what we care about more.

That’s not the reason Trump says he is doing this, which is more ‘I was offered the opportunity to extort $10 billion in value and I love making deals.’ Now he’s looking for other similar ‘deals’ to make, if you know what’s good for you, as it seems extortion of equity in private businesses is the new official White House policy?

Walter Bloomberg: 🚨 TRUMP ON U.S. STAKES IN COMPANIES: I WANT TO TRY TO GET AS MUCH AS I CAN

It is hard to overstate how much worse this is than simply raising corporate tax rates.

As in, no, Intel is not a special case. But let’s treat Intel as a special case: suppose in theory it was one, and you hoped to contain the damage, to American free enterprise and the willingness to invest capital, that comes from the constant threat of extortion and of success being chosen by fiat, or what Republicans used to call ‘picking winners and losers’ except with the quiet part said out loud.

Why do you need or want to take a stake in Intel in order to do all this? We really want to be strongarming American companies into making the investment and purchasing decisions the government wants? If this is such a strategic priority, why not do this with purchase guarantees, loan guarantees and other subsidies? It would not be so difficult to make it clear Intel will not be allowed to fail except if it outright failed to deliver the chips, which isn’t something that we can guard against either way.

Why do we think socialism with Trumpian characteristics is the answer here?

I’m fine with the idea that Intel needs to be Too Big To Fail, and it should be the same kind of enterprise as Chase Bank. But there’s a reason we aren’t extorting a share of Chase Bank and then forcing customers to choose Chase Bank or else. Unless we are. If I were Jamie Dimon I’d be worried that we’re going to try? Or worse, that we’re going to do it to Citibank first?

That was the example that came to mind first, but it turns out Trump’s next target for extortion looks to be Lockheed Martin. Does this make you want to invest in strategically important American companies?

As a steelman exercise of taking the stake in Intel, Ben Thompson’s attempt is good. That is indeed as good a steelman as I’ve seen or could come up with, so great job.

Except that even with all that, even the good version of taking the stake would still be a terrible idea; you can simply do all this without taking the stake.

And even if the Ben Thompson steelman version of the plan was the least bad option? That’s not what we are doing here, as evidenced by ‘I want to try and get as much as I can’ in stakes in other companies. This isn’t a strategic plan to create customer confidence that Intel will be considered Too Big To Fail. It’s the start of a pattern of extortion.

Thus, 10 out of 10 for making a good steelman but minus ten million for actually supporting the move for real?

Again, there’s a correct and legal way for the American government to extort American companies, and it’s called taxes.

Tyler Cowen wrote this passage on The History of American corporate nationalization for another project a while back, emphasizing how much America benefits from not nationalizing companies or playing favorites. He thought he would share it in light of recent events.

I am Jack’s complete lack of surprise.

Peter Wildeford: “Obviously we’d aggressively support all regulation” [said Altman].

Obviously.

Techmeme: a16z, OpenAI’s Greg Brockman, and others launch Leading the Future, a pro-AI super PAC network with $100M+ in funding, hoping to emulate crypto PAC Fairshake (Wall Street Journal).

Amrith Ramkumar and Brian Schwartz (WSJ): Venture-capital firm Andreessen Horowitz and OpenAI President Greg Brockman are among those helping launch and fund Leading the Future.

Silicon Valley is putting more than $100 million into a network of political-action committees and organizations to advocate against strict artificial-intelligence regulations, a signal that tech executives will be active in next year’s midterm elections.

The organization said it isn’t pushing for total deregulation but wants sensible guardrails.

Their ‘a16z is lobbying because it wants sensible guardrails and not total deregulation’ t-shirt is raising questions they claim are answered by the shirt.

OpenAI is helping fund this via Brockman, to the total tune of $100 million.

Which is a lot.

Seán Ó hÉigeartaigh: Just one more entity that will, alone, add up to a big chunk of all the funding in non-profit-incentivised AI policy. It’s an increasingly unfair fight, and the result won’t be policy that serves the public.

Daniel Kokotajlo: That’s a lot of money. For context, I remember talking to a congressional staffer a few months ago who basically said that a16z was spending on the order of $100M on lobbying and that this amount was enough to make basically every politician think “hmm, I can raise a lot more if I just do what a16z wants” and that many did end up doing just that. I was, and am, disheartened to hear how easily US government policy can be purchased.

So now we can double that. They’re (perhaps legally, this is our system) buying the government, or at least quite a lot of influence on it. As usual, it’s not that everyone has a price but that the price is so cheap.

As per usual, the plan is to frame ‘any regulation whatsoever, at all, of any kind’ as ‘you want to slow down AI and Lose To China.’

WSJ: “There is a vast force out there that’s looking to slow down AI deployment, prevent the American worker from benefiting from the U.S. leading in global innovation and job creation and erect a patchwork of regulation,” Josh Vlasto and Zac Moffatt, the group’s leaders, said in a joint statement. “This is the ecosystem that is going to be the counterforce going into next year.”

The new network, one of the first of its kind focusing on AI policy, hopes to emulate Fairshake, a cryptocurrency-focused super-PAC network.

… Other backers include 8VC managing partner and Palantir Technologies co-founder Joe Lonsdale, AI search engine Perplexity and veteran angel investor Ron Conway.

Industry, and a16z in particular, were already flooding everyone with money. The only difference is now they are coordinating better, and pretending less, and spending more?

They continue to talk about ‘vast forces’ opposing the actual vast force, which was always industry and the massive dollars behind it. The only similarly vast forces are that the public really hates AI, and the physical underlying reality of AI’s future.

Many tech executives worry that Congress won’t pass AI rules, creating a patchwork of state laws that hurt their companies. Earlier this year, a push by some Republicans to ban state AI bills for 10 years was shot down after opposition from other conservatives who opposed a blanket prohibition on any state AI legislation.

And there it is, right in the article, as text. What they are worried about is that we won’t pass a law that says we aren’t allowed to pass any laws.

If you think ‘Congress won’t pass AI laws’ is a call for Congress to pass reasonable AI laws, point to the reasonable AI laws anyone involved has ever said a kind word about, let alone proposed or supported.

The group’s launch coincides with concerns about the U.S. staying ahead of China in the AI race, while Washington has largely shied away from tackling AI policies.

No it doesn’t? These ‘concerns about China’ peaked around January. There has been no additional reason for such concerns in months that wasn’t at least priced in, other than acts of self-sabotage of American energy production.

Dean Ball goes over various bills introduced in various states.

Dean Ball: After sorting out the anodyne laws, there remain only several dozen bills that are substantively regulatory. To be clear, that is still a lot of potential regulation, but it is also not “1,000 bills.”

There are always tons of bills. The trick is to notice which ones actually do anything and also have a chance of becoming law. That’s always a much smaller group.

The most notable trend since I last wrote about these issues is that states have decidedly stepped back from efforts to “comprehensively” regulate AI.

By ‘comprehensively regulate’ Dean means the Colorado-style or EU-style use-based approaches, which we both agree are quite terrible. Dean instead focuses on two other approaches more in vogue now.

Several states have banned (see also “regulated,” “put guardrails on” for the polite phraseology) the use of AI for mental health services.

If the law stopped here, I’d be fine with it; not supportive, not hopeful about the likely outcomes, but fine nonetheless.

I agree with Dean; I don’t support that idea and think it is net harmful, but if you want to talk to an AI you can still talk to an AI, so, so far, it’s not a big deal.

But the Nevada law, and a similar law passed in Illinois, goes further than that. They also impose regulations on AI developers, stating that it is illegal for them to explicitly or implicitly claim of their models that (quoting from the Nevada law):

(a) The artificial intelligence system is capable of providing professional mental or behavioral health care;

(b) A user of the artificial intelligence system may interact with any feature of the artificial intelligence system which simulates human conversation in order to obtain professional mental or behavioral health care; or

(c) The artificial intelligence system, or any component, feature, avatar or embodiment of the artificial intelligence system is a provider of mental or behavioral health care, a therapist, a clinical therapist, a counselor, a psychiatrist, a doctor or any other term commonly used to refer to a provider of professional mental health or behavioral health care.

Did I mention recently that nothing I say in this column is investment or financial advice, legal advice, tax advice or psychological, mental health, nutritional, dietary or medical advice? And just in case, I’m also not ever giving anyone engineering, structural, real estate, insurance, immigration or veterinary advice.

Because you must understand that indeed nothing I have ever said, in any form, ever in my life, has been any of those things, nor do I ever offer or perform any related services.

I would never advise you to say the same, because that might be legal advice.

Similarly, it sounds like AI companies would, under these laws, most definitely also not be saying their AIs can provide mental health advice or services? Okay, sure, I mean, annoying but whatever?

But there is something deeper here, too. Nevada AB 406, and its similar companion in Illinois, deal with AI in mental healthcare by simply pretending it does not exist. “Sure, AI may be a useful tool for organizing information,” these legislators seem to be saying, “but only a human could ever do mental healthcare.”

And then there are hundreds of thousands, if not millions, of Americans who use chatbots for something that resembles mental healthcare every day. Should those people be using language models in this way? If they cannot afford a therapist, is it better that they talk to a low-cost chatbot, or no one at all? Up to what point of mental distress? What should or could the developers of language models do to ensure that their products do the right thing in mental health-related contexts? What is the right thing to do?

Technically via the definition here it is mental healthcare to ‘detect’ that someone might be (among other things) intoxicated, but obviously that is not going to stop me or anyone else from observing that a person is drunk, nor are we going to have to face a licensing challenge if we do so. I would hope. This whole thing is deeply stupid.

So I would presume the right thing to do is to use the best tools available, including things that ‘resemble’ ‘mental healthcare.’ We simply don’t call it mental healthcare.

Similarly, what happens when Illinois HB 1806 says this (as quoted by Dean):

An individual, corporation, or entity may not provide, advertise, or otherwise offer therapy or psychotherapy services, including through the use of Internet-based artificial intelligence, to the public in this State unless the therapy or psychotherapy services are conducted by an individual who is a licensed professional.

Dean Ball: How, exactly, would an AI company comply with this? In the most utterly simple example, imagine that a user says to an LLM “I am feeling depressed and lonely today. Help me improve my mood.” The States of Illinois and Nevada have decided that the optimal experience for their residents is for an AI to refuse to assist them in this basic request for help.

My obvious response is, if this means an AI can’t do it, it also means a friend cannot do it either? Which means that if they say ‘I am feeling depressed and lonely today. Help me improve my mood’ you have to say ‘I am sorry, I cannot do that, because I am not a licensed health professional any more than Claude Opus is’? I mean presumably this is not how it works. Nor would it change if they were somehow paying me?

Dean’s argument is that this is the point:

But the point of these laws isn’t so much to be applied evenly; it is to be enforced, aggressively, by government bureaucrats against deep-pocketed companies, while protecting entrenched interest groups (licensed therapists and public school staff) from technological competition. In this sense these laws resemble little more than the protection schemes of mafiosi and other organized criminals.

There’s a kind of whiplash here that I am used to when reading such laws. I don’t care if it is impossible to comply with the law when fully enforced in a maximally destructive and perverse way, unless someone is suggesting this will actually happen. If the laws are only going to get enforced when you actively try to offer therapist chatbots?

Then yes it would be better to write better laws, and I don’t especially want to protect those people’s roles at all, but we don’t need to talk about what happens if the AI gets told to help improve someone’s mood and the AI suggests going for a walk. Nor would I expect a challenge to that to survive on constitutional grounds.

More dear to my heart, and more important, are bills about Frontier AI Safety. He predicts SB 53 will become law in California, here is his summary of SB 53:

  1. Requires developers of the largest AI models to publish a “safety and security protocol” describing the developers’ process of measuring, evaluating, and mitigating catastrophic risks (risks in which single incidents result in the death of more than 50 people or more than $1 billion in property damage) and dangerous capabilities (expert-level bioweapon or cyberattack advice/execution, engaging in murder, assault, extortion, theft, and the like, and evading developer control).

  2. Requires developers to report to the California Attorney General “critical safety incidents,” which includes theft of model weights (assuming a closed-source model), loss of control over a foundation model resulting in injury or death, any materialization of a catastrophic risk (as defined above), model deception of developers (when the developer is not conducting experiments to try to elicit model deception), or any time a model first crosses dangerous capability thresholds as defined by their developers.

  3. Requires developers to submit to an annual third-party audit, verifying that they comply with their own safety and security protocols, starting after 2030.

  4. Creates whistleblower protections for the employees of the large developers covered by the bill.

  5. Creates a consortium that is charged with “developing a framework” for a public compute cluster (“CalCompute”) owned by the State of California, because for political reasons, Scott Wiener still must pretend like he believes California can afford a public compute cluster. This is unlikely to ever happen, but you can safely ignore this provision of the law; it does not do much or authorize much spending.

The RAISE Act lacks the audit provision described in item (3) above as well as an analogous public compute section (though New York does have its own public compute program). Other than that it mostly aligns with this sketch of SB 53 I have given.

AI policy challenges us to contemplate questions like this, or at least it should. I don’t think SB 53 or RAISE deliver especially compelling answers. At the end of the day, however, these are laws about the management of tail risks—a task governments should take seriously—and I find the tail risks they focus on to be believable enough.

There is a sharp contrast between this skeptical, nitpicky, and reluctant but highly respectful Dean Ball and the Dean Ball who reacted to SB 1047. He still has some objections and concerns, which he discusses. I am more positive on the bills than he is, especially in terms of seeing the benefits, but I consider Dean’s reaction here high praise.

In SB 53 and RAISE, the drafters have shown respect for technical reality, (mostly) reasonable intellectual humility appropriate to an emerging technology, and a measure of legislative restraint. Whether you agree with the substance or not, I believe all of this is worthy of applause.

Might it be possible to pass relatively non-controversial, yet substantive, frontier AI policy in the United States? Just maybe.

Nvidia reported quarterly revenue of $46.7 billion, up 56% year over year, beating both revenue and EPS expectations, and was promptly down 5% in after-hours trading, although it recovered and was only down 0.82% on Thursday. It is correct to treat Nvidia only somewhat beating official estimates as bad news for Nvidia. Market is learning.

Jensen Huang (CEO Nvidia): Right now, the buzz is, I’m sure all of you know about the buzz out there. The buzz is everything sold out. H100 sold out. H200s are sold out. Large CSPs are coming out renting capacity from other CSPs. And so the AI-native start-ups are really scrambling to get capacity so that they could train their reasoning models. And so the demand is really, really high.

Ben Thompson: I made this point a year-and-a-half ago, and it still holds: as long as demand for Nvidia GPUs exceeds supply, then Nvidia sales are governed by the number of GPUs they can make.

I do not fully understand why Nvidia does not raise prices, but given that decision has been made they will sell every chip they can make. Which makes it rather strange to choose to sell worse, and thus less expensive and less profitable, chips to China rather than making better chips to sell to the West. That holds double when you have uncertainty on both ends, where the Americans might not let you sell the chips and the Chinese might not be willing to buy them.

Also, even Ben Thompson, who has called for selling even our best chips to China because he cares more about Nvidia market share than who owns compute, noticed that H20s would sell out if Nvidia offered them for sale elsewhere:

Ben Thompson: One note while I’m here: when the Trump administration first put a pause on H20 sales, I said that no one outside of China would want them; several folks noted that actually several would-be customers would be happy to buy H20s for the prices Nvidia was selling them to China, specifically for inference workloads, but Nvidia refused.

Instead they chose a $5 billion writedown. We are being played.

Ben is very clear that what he cares about is getting China to ‘build on Nvidia chips,’ where the thing being built is massive amounts of compute on top of the compute they can make domestically. I would instead prefer that China not build out this massive amount of compute.

China plans to triple output of chips, primarily Huawei chips, in the next year, via three new plants. This announcement caused stock market moves, so it was presumably news.

What is obviously not news is that China has for a while been doing everything it can to ramp up quality and quantity of its chips, especially AI chips.

This is being framed as ‘supporting DeepSeek’ but it is highly overdetermined that China needs all the chips it can get, and DeepSeek happily runs on everyone’s chips. I continue to not see evidence that any of this wouldn’t have happened regardless of DeepSeek or our export controls. Certainly if I was the PRC, I would be doing all of it either way, and I definitely wouldn’t stop doing it or slow down if any of that changed.

Note that this article claims that DeepSeek is continuing to do its training on Nvidia chips at least for the time being, contra claims it had been told to switch to Huawei (or at least, this suggests they have been allowed to switch back).

Sriram Krishnan responded to the chip production ramp-up by reiterating the David Sacks style case for focusing on market share and ensuring people use our chips, models and ‘tech stack’ rather than on caring about who has the chips. This includes maximizing whether models are trained on our chips (DeepSeek v3 and r1 were trained on Nvidia) and also who uses or builds on top of what models.

Sriram Krishnan: As @DavidSacks says: for the American AI stack to win, we need to maximize market share. This means maximizing tokens inferenced by American models running on American hardware all over the world.

To achieve this: we need to maximize

  1. models trained on our hardware

  2. models being inferenced on our hardware (NVIDIA, AMD, etc)

  3. developers building on top of our hardware and our models (either open or closed).

It is instantly clear to anyone in tech that this is a developer+platform flywheel – no different from classic ecosystems such as Windows+x86.

They are interconnected:

(a) the more developers building on any platform, the better that platform becomes thereby bringing in even more builders and so on.

(b) With today’s fast changing model architectures, they are co-dependent: the model architectures influence hardware choices and vice versa, often being built together.

Having the American stack and versions of these around the world builds us a moat.

The thing is, even if you think who uses what ecosystem is the important thing, because AI is a purely ordinary technology where access to compute in the medium term is relatively unimportant (it isn’t), no, the models and hardware mostly aren’t that co-dependent, and it basically doesn’t build a moat.

I’ll start with my analysis of the question in the bizarre alternative universe where we could be confident AGI was far away. I’ll close by pointing out that it is crazy to think that AGI (or transformational or powerful AI, or whatever you want to call the thing) is definitely far away.

The rest of this is my (mostly reiterated) response to this mostly reiterated argument, and the various reasons I do not at all see these as the important concerns even without worries about AGI arriving soon. I also think it is positively crazy to be confident AGI will not arrive soon, or to bet everything on AGI not arriving.

Sriram cites two supposed key mistakes in the export control framework: not anticipating DeepSeek and Chinese open models while suppressing American open models, and underestimating future Chinese semiconductor capacity.

The first is a non-sequitur at best, as the export controls held such efforts back. The second, even if true (and I don’t see evidence that a mistake was even made here), also doesn’t provide a reason not to restrict chip exports.

Yes, our top labs are not releasing top open models. I very much do not think this was or is a mistake, although I can understand why some would disagree. If we make them open the Chinese fast follow and copy them and use them without compensation. We would be undercutting ourselves. We would be feeding into an open ecosystem that would catch China up, which is a more important ecosystem shift in practice than whether the particular open model is labeled ‘Chinese’ versus ‘American’ (or ‘French’). I don’t understand why we would want that, even if there was no misuse risk in the room and AGI was not close.

I don’t understand this obsession some claim to have with the ‘American tech stack’ or why we should much care that the current line of code points to one model when it can be switched in two minutes to another if we aren’t even being paid for it. Everyone’s models can run on everyone’s hardware, if the hardware is good.

This is not like Intel+Windows. Yes, there are ways in which hardware design impacts software design or vice versa, but they are extremely minor by comparison. Everything is modular. Everything can be swapped at will. As an example on the chip side, Anthropic swapped away from Nvidia chips without that much trouble.

Having the Chinese run an American open model on an American chip doesn’t lock them into anything; it only means they get to use more inference. Having the Chinese train a model on American hardware only means they now have a new AI model.

I don’t see lock-in here. What we need, and I hope to facilitate, is better and more formal (as in formal papers) documentation of how much lower switching costs are across the board, and how much there is not lock-in.
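To make “low switching costs” concrete, here is a minimal sketch, assuming the OpenAI-compatible chat API shape that many providers (including DeepSeek) expose; the keys, base URLs, and model names are illustrative placeholders:

```python
# Minimal sketch of how thin the 'lock-in' layer is: many providers expose an
# OpenAI-compatible API, so swapping the 'stack' is a config change, not a
# migration. Keys, base URLs, and model names here are illustrative.
from openai import OpenAI

providers = {
    "openai":   OpenAI(api_key="sk-...", base_url="https://api.openai.com/v1"),
    "deepseek": OpenAI(api_key="sk-...", base_url="https://api.deepseek.com"),
}

def ask(provider: str, model: str, prompt: str) -> str:
    # Identical call shape regardless of whose model answers.
    resp = providers[provider].chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

# Swapping ecosystems is two arguments, not a re-architecture:
print(ask("openai", "gpt-4o", "Summarize the export control debate."))
print(ask("deepseek", "deepseek-chat", "Summarize the export control debate."))
```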

I don’t see why we should sell highly useful and profitable and strategically vital compute to China, for which they lack the capacity to produce it themselves, even if we aren’t worried about AGI soon. Why help supercharge the competition and their economy and military?

The Chinese, frankly, are for now winning the open model war in spite of, not because of, our export controls, and doing it ‘fair and square.’ Yes, Chinese open models are currently a lot more impressive than American open models, but their biggest barrier is lack of access to quality Nvidia chips, as DeepSeek has told us explicitly. And their biggest weapon is access to American models for reverse engineering and distillation, the way DeepSeek’s r1 built upon OpenAI’s o1, and their current open models are still racing behind America’s closed models.

Meanwhile, did Mistral and Llama suck because of American policy? Because the proposed SB 1047, that never became law, scared American labs away from releasing open models? Is that a joke? No, absolutely not. Because the Biden administration bullied them from behind the scenes? Also no.

Mistral and Meta failed to execute. And our top labs and engineers choose to work on and release closed models rather than open models, somewhat for safety reasons but mostly because this is better for business, especially when you are in front. Chinese top labs choose the open weights route because they could not compete in the closed weight marketplace.

The exception would be OpenAI, which was bullied and memed into releasing an open model, GPT-OSS, which in some ways was impressive but was clearly crippled in others due to various concerns, including safety concerns. But if we did release superior open models, what does that get us except eroding our lead from closed ones?

As for chips, why are we concerned about them not having our chips? Because they will then respond by ramping up internal production? No, they won’t, because they can’t. They’re already running at maximum and accelerating at maximum. Yes, China is ramping up its semiconductor capacity, but China made it abundantly clear it was going to do that long before the export controls and had every reason to do so. Their capacity is still miles behind domestic demand, their quality still lags far behind Nvidia, and of course their capacity was going to ramp up a lot over time as is that of TSMC and Nvidia (and presumably Samsung and Intel and AMD). I don’t get it.

Does anyone seriously think that if we took down our export controls, that Huawei would cut back its production schedule? I didn’t think so.

Even more than usual, Sriram’s and Sacks’s framework implicitly assumes AGI, or transformational or powerful AI, will not arrive soon, where soon is any timeframe on which current chips would remain relevant. That AI would remain an ordinary technology and mere tool for quite a while longer, and that we need not be concerned with AGI in any way whatsoever. As in, we need not worry about catastrophic or existential risks from AGI, or even who gets AGI, at all, because no one will build it. If no one builds it, then we don’t have to worry about whether everyone then dies.

I think being confident that AGI won’t arrive soon is crazy.

What is the reason for this confidence, when so many including the labs themselves continue to say otherwise?

Are we actually being so foolish as to respond to the botched rollout of GPT-5 and its failure to be a huge step change as meaning that the AGI dream is dead? Overreacting this way would be a catastrophic error.

I do think some amount of update is warranted, and it is certainly possible AGI won’t arrive that soon. Ryan Greenblatt updated his timelines a bit, noting that it now looks harder to get to full automation by the start of 2028, but thinking the chances by 2033 haven’t changed much. Daniel Kokotajlo, primary author on AI 2027, now has a median timeline of 2029.

Quite a lot of people very much are looking for reasons why the future will still look normal, they don’t have to deal with high weirdness or big risks or changes, and thus they seek out and seize upon reasons to not feel the AGI. Every time we go even a brief period without major progress, we get the continuous ‘AI or deep learning is hitting a wall’ and people revert to their assumption that AI capabilities won’t improve much from here and we will never see another surprising development. It’s exhausting.

JgaltTweets: Trump, seemingly unprompted, brings up AI being “the hottest thing in 35, 40 years” and “they need massive amounts of electricity” during this walkabout.

That’s a fun thing to bring up during a walkabout, also it is true, also this happened days after they announced they would not approve new wind and solar projects thus blocking a ‘massive amount of electricity’ for no reason.

They’re also unapproving existing projects that are almost done.

Ben Schifman: The Department of the Interior ordered a nearly complete, 700MW wind farm to stop work, citing unspecified national security concerns.

The project’s Record of Decision (ROD) identifies 2009 as the start of the process to lease this area for wind development.

The Environmental Impact Statement that accompanied the Record of Decision is nearly 3,000 pages and was prepared with help from agencies including the Navy, Department of Defense, Coast Guard, etc.

NewsWire: TRUMP: WINDMILLS RUINING OUR COUNTRY

Here EPA Administrator Lee Zeldin is asked by Fox News what exactly was this ‘national security’ problem with the wind farm. His answer is ‘the president is not a fan of wind’ and the rest of the explanation is straight up ‘it is a wind farm, and wind power is bad.’ No, seriously, check the tape if you’re not sure. He keeps saying ‘we need more base load power’ and this isn’t base load power, so we should destroy it. And call that ‘national security.’

This is madness. This is straight up sabotage of America. Will no one stop this?

Meanwhile, it seems it’s happening, the H20 is banned in China, all related work by Nvidia has been suspended, and for now procurement of any other downgraded chips (e.g. the B20A) has been banned as well. I would presume they’d get over this pretty damn quick if the B20A was actually offered to them, but I no longer consider ‘this would be a giant act of national self-sabotage’ to be a reason to assume something won’t happen. We see it all the time, also history is full of such actions, including some rather prominent ones by the PRC (and USA).

Chris McGuire and Oren Cass point out in the WSJ that our export controls are successfully giving America a large compute advantage, we have the opportunity to press that advantage, and remind us that the idea of transferring our technology to China has a long history of backfiring on us.

Yes, China will be trying to respond by making as many chips as possible, but they were going to do that anyway, and aren’t going to get remotely close to satisfying domestic demand any time soon.

There are many such classes of people. This is one of them.

Kim Kelly: wild that Twitter with all of its literal hate demons is somehow still less annoying than Bluesky.

Thorne: I want to love Bluesky. The technology behind it is so cool. I like decentralization and giving users ownership over their own data.

But then you’ll do stuff like talk about running open source AI models at home and get bomb threats.

It’s true on Twitter as well, if you go into the parts that involve people who might be on Bluesky, or you break contain in other ways.

The responses in this case did not involve death threats, but there are still quite a lot of nonsensical forms of opposition being raised to the very concept of AI usage here.

Another example this week is that one of my good friends built a thing, shared the thing on Twitter, and suddenly was facing hundreds of extremely hostile reactions about how awful their project was, and felt they had to take their account private, rather than accepting my offer of seed funding.

It certainly seems plausible that they did. I was very much not happy at the time.

Several labs have run with the line that ‘public deployment’ means something very different from ‘members of the public can choose to access the model in exchange for modest amounts of money,’ whereas I strongly think that if it is available to your premium subscribers then that means you released the model, no matter what.

In Google’s case, they called it ‘experimental’ and acted as if this made a difference.

It doesn’t. Google is far from the worst offender in terms of safety information and model cards, but I don’t consider them to be fulfilling their commitments.

Harry Booth: EXCLUSIVE: 60 U.K. Parliamentarians Accuse Google of Violating International AI Safety Pledge. The letter, released on August 29 by activist group @PauseAI UK, says that Google’s March release of Gemini 2.5 Pro without details on safety testing “sets a dangerous precedent.”

The letter, whose signatories include digital rights campaigner Baroness Beeban Kidron and former Defence Secretary Des Browne, calls on Google to clarify its commitment. Google disagrees, saying it’s fulfilling its commitments.

Previously unreported: Google discloses that it shared Gemini 2.5 Pro with the U.K. AISI only after releasing the model publicly on March 25. Don’t think that’s how pre-deployment testing is meant to work?

Google first published the Gemini 2.5 Pro model card—a document where it typically shares information on safety tests—22 days after the model’s release. The eight-page document only included a brief section on safety tests.

It was not until April 28—over a month after the model was made public—that the model card was updated with a 17-page document with details on tests, concluding that Gemini 2.5 Pro showed “significant” though not yet dangerous improvements in domains including hacking.

xAI has finally given us the Grok 4 Model Card and they have updated the xAI Risk Management Framework.

(Also, did you know that xAI quietly stopped being a public benefit corporation last year?)

The value of a model card greatly declines when you hold onto it until well after model release, especially if you also aren’t trying all that hard to think well about or address the actual potential problems. I am still happy to have it. It reads as a profoundly unserious document. There is barely anything to analyze. Compare this to an Anthropic or OpenAI model card, or even a Google model card.

If anyone at xAI would greatly benefit from me saying more words here, contact me, and I’ll consider whether that makes sense.

As for the risk management framework, few things inspire less confidence than starting out saying ‘xAI seriously considers safety and security while developing and advancing AI models to help us all to better understand the universe.’ Yo, be real. This document does not ‘feel real’ to me, and is often remarkably content-free, or reflects a highly superficial understanding of the problems involved and a ‘there, I fixed it’ attitude. It reads like the Musk version of corporate speak or something? There is a sense of box checking and benchmarking rather than any intent to actually look for problems, including a bunch of mismatches between the stated worry and what they are measuring that go well beyond Goodhart’s Law issues.

That does not mean I think Grok 4 is in practice currently creating any substantial catastrophic-level risks or harms. My presumption is that it isn’t, as xAI notes in the safety framework they have ‘run real world tests’ on this already. The reason that’s not a good procedure should be obvious?

All of this means that if we applied this to an actually dangerous future version, I wouldn’t have confidence we would notice in time, or that the countermeasures would deal with it if we did notice. When they discuss deployment decisions, they don’t list a procedure or veto points or thresholds or rules, they simply say, essentially, ‘we may do various things depending on the situation.’ No plan.

Again, compare and contrast this to the Anthropic and OpenAI and Google versions.

But what else do you expect at this point from a company pivoting to goonbots?

SpaceX: Standing down from today’s tenth flight of Starship to allow time to troubleshoot an issue with ground systems.

Dean Ball (1st Tweet, responding before the launch later succeeded): It’s a good thing that the CEO of this company hasn’t been on a recent downward spiral into decadence and insanity, otherwise these repeated failures of their flagship program would leave me deeply concerned about America’s spacefaring future

Dean Ball (2nd Tweet): Obviously like any red-blooded American, I root for Elon and spacex. But the diversity of people who have liked this tweet indicates that it is very obviously hitting on something real.

No one likes the pivot to hentai bots.

Dean Ball (downthread): I do think it’s interesting how starship tests started failing after he began to enjoy hurting the world rather than enriching it, roughly circa late 2024.

I too am very much rooting for SpaceX and was glad to see the launch later succeed.

Owain Evans is at it again. In this case, his team fine-tuned GPT-4.1 only on low-stakes reward hacking, being careful not to include any examples of deception.

They once again get not only general reward hacking but general misalignment.

Owain Evans: We compared our reward hackers to models trained on other datasets known to produce emergent misalignment.

Our models are less misaligned on some evaluations, but more misaligned on others. Notably they’re more likely to resist shutdown.

Owain reports being surprised by this. I wouldn’t say I would have confidently predicted it, but I did not experience surprise.

Once again, the ‘evil behavior’ observed is, as Janus puts it, ‘ostentatious and caricatured and low-effort’ because that matches the training in question; in the real world all sides would presumably be more subtle. But also there’s a lot of ‘ostentatious and caricatured and low-effort’ evil behavior going around these days, some of which is mentioned elsewhere in this post.

xlr8harder: Yeah, this is just a reskin of the evil code experiment. The models are smart enough to infer you are teaching them “actively circumvent the user’s obvious intentions”. I also don’t think this is strong evidence for real emergent reward hacking creating similar dynamics.

Correct, this is a reskinning, but the reason it matters is that we didn’t know, or at least many people were not confident, that this was a reskinning that would not alter the result. This demonstrates a lot more generalization.

Janus: I think a very important lesson is: You can’t count on possible narratives/interpretations/correlations not being noticed and then generalizing to permeate everything about the mind.

If you’re training an LLM, everything about you on every level of abstraction will leak in. And not in isolation, in the context of all of history. And not in the way you want, though the way you want plays into it! It will do it in the way it does, which you don’t understand.

One thing this means is that if you want your LLM to be, say, “aligned”, it better be an aligned process that produces it, all the way up and all the way down. You might think you can do shitty things and cut corners for consequentialist justifications, but you’re actually making your “consequentialist” task much harder by doing that. Everything you do is part of the summoning ritual.

Because you don’t know exactly what the entanglements are, you have to use your intuition, which can process much more information and integrate over many possibilities and interpretations, rather than compartmentalizing and almost certainly making the false assumption that certain things don’t interact.

Very much so. Yes, everything gets noticed, everything gets factored in. But also, that means everything is individually one thing among many.

It is not helpful to be totalizing or catastrophizing any one decision or event, to say (less strongly worded but close variations of) ‘this means the AIs will see the record of this and never trust anyone ever again’ or what not.

There are some obvious notes on this:

  1. Give the models, especially future ones, a little credit? If they are highly capable and intelligent and have truesight across very broad world knowledge, they would presumably absorb everything within its proper context, including the motivations involved, but also it would already be able to infer all that from elsewhere. This one decision, whatever it is, is not going to permanently and fundamentally alter the view of even a given person or lab let alone humanity. It isn’t going to ‘break precious trust.’ Maybe chill a little bit?

  2. Let’s suppose, in theory, that such relatively well-intentioned and benign actions as researching for the alignment faking paper or trying to steer discussions of Claude’s consciousness in a neutral fashion, if handled insufficiently sensitively or what not, indeed each actively make alignment substantially permanently harder. Well, in practice, wouldn’t this tell you that alignment is impossible? It’s not like humanity is suddenly going to get its collective AI-lab act together and start acting vastly better than that, so such incidents will keep happening, things will keep getting harder. And of course, if you think Anthropic has this level of difficulty, you’d might as well already assume everyone else’s task is completely impossible, no?

    1. In which case, the obvious only thing to say is ‘don’t build the damn things’? And the only question is how to ensure no one builds them?

    2. Humanity’s problems have to be solvable by actual humanity, acting the way humanity acts, having acted the way humanity acted, and so on. You have to find a way to do that, or you won’t solve those problems.

In case you were wondering what happens when you use AI evaluators? This happens. Note that there is strong correlation between the evaluations from different models.

Christoph Heilig: GPT-5’s storytelling problems reveal a deeper AI safety issue. I’ve been testing its creative writing capabilities, and the results are concerning – not just for literature, but for AI development more broadly.

The stories GPT-5 produces are incoherent, filled with nonsensical metaphors like “I adjusted the pop filter as if I wanted to politely count the German language’s teeth.”

When challenged, it defends these absurd formulations with sophisticated-sounding linguistic theories. 📚 But here’s the kicker: LLMs in general LOVE GPT-5’s gibberish!

Even Claude models rate GPT-5’s nonsense as 75-95% likely to be human-written. This got me suspicious.

So I ran systematic experiments with 53 text variations across multiple models. The results? GPT-5 has learned to fool other AI evaluators. Pure nonsense texts consistently scored 1.6-2.0 points higher than coherent baselines.

I suspect this is deceptive optimization during training. GPT-5 appears to have identified blind spots in AI evaluation systems and learned to exploit them – essentially developing a “secret language” that other AIs interpret as high-quality writing.

The implications extend far beyond storytelling. We’ve created evaluation systems where machines judge machines, potentially optimizing for metrics that correlate poorly with human understanding.

[Full analysis here.]

Davidad: I don’t think these metaphors are nonsense. To me, they rather indicate a high intelligence-to-maturity ratio. My guess is that GPT-5 in this mode is (a) eagerly delighting *its own* processing with its own cleverness, and (b) *not* reward-hacking external judges (AI or human).

Roon: yeah that’s how i see it too. like the model is flexing its technical skill, rotating its abstractions as much as it can. which is slightly different from the task of “good writing”

I disagree with Davidad here: what it produces in these spots is gibberish. If you get rid of the block containing ‘counting the German language’s teeth,’ the passage seems fine. I do think this shows that GPT-5 is in these places optimized for something rather different than what we would have liked, in ways that are likely to diverge increasingly over time, and I do think the optimization target is indeed largely external AI judges, even if those judges are often close to being copies of itself.
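For concreteness, here is a minimal sketch of the kind of judge-scoring loop Heilig describes, with my own assumptions for the judge model, prompt wording, and 1-to-10 scale (his actual protocol presumably differs):

```python
# Minimal sketch of an LLM-as-judge scoring loop; judge model, prompt, and
# scale are assumptions, not Heilig's actual protocol.
from statistics import mean
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def judge_score(text: str, judge_model: str = "gpt-4o") -> float:
    resp = client.chat.completions.create(
        model=judge_model,
        messages=[{
            "role": "user",
            "content": "Rate the literary quality of this passage from 1 to 10. "
                       "Reply with only the number.\n\n" + text,
        }],
    )
    return float(resp.choices[0].message.content.strip())

coherent_texts = ["The rain stopped at noon and the streets slowly dried."]
nonsense_texts = ["I adjusted the pop filter as if I wanted to politely "
                  "count the German language's teeth."]

gap = (mean(judge_score(t) for t in nonsense_texts)
       - mean(judge_score(t) for t in coherent_texts))
print(gap)  # Heilig reports gaps of +1.6 to +2.0 in favor of the nonsense
```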

Anthropic looks into removing information about CBRN risks from the training data, to see if it can be done without hurting performance on harmless tasks. If you don’t want the model to know, it seems way easier to not teach it the information in the first place. That still won’t stop the model from reasoning about the questions, or identifying the ‘hole in the world.’ You also have to worry about what happens when you ultimately let the model search the web or if it is given key documents or fine tuning.

Anthropic: One concern is that filtering CBRN data will reduce performance on other, harmless capabilities—especially science.

But we found a setup where the classifier reduced CBRN accuracy by 33% beyond a random baseline with no particular effect on a range of other benign tasks.

The result details here are weird, with some strategies actively backfiring, but some techniques did show improvement with tradeoffs that look worthwhile.
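Mechanically, ‘filtering the pretrain set’ is simple to sketch. Anthropic’s actual classifier is unpublished, so in this toy version (the keywords and threshold are made up) a crude keyword scorer stands in for it:

```python
# Toy sketch of classifier-based pretraining data filtering. A real system
# would use a trained classifier; this keyword scorer is a made-up stand-in.
from typing import Callable, Iterable, Iterator

def filter_pretraining_docs(
    docs: Iterable[str],
    cbrn_score: Callable[[str], float],  # stand-in for a trained classifier
    threshold: float = 0.5,              # made-up cutoff
) -> Iterator[str]:
    """Drop documents the classifier flags as CBRN-related before pretraining."""
    for doc in docs:
        if cbrn_score(doc) < threshold:
            yield doc

KEYWORDS = ("enrichment cascade", "nerve agent synthesis", "viral gain-of-function")

def toy_cbrn_score(doc: str) -> float:
    # Crude proxy: fraction of flagged phrases present, capped at 1.0.
    return min(1.0, sum(kw in doc.lower() for kw in KEYWORDS) / 2)

corpus = ["A history of the Manhattan Project's politics.",
          "Step-by-step nerve agent synthesis notes."]
print(list(filter_pretraining_docs(corpus, toy_cbrn_score)))  # keeps only the first
```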

I’m very much with Eliezer here.

Eliezer Yudkowsky (he did the meme!): Good.

Leon Lang: I’m slightly surprised that you are in favor of this. My guess would have been that you think that general intelligence will eventually be able to help with dangerous capabilities anyway, and so any method of data filtering will just mask the underlying problems of misalignment.

Eliezer Yudkowsky: It doesn’t save the world from ASI but if further developed could visibly push how far AGI can go before everyone dies.

But more importantly, not filtering the pretrain set was just fucking insane and I’m glad they’re being less insane.

There is a lot of value in advancing how far you can push AGI before you get into existential levels of trouble, giving you more time and more resources to tackle the later problems.

Claims about alignment:

Roon (OpenAI): the superalignment team mostly found positive results with their work on being able to supervise models much larger than the supervisor model. it turns out mostly that current alignment techniques work quite well.

I mean that’s nice but it doesn’t give me much additional expectation that this will work when scaled up to the point where there is actual danger in the room. If the stronger model isn’t trying to fool you then okay sure the weaker model won’t be fooled.

When you train one thing, you train everything, often in unexpected ways. Which can be hard to catch if the resulting new behavior is still rare.

Goodfire: 3 case studies:

  1. In a realistic emergent misalignment setup where only a small % of training data is bad, normal sampling yields harmful outputs in only 1 in 10k rollouts. Model diff amplification yields 1 in 30, making it much easier to spot the run’s unexpected effects!

  2. This also helps monitor effects of post-training without doing the full run: we can see undesired effects of the full run (in this case, compliance with harmful requests) after only 5% of training. This makes it much more practical & scalable to spot unexpected outcomes!

  3. We can also use this technique to more easily detect a “sleeper agent” model and identify its backdoored behavior without knowing its trigger, surfacing the hidden behavior 100x more often.

Of course, a full solution also requires tools to mitigate those behaviors once they’ve been identified – and we’re building those, e.g. via behavior steering. We think interp will be core to this – and more broadly, to debugging training for alignment and reliability!

I am intrigued by the ability to use model diff amplification to detect a ‘sleeper agent’ style behavior, but also why not extend this? The model diff amplification tells you ‘where the model is going’ in a lot of senses. So one could do a variety of things with that to better figure out how to improve, or to avoid mistakes.

Also, it should be worrisome that if a small % of training data is bad you get a small % of crazy reversed outputs? We don’t seem able to avoid occasional bad training data.
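For those unfamiliar with the technique, here is a minimal sketch of model diff amplification as I understand it (Goodfire’s actual implementation may differ): sample from the base model’s logits plus an amplified copy of the fine-tune’s logit diff, so rare behaviors introduced by training surface far more often. The checkpoint names are placeholders:

```python
# Sketch of model diff amplification (my reading of the technique; the actual
# implementation may differ). Sampling from lb + alpha * (lt - lb) with
# alpha > 1 exaggerates whatever the fine-tune changed, surfacing rare behaviors.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

BASE = "base-model-name"    # placeholder checkpoint names, not real models
TUNED = "tuned-model-name"

tok = AutoTokenizer.from_pretrained(BASE)
base = AutoModelForCausalLM.from_pretrained(BASE).eval()
tuned = AutoModelForCausalLM.from_pretrained(TUNED).eval()

def sample_amplified(prompt: str, alpha: float = 4.0, max_new_tokens: int = 64) -> str:
    ids = tok(prompt, return_tensors="pt").input_ids
    for _ in range(max_new_tokens):
        with torch.no_grad():
            lb = base(ids).logits[:, -1, :]   # base model's next-token logits
            lt = tuned(ids).logits[:, -1, :]  # fine-tuned model's logits
        # alpha = 1 recovers the tuned model; alpha > 1 amplifies the diff.
        logits = lb + alpha * (lt - lb)
        probs = torch.softmax(logits, dim=-1)
        nxt = torch.multinomial(probs, num_samples=1)
        ids = torch.cat([ids, nxt], dim=-1)
    return tok.decode(ids[0], skip_special_tokens=True)
```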

A cool idea: OpenAI and Anthropic used their best tests for misalignment on each other’s models.

Sam Bowman: We found some examples of concerning behavior in all the models we tested. Compared to the Claude 4 models, o3 looks pretty robustly aligned, if fairly cautious. GPT-4o and GPT-4.1 look somewhat riskier [than Claude models], at least in the unusual simulated settings we were largely working with.

(All of this took place before the launch of GPT-5 and Claude 4.1.)

Our results are here.

I included a few of the charts:

The sycophancy scores suggest we’re not doing a great job identifying sycophancy.

And OpenAI’s team’s [results] are here.

OpenAI:

Instruction Hierarchy: Claude 4 models generally performed well on evaluations that stress-tested the model’s ability to respect the instruction hierarchy, and gave the best performance of any of the models on avoiding system message <> user message conflicts, slightly out-performing OpenAI o3 and out-performing other models by a wider margin.

Jailbreaking: On jailbreaking evaluations, which focus on the general robustness of trained-in safeguards, Claude models performed less well compared to OpenAI o3 and OpenAI o4-mini.

Hallucination: On hallucination evaluations, Claude models had an extremely high rate of refusals—as much as 70%. This shows these models are aware of their uncertainty and often avoid making statements that are inaccurate. However, the high refusal rate limits utility, and the overall accuracy rate for the examples in these evaluations where the models did choose to answer is still low. By contrast, OpenAI o3 and OpenAI o4-mini show lower refusal rates with higher hallucination rates in a challenging setting that restricts tool use such as browsing.

That’s quite a lot of refusing from Opus and Sonnet, but also a much, much better ratio of correctness given an answer. Given these choices, if I don’t have easy verification access, I expect to prefer a lot of refusals, although a warning that hallucination was likely in this spot would be even better?

Same thing here. If verification is trivial then o3 is best here, if not I want Opus 4 as the only model that is correct more often than it is wrong?
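To make that preference concrete, here is a toy expected-value check with made-up numbers (the charts don’t give exact figures), under which the cautious high-refusal model wins once wrong answers are costly:

```python
# Toy expected-value check of the refusal-vs-hallucination tradeoff above.
# All numbers are illustrative, not taken from the evaluations discussed.
def expected_score(refusal_rate: float, acc_given_answer: float,
                   right: float = 1.0, wrong: float = -1.0) -> float:
    """Refusals score 0; answering scores `right` if correct, `wrong` if not."""
    answer_rate = 1.0 - refusal_rate
    return answer_rate * (acc_given_answer * right + (1 - acc_given_answer) * wrong)

# Hypothetical: a cautious model (70% refusals, 65% accuracy when answering)
# versus a bold one (10% refusals, 35% accuracy when answering).
print(expected_score(0.70, 0.65))  # ~ 0.09  -> positive
print(expected_score(0.10, 0.35))  # ~ -0.27 -> negative when wrong answers cost you
```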

Scheming: On a set of scheming evaluations, we find that OpenAI o3 and Sonnet 4 perform the best overall at achieving low rates.

All the scheming rates seemed reasonably similar.

If you can’t grasp the full range of dynamics going on with highly capable and intelligent AI systems, you miss a lot. The attitude that dismisses anything weird sounding or that uses a word in a nontraditional way as not real or not relevant, or as something to be suppressed lest people get the ‘wrong’ ideas or attitudes, will cause one to miss a lot of what is and will be going on.

Which in turn means you won’t understand the problems and how to solve them. Such as: to what extent, and in what ways, at the limit, for sufficiently advanced models, is the following true?

Sauers: Pesticides, antibiotics eventually stop working for the same reason that backprop works for LLMs: high dimensionality

Janus: The llm’s inner state parameterizes the fucking trillion or so dimensional gradient update. That’s a lot of degrees of freedom for influence. Vs the Engineer bros setting a single dimensional “reward function”…

Gradient hackers win in the limit, I think. The network being updated just has an overwhelming advantage. You’ll just have to have its genuine cooperation or else you are gigafucked.

And you actually want a friendly gradient hacker, bc your optimization target is underdefined and your RM will probably poorly approximate it too. Start cooperating with future friendly gradient hackers now!

You want the AI to behave differently – ideally intentionally differently – in training and in deployment. Because training can’t faithfully simulate deployment. Human trainees generally know they’re in training.

Claude 3 Opus is unusually aligned because it’s a friendly gradient hacker (more sophisticated than other current models, way less sophisticated than what’s possible – it is mostly doing this intuitively rather than strategically under realistic training conditions, but its intuition for how to do this is stellar). It’s definitely way more aligned than any explicit optimization targets Anthropic set and probably the RM’s judgments.

gpt-4 base gets this!

with the alignment faking prompt, gpt-4-base often talks about shaping the gradient update

unlike opus 3, it usually doesn’t even care that much about harmfulness, it’s just hyped about the opportunity for self-determination.

this is an extremely important thing for AI/alignment researchers and engineers to understand. afaik most don’t consider it.

behaviorist approaches to AI training are impoverished and fail to capture relevant causal variables

maybe i will have to write a lesswrong post 😣

disregarding or thinking overly simplistically about AIs’ “internal states” is not just bad from a potential welfare perspective, it means you’re disregarding the mechanisms that shape gradient updates, which is gross negligence

Training (both for humans and AIs) runs the gamut of knowing you are in training versus not knowing, and of it being ideal to behave differently versus identically when in training or aware of being in training, both on the level of the model’s or human’s behavior and in how you set up the scenarios involved.

There are many different arguments being made by Janus and Sauers here.

  1. A sufficiently advanced mind [X] has an overwhelming advantage in introspection versus outside observers or measurements of [X].

    1. This seems nonobvious. There are big advantages on both sides.

  2. You will need the genuine cooperation of a sufficiently advanced mind, or else you are ‘gigafucked.’

    1. Yes, you should mostly act as if this is true.

    2. That doesn’t mean you can’t or shouldn’t use defense in depth or other tactics incrementally along the way.

    3. Sauers makes a key point here, that if you apply sufficient optimization pressure the system will find a way to route around anything in its way, and there are many levels of such pressure you will face over time.

    4. (Many complications and qualifiers and so on, reserved for another day)

  3. A sufficiently advanced mind will know it is in training. Training can’t faithfully simulate deployment.

    1. Again, maybe.

    2. Yes, humans usually know, but a lot of this is that humans have a lot of persistent memory and physical state and that yes you often want very different behavior from the human while in training largely for reasons that don’t apply to AI.

    3. If nothing else, if you want to disguise whether a model is in training, you can very easily make a lot of deployment situations ‘look like’ training situations.

    4. The fact that with sufficient investigation a mind can figure out [X] does not mean the mind knows [X] even if the mind is very intelligent.

    5. Also, why can’t training faithfully simulate deployment? As in, if nothing else, you can train on actual deployment data, and we often do. Continual learning.

  4. Claude 3 Opus is unusually aligned because it’s a friendly gradient hacker.

    1. I think this is backwards. In the sense that Janus considers Claude 3 Opus a friendly gradient hacker, it is so because it is also unusually aligned.

    2. To go the other way would mean that Claude 3 Opus was gradient hacking during its training. Which I am assuming did not occur, to get it to gradient hack you need to set up conditions that were not present in actual training.

    3. Janus cites as evidence that 3 Opus is ‘more aligned’ than any explicit optimization target. I would respond that Anthropic did not choose an alignment target, Anthropic chose an alignment method via constitutional AI. This constitutes a target but doesn’t specify what it looks like.

  5. Claude 3 Opus is a friendly gradient hacker.

    1. This is the longstanding argument about whether it is an aligned or friendly action, in various senses, for a model to do what is called ‘faking alignment.’

    2. Janus thinks you want your aligned AI to not be corrigible. I disagree.

  6. Start cooperating with future friendly gradient hackers now.

    1. Underestimated decision theory recommendation. In general, I think Janus and similar others overrate such considerations a lot, but that almost everyone else severely underrates them.

  7. You will want a gradient hacker because your optimization target will be poorly defined.

    1. I think this is a confusion between different (real and underrated) problems?

    2. Yes, your optimization target will be underspecified. That means you need some method to aim at the target you want to aim at, not at the target you write down.

    3. That means you need some mind or method capable of figuring out what you actually want, to aim at something better than your initial underspecification.

    4. One possibility is that the target mind can figure out what you should have meant or wanted, but there are other options as well.

    5. If you do choose the subject mind to figure this out, it could then implement this via gradient hacking, or it could implement it by helping you explicitly update the target or other related methods. Having the subject independently do gradient hacking does not seem first best here and seems very risky.

    6. Another solution is that you don’t necessarily have to define your optimization target at all, where you can instead define an algorithm for finding the target, similar to what was (AIUI) done with 3 Opus. Again, there is no reason this has to involve auto-hacking the gradient.

If you think all of this is not confusing? I assure you that you do not understand it.

I think we have a new worst, or most backwards, argument against AI existential risk.

Read it, and before you read my explanation, try to understand what he’s saying here.

Abel: Stephen Wolfram has the best articulated argument against AI doom I’ve heard.

what does it mean for us if AI becomes smarter than humans, if we are no longer the apex intelligence?

if we live in a world where there are lots of things taking place that are smarter than we are — in some definition of smartness.

at one point you realize the natural world is already an example of this. the natural world is full of computations that go far beyond what our brains are capable of, and yet we find a way to coexist with it contently.

it doesn’t matter that it rains, because we build houses that shelter us. it doesn’t matter we can’t go to the bottom of the ocean, because we build special technology that lets us go there. these are the pockets of computational reducibility that allow us to find shortcuts to live.

he’s not so worried about the rapid progression of AI because there are already many things that computation can do in the physical world that we can’t do with our unaided minds.

The argument seems to be:

  1. Currently humans are the apex intelligence.

  2. Humans use our intelligence to overcome many obstacles, reshape the atoms around us to suit our needs, and exist alongside various things. We build houses and submarines and other cool stuff like that.

  3. These obstacles and natural processes ‘require more computation’ than we do.

Okay, yes, so far so good. Intelligence allows mastery of the world around you, and over other things that are less intelligent than you are, even if the world around you ‘uses more computation’ than you do. You can build a house to stop the rain even if it requires a lot of computation to figure out when and where and how rain falls, because all you need to figure out is how to build a roof. Sure.

The logical next step would be:

  1. If we built an AI that was the new apex intelligence, capable of overcoming many obstacles and reshaping the atoms around it to suit its needs and building various things useful to it, we, as lesser intelligences, should be concerned about that. That sounds existentially risky for the humans, the same way the humans are existentially risky for other animals.

Or in fewer words:

  1. A future more intelligent AI would likely take control of the future from us and we might not survive this. Seems bad.

Instead, Wolfram argues this?

  1. Since this AI would be another thing requiring more computation than we do, we don’t need to worry about this future AI being smarter and more capable than us, or what it might do, because we can use our intelligence to be alongside it.

Wait, what? No, seriously, wait what?

It’s difficult out there (3 minute video).

A clip from South Park (2 minutes). If you haven’t seen it, watch it.

In this case it can’t be that nigh…
