Author name: Kris Guyer

rocket-lab-electron-among-first-artifacts-installed-in-ca-science-center-space-gallery

Rocket Lab Electron among first artifacts installed in CA Science Center space gallery

It took the California Science Center more than three years to erect its new Samuel Oschin Air and Space Center, including stacking NASA’s space shuttle Endeavour for its launch pad-like display.

Now the big work begins.

“That’s completing the artifact installation and then installing the exhibits,” said Jeffrey Rudolph, president and CEO of the California Science Center in Los Angeles, in an interview. “Most of the exhibits are in fabrication in shops around the country and audio-visual production is underway. We’re full-on focused on exhibits now.”

On Tuesday, the science center is marking the addition of the first artifacts to the Kent Kresa Space Gallery. Named for the former chairman and CEO of Northrop Grumman and former chairman of General Motors, the completed gallery will complement the Samuel Oschin Shuttle Gallery (featuring Endeavour) with three areas devoted to the themes of “Rocket Science,” “Robots in Space,” and “Humans in Space.”

Now in place are a space shuttle main engine (SSME), a walk-through segment of a shuttle solid rocket booster, and a Rocket Lab Electron rocket.

Erecting Electron

“The biggest thing we have put in—other than the space shuttle—was the Electron, which we think is really significant,” said Rudolph. “We’re really happy to show next-generation technologies from startup companies with new launch vehicles, particularly if the company is based in California. Our goal is to inspire and motivate the next generation, and we think that showing folks that there are still a lot of innovative things going on, happening in their backyard, is a really great opportunity to inspire kids and people of all ages.”

a large yellow crane is used to lift a long, black cylindrical artifact into place inside a museum

Credit: California Science Center

Founded in New Zealand in 2006 and now based in Long Beach, Rocket Lab developed the Electron as the first carbon-composite launch vehicle intended to service the small satellite market. It was also the first orbital-class rocket to use electric pump-fed engines. Having now flown 75 successful missions (including five suborbital flights), the Electron is the third most-launched small-lift rocket in history.

Of course, “small” can be relative. At 59 feet tall (18 meters), one floor of the Kresa gallery was not enough.

“The Electron rocket is actually at the center of a staircase, a section which is open all the way from level two, where you enter, to the lower level, which is 25 feet (7.6 meters) below. The Electron is standing up in that opening and it pretty much fills the whole thing,” said Rudolph.

Rocket Lab Electron among first artifacts installed in CA Science Center space gallery Read More »

critics-scoff-after-microsoft-warns-ai-feature-can-infect-machines-and-pilfer-data

Critics scoff after Microsoft warns AI feature can infect machines and pilfer data


Integration of Copilot Actions into Windows is off by default, but for how long?

Credit: Photographer: Chona Kasinger/Bloomberg via Getty Images

Microsoft’s warning on Tuesday that an experimental AI agent integrated into Windows can infect devices and pilfer sensitive user data has set off a familiar response from security-minded critics: Why is Big Tech so intent on pushing new features before their dangerous behaviors can be fully understood and contained?

As reported Tuesday, Microsoft introduced Copilot Actions, a new set of “experimental agentic features” that, when enabled, perform “everyday tasks like organizing files, scheduling meetings, or sending emails,” and provide “an active digital collaborator that can carry out complex tasks for you to enhance efficiency and productivity.”

Hallucinations and prompt injections apply

The fanfare, however, came with a significant caveat. Microsoft recommended users enable Copilot Actions only “if you understand the security implications outlined.”

The admonition is based on known defects inherent in most large language models, including Copilot, as researchers have repeatedly demonstrated.

One common defect of LLMs causes them to provide factually erroneous and illogical answers, sometimes even to the most basic questions. This propensity for hallucinations, as the behavior has come to be called, means users can’t trust the output of Copilot, Gemini, Claude, or any other AI assistant and instead must independently confirm it.

Another common LLM landmine is the prompt injection, a class of bug that allows hackers to plant malicious instructions in websites, resumes, and emails. LLMs are programmed to follow directions so eagerly that they are unable to discern those in valid user prompts from those contained in untrusted, third-party content created by attackers. As a result, the LLMs give the attackers the same deference as users.

Both flaws can be exploited in attacks that exfiltrate sensitive data, run malicious code, and steal cryptocurrency. So far, these vulnerabilities have proved impossible for developers to prevent and, in many cases, can only be fixed using bug-specific workarounds developed once a vulnerability has been discovered.

That, in turn, led to this whopper of a disclosure in Microsoft’s post from Tuesday:

“As these capabilities are introduced, AI models still face functional limitations in terms of how they behave and occasionally may hallucinate and produce unexpected outputs,” Microsoft said. “Additionally, agentic AI applications introduce novel security risks, such as cross-prompt injection (XPIA), where malicious content embedded in UI elements or documents can override agent instructions, leading to unintended actions like data exfiltration or malware installation.”

Microsoft indicated that only experienced users should enable Copilot Actions, which is currently available only in beta versions of Windows. The company, however, didn’t describe what type of training or experience such users should have or what actions they should take to prevent their devices from being compromised. I asked Microsoft to provide these details, and the company declined.

Like “macros on Marvel superhero crack”

Some security experts questioned the value of the warnings in Tuesday’s post, comparing them to warnings Microsoft has provided for decades about the danger of using macros in Office apps. Despite the long-standing advice, macros have remained among the lowest-hanging fruit for hackers out to surreptitiously install malware on Windows machines. One reason for this is that Microsoft has made macros so central to productivity that many users can’t do without them.

“Microsoft saying ‘don’t enable macros, they’re dangerous’… has never worked well,” independent researcher Kevin Beaumont said. “This is macros on Marvel superhero crack.”

Beaumont, who is regularly hired to respond to major Windows network compromises inside enterprises, also questioned whether Microsoft will provide a means for admins to adequately restrict Copilot Actions on end-user machines or to identify machines in a network that have the feature turned on.

A Microsoft spokesperson said IT admins will be able to enable or disable an agent workspace at both account and device levels, using Intune or other MDM (Mobile Device Management) apps.

Critics voiced other concerns, including the difficulty for even experienced users to detect exploitation attacks targeting the AI agents they’re using.

“I don’t see how users are going to prevent anything of the sort they are referring to, beyond not surfing the web I guess,” researcher Guillaume Rossolini said.

Microsoft has stressed that Copilot Actions is an experimental feature that’s turned off by default. That design was likely chosen to limit its access to users with the experience required to understand its risks. Critics, however, noted that previous experimental features—Copilot, for instance—regularly become default capabilities for all users over time. Once that’s done, users who don’t trust the feature are often required to invest time developing unsupported ways to remove the features.

Sound but lofty goals

Most of Tuesday’s post focused on Microsoft’s overall strategy for securing agentic features in Windows. Goals for such features include:

  • Non-repudiation, meaning all actions and behaviors must be “observable and distinguishable from those taken by a user”
  • Agents must preserve confidentiality when they collect, aggregate, or otherwise utilize user data
  • Agents must receive user approval when accessing user data or taking actions

The goals are sound, but ultimately they depend on users reading the dialog windows that warn of the risks and require careful approval before proceeding. That, in turn, diminishes the value of the protection for many users.

“The usual caveat applies to such mechanisms that rely on users clicking through a permission prompt,” Earlence Fernandes, a University of California, San Diego professor specializing in AI security, told Ars. “Sometimes those users don’t fully understand what is going on, or they might just get habituated and click ‘yes’ all the time. At which point, the security boundary is not really a boundary.”

As demonstrated by the rash of “ClickFix” attacks, many users can be tricked into following extremely dangerous instructions. While more experienced users (including a fair number of Ars commenters) blame the victims falling for such scams, these incidents are inevitable for a host of reasons. In some cases, even careful users are fatigued or under emotional distress and slip up as a result. Other users simply lack the knowledge to make informed decisions.

Microsoft’s warning, one critic said, amounts to little more than a CYA (short for cover your ass), a legal maneuver that attempts to shield a party from liability.

“Microsoft (like the rest of the industry) has no idea how to stop prompt injection or hallucinations, which makes it fundamentally unfit for almost anything serious,” critic Reed Mideke said. “The solution? Shift liability to the user. Just like every LLM chatbot has a ‘oh by the way, if you use this for anything important be sure to verify the answers” disclaimer, never mind that you wouldn’t need the chatbot in the first place if you knew the answer.”

As Mideke indicated, most of the criticisms extend to AI offerings other companies—including Apple, Google, and Meta—are integrating into their products. Frequently, these integrations begin as optional features and eventually become default capabilities whether users want them or not.

Photo of Dan Goodin

Dan Goodin is Senior Security Editor at Ars Technica, where he oversees coverage of malware, computer espionage, botnets, hardware hacking, encryption, and passwords. In his spare time, he enjoys gardening, cooking, and following the independent music scene. Dan is based in San Francisco. Follow him at here on Mastodon and here on Bluesky. Contact him on Signal at DanArs.82.

Critics scoff after Microsoft warns AI feature can infect machines and pilfer data Read More »

faced-with-naked-man,-doordasher-demands-police-action;-they-arrest-her-for-illegal-surveillance

Faced with naked man, DoorDasher demands police action; they arrest her for illegal surveillance

“The only justice I’m getting is exposing this man and having posted that video,” she added. “And it has gone viral. Now he can live with shame and embarrassment if people have seen it.”

“I’m the victim!” she said. “Is this making sense to any-fucking-body?”

Her numerous videos attracted huge followings—anywhere from 5 million to 30 million views each—and DoorDash eventually felt the need to respond.

“DoorDash never deactivates someone for reporting [sexual assault]—full stop,” said the company.

But, it added, “posting a video of a customer in their home, and disclosing their personal details publicly, is a clear violation of our policies. That is the sole reason that this Dasher’s account was deactivated, along with the customer’s, while we investigated. We’ve also ensured that the Dasher has full access to their earnings.”

Meanwhile, the police were doing something—but not something that Henderson wanted.

The cops determined that the nude man in question “was incapacitated and unconscious on his couch due to alcohol consumption.” Being drunk and naked inside your own home apparently does not qualify as sexual assault on a delivery driver, and the police department said in a press release yesterday that “the investigation by the Oswego Police Department determined that no sexual assault occurred.”

As part of their investigation, the cops found that Henderson had filmed the man and “subsequently posted the video to social media, where it drew significant attention.” This shifted their attention to Henderson’s decision to film and upload the video without the man’s consent.

The police eventually arrested Henderson, who is now charged with two felonies: “Unlawful Surveillance in the Second Degree” and “Dissemination of an Unlawful Surveillance Image in the First Degree.” She was released after being charged, and her case will be heard by the Oswego City Court.

Henderson has stopped releasing videos on TikTok about the situation.

Faced with naked man, DoorDasher demands police action; they arrest her for illegal surveillance Read More »

google-unveils-gemini-3-ai-model-and-ai-first-ide-called-antigravity

Google unveils Gemini 3 AI model and AI-first IDE called Antigravity


Google’s flagship AI model is getting its second major upgrade this year.

Google has kicked its Gemini rollout into high gear over the past year, releasing the much-improved Gemini 2.5 family and cramming various flavors of the model into Search, Gmail, and just about everything else the company makes.

Now, Google’s increasingly unavoidable AI is getting an upgrade. Gemini 3 Pro is available in a limited form today, featuring more immersive, visual outputs and fewer lies, Google says. The company also says Gemini 3 sets a new high-water mark for vibe coding, and Google is announcing a new AI-first integrated development environment (IDE) called Antigravity, which is also available today.

The first member of the Gemini 3 family

Google says the release of Gemini 3 is yet another step toward artificial general intelligence (AGI). The new version of Google’s flagship AI model has expanded simulated reasoning abilities and shows improved understanding of text, images, and video. So far, testers like it—Google’s latest LLM is once again atop the LMArena leaderboard with an ELO score of 1,501, besting Gemini 2.5 Pro by 50 points.

Gemini 3 LMArena

Credit: Google

Factuality has been a problem for all gen AI models, but Google says Gemini 3 is a big step in the right direction, and there are myriad benchmarks to tell the story. In the 1,000-question SimpleQA Verified test, Gemini 3 scored a record 72.1 percent. Yes, that means the state-of-the-art LLM still screws up almost 30 percent of general knowledge questions, but Google says this still shows substantial progress. On the much more difficult Humanity’s Last Exam, which tests PhD-level knowledge and reasoning, Gemini set another record, scoring 37.5 percent without tool use.

Math and coding are also a focus of Gemini 3. The model set new records in MathArena Apex (23.4 percent) and WebDev Arena (1487 ELO). In the SWE-bench Verified, which tests a model’s ability to generate code, Gemini 3 hit an impressive 76.2 percent.

So there are plenty of respectable but modest benchmark improvements, but Gemini 3 also won’t make you cringe as much. Google says it has tamped down on sycophancy, a common problem in all these overly polite LLMs. Outputs from Gemini 3 Pro are reportedly more concise, with less of what you want to hear and more of what you need to hear.

You can also expect Gemini 3 Pro to produce noticeably richer outputs. Google claims Gemini’s expanded reasoning capabilities keep it on task more effectively, allowing it to take action on your behalf. For example, Gemini 3 can triage and take action on your emails, creating to-do lists, summaries, recommended replies, and handy buttons to trigger suggested actions. This differs from the current Gemini models, which would only create a text-based to-do list with similar prompts.

The model also has what Google calls a “generative interface,” which comes in the form of two experimental output modes called visual layout and dynamic view. The former is a magazine-style interface that includes lots of images in a scrollable UI. Dynamic view leverages Gemini’s coding abilities to create custom interfaces—for example, a web app that explores the life and work of Vincent Van Gogh.

There will also be a Deep Think mode for Gemini 3, but that’s not ready for prime time yet. Google says it’s being tested by a small group for later release, but you should expect big things. Deep Think mode manages 41 percent in Humanity’s Last Exam without tools. Believe it or not, that’s an impressive score.

Coding with vibes

Google has offered several ways of generating and modifying code with Gemini models, but the launch of Gemini 3 adds a new one: Google Antigravity. This is Google’s new agentic development platform—it’s essentially an IDE designed around agentic AI, and it’s available in preview today.

With Antigravity, Google promises that you (the human) can get more work done by letting intelligent agents do the legwork. Google says you should think of Antigravity as a “mission control” for creating and monitoring multiple development agents. The AI in Antigravity can operate autonomously across the editor, terminal, and browser to create and modify projects, but everything they do is relayed to the user in the form of “Artifacts.” These sub-tasks are designed to be easily verifiable so you can keep on top of what the agent is doing. Gemini will be at the core of the Antigravity experience, but it’s not just Google’s bot. Antigravity also supports Claude Sonnet 4.5 and GPT-OSS agents.

Of course, developers can still plug into the Gemini API for coding tasks. With Gemini 3, Google is adding a client-side bash tool, which lets the AI generate shell commands in its workflow. The model can access file systems and automate operations, and a server-side bash tool will help generate code in multiple languages. This feature is starting in early access, though.

AI Studio is designed to be a faster way to build something with Gemini 3. Google says Gemini 3 Pro’s strong instruction following makes it the best vibe coding model yet, allowing non-programmers to create more complex projects.

A big experiment

Google will eventually have a whole family of Gemini 3 models, but there’s just the one for now. Gemini 3 Pro is rolling out in the Gemini app, AI Studio, Vertex AI, and the API starting today as an experiment. If you want to tinker with the new model in Google’s Antigravity IDE, that’s also available for testing today on Windows, Mac, and Linux.

Gemini 3 will also launch in the Google search experience on day one. You’ll have the option to enable Gemini 3 Pro in AI Mode, where Google says it will provide more useful information about a query. The generative interface capabilities from the Gemini app will be available here as well, allowing Gemini to create tools and simulations when appropriate to answer the user’s question. Google says these generative interfaces are strongly preferred in its user testing. This feature is available today, but only for AI Pro and Ultra subscribers.

Because the Pro model is the only Gemini 3 variant available in the preview, AI Overviews isn’t getting an immediate upgrade. That will come, but for now, Overviews will only reach out to Gemini 3 Pro for especially difficult search queries—basically the kind of thing Google thinks you should have used AI Mode to do in the first place.

There’s no official timeline for releasing more Gemini 3 models or graduating the Pro variant to general availability. However, given the wide rollout of the experimental release, it probably won’t be long.

Photo of Ryan Whitwam

Ryan Whitwam is a senior technology reporter at Ars Technica, covering the ways Google, AI, and mobile technology continue to change the world. Over his 20-year career, he’s written for Android Police, ExtremeTech, Wirecutter, NY Times, and more. He has reviewed more phones than most people will ever own. You can follow him on Bluesky, where you will see photos of his dozens of mechanical keyboards.

Google unveils Gemini 3 AI model and AI-first IDE called Antigravity Read More »

5-plead-guilty-to-laptop-farm-and-id-theft-scheme-to-land-north-koreans-us-it-jobs

5 plead guilty to laptop farm and ID theft scheme to land North Koreans US IT jobs

Each defendant also helped the IT workers pass employer vetting procedures. Travis and Salazar, for example, appeared for drug testing on behalf of the workers.

Travis, an active-duty member of the US Army at the time, received at least $51,397 for his participation in the scheme. Phagnasay and Salazar earned at least $3,450 and $4,500, respectively. In all, the fraudulent jobs earned roughly $1.28 million in salary payments from the defrauded US companies, the vast majority of which were sent to the IT workers overseas.

The fifth defendant, Ukrainian national Oleksandr Didenko, pleaded guilty to one count of aggravated identity theft, in addition to wire fraud. He admitted to participating in a “years-long scheme that stole the identities of US citizens and sold them to overseas IT workers, including North Korean IT workers, so they could fraudulently gain employment at 40 US companies.” Didenko received hundreds of thousands of dollars from victim companies who hired the fraudulent applicants. As part of the plea agreement, Didenko is forfeiting more than $1.4 million, including more than $570,000 in fiat and virtual currency seized from him and his co-conspirators.

In 2022, the US Treasury Department said that the Democratic People’s Republic of Korea employs thousands of skilled IT workers around the world to generate revenue for the country’s weapons of mass destruction and ballistic missile programs.

“In many cases, DPRK IT workers represent themselves as US-based and/or non-North Korean teleworkers,” Treasury Department officials wrote. “The workers may further obfuscate their identities and/or location by sub-contracting work to non North Koreans. Although DPRK IT workers normally engage in IT work distinct from malicious cyber activity, they have used the privileged access gained as contractors to enable the DPRK’s malicious cyber intrusions. Additionally, there are likely instances where workers are subjected to forced labor.”

Other US government advisories posted in 2023 and 2024 concerning similar programs have been removed with no explanation.

In Friday’s release, the Justice Department also said it’s seeking the forfeiture of more than $15 million worth of USDT, a cryptocurrency stablecoin pegged to the US dollar, that the FBI seized in March from North APT38 actors. The seized funds were derived from four heists APT38 carried out, two in July 2023 against virtual currency payment processors in Estonia and Panama and two in November 2023 thefts from exchanges in Panama and Seychelles.

Justice Department attempts to locate, seize, and forfeit all the stolen assets remain ongoing because APT38 has laundered them through virtual currency bridges, mixers, exchanges, and over-the-counter traders, the Justice Department said.

5 plead guilty to laptop farm and ID theft scheme to land North Koreans US IT jobs Read More »

report-claims-that-apple-has-yet-again-put-the-mac-pro-“on-the-back-burner”

Report claims that Apple has yet again put the Mac Pro “on the back burner”

Do we still need a Mac Pro, though?

Regardless of what Apple does with the Mac Pro, the desktop makes less sense than ever in the Apple Silicon era. Part of the appeal of the early 2010s and the 2019 Mac Pro towers was their internal expandability, particularly with respect to storage, graphics cards, and RAM. But while the Apple Silicon Mac Pro does include six internal PCI Express slots, it supports neither RAM upgrades nor third-party GPUs from Nvidia, AMD, or Intel. Thunderbolt 5’s 120 Gbps transfer speeds are also more than fast enough to support high-speed external storage devices.

That leaves even the most powerful of power users with few practical reasons to prefer a $7,000 Mac Pro tower to a $4,000 Mac Studio. And that would be true even if both desktops used the same chip—currently, the M3 Ultra Studio comes with more and newer CPU cores, newer GPU cores, and 32GB more RAM for that price, making the comparison even more lopsided.

Mac Pro aside, the Mac should have a pretty active 2026. Every laptop other than the entry-level 14-inch MacBook Pro should get an Apple M5 upgrade, with Pro and Max chips coming for the higher-end Pros. Those chips, plus the M5 Ultra, would give Apple all the ingredients it would need to refresh the iMac, Mac mini, and Mac Studio lineups as well.

Insistent rumors also indicate that Apple will be introducing a new, lower-cost MacBook model with an iPhone-class chip inside, a device that seems made to replace the 2020 M1 MacBook Air that Apple has continued to sell via Walmart for between $600 and $650. It remains to be seen whether this new MacBook would remain a Walmart exclusive or if Apple also plans to offer the laptop through other retailers and its own store.

Report claims that Apple has yet again put the Mac Pro “on the back burner” Read More »

benoit-blanc-takes-on-a-“perfectly-impossible-crime”-in-wake-up-dead-man-trailer

Benoit Blanc takes on a “perfectly impossible crime” in Wake Up Dead Man trailer

Wake Up Dead Man garnered early rave reviews after screening at the Toronto International Film Festival (TIFF) in September, and an initial teaser released shortly after showcased Blanc puzzling over a classic locked-room mystery. The new trailer builds out some of the details without giving too much away.

Rev. Jud is the prime suspect in Wicks’ murder, since he loathed the man and hence had a clear motive, but he insists to Blanc that he is innocent. We learn that Wicks was wealthy, and this being a classic whodunit, we know the rest of the characters no doubt have their deep, dark secrets—one of which could have led to murder. And Johnson brings the humor, too, as Blanc, the groundskeeper, and Martha discover the desecration of Wicks’ tombstone with scrawled graffiti penises. “Makes me sick, these kids painting rocket ships all over his sacred resting place,” the unworldly Martha says.

Wake Up Dead Man will be in select theaters on November 26, 2025, and will start streaming on Netflix on December 12, 2o25. We can’t wait.

Benoit Blanc takes on a “perfectly impossible crime” in Wake Up Dead Man trailer Read More »

on-writing-#2

On Writing #2

In honor of my dropping by Inkhaven at Lighthaven in Berkeley this week, I figured it was time for another writing roundup. You can find #1 here, from March 2025.

I’ll be there from the 17th (the day I am publishing this) until the morning of Saturday the 22nd. I am happy to meet people, including for things not directly about writing.

  1. Table of Contents.

  2. How I Use AI For Writing These Days.

  3. Influencing Influence.

  4. Size Matters.

  5. Time To Write A Shorter One.

  6. A Useful Tool.

  7. A Maligned Tool.

  8. Neglected Topics.

  9. The Humanities Don’t Seem Relevant To Writing About Future Humanity?

  10. Writing Every Day.

  11. Writing As Deep Work.

  12. Most Of Your Audience Is Secondhand.

  13. That’s Funny.

  14. Fiction Writing Advice.

  15. Just Say The Thing.

  16. Cracking the Paywall.

How have I been using AI in my writing?

Directly? With the writing itself? Remarkably little. Almost none.

I am aware that this is not optimal. But at current capability levels, with the prompts and tools I know about, in the context of my writing, AI has consistently proven to have terrible taste and to make awful suggestions, and also to be rather confident about them. This has proven sufficiently annoying that I haven’t found it worth checking with the AIs.

I also worry about AI influence pushing me towards generic slop, pushing me to sounding more like the AIs, and rounding off the edges of things, since every AI I’ve tried this with keeps trying to do all that.

I am sure it does not help that my writing style is very unusual, and basically not in the training data aside from things written by actual me, as far as I can tell.

Sometimes I will quote LLM responses in my writing, always clearly labeled, when it seems useful to point to this kind of ‘social proof’ or sanity check.

The other exception is that if you ask the AI to look for outright errors, especially things like spelling and grammar, it won’t catch everything, but when it does catch something it is usually right. When you ask it to spot errors of fact, it’s not as reliable, but it’s good enough to check the list. I should be making a point of always doing that.

I did the ‘check for errors and other considerations’ thing on this piece in particular with both Sonnet and 5.1-Thinking. This did improve the post but it’s not obvious it improved it enough to be worth the time.

I will also sometimes ask it about a particular line or argument I’m considering, to see if it buys it, but only when what I care about is a typical reaction.

If I was devoting more time to refining and editing, and cared more about marginal improvements there, that would open up more use cases, but I don’t think that’s the right use of time for me on current margins versus training on more data or doing more chain of thought.

Indirectly? I use it a lot more there, and again I could be doing more.

There are some specific things:

  1. I have a vibe coded Chrome extension that saves me a bunch of trouble, that could be improved a lot with more work. It does things like generate the Table of Contents, crosspost to WordPress, auto-populate many links and quickly edit quotes to fix people’s indifference to things like capitalization.

  2. I have a GPT called Zvi Archivist that I use to search through my past writing, to check if and when I’ve already covered something and what I’ve said about it.

  3. I have a transcriber for converting images to text because all the websites I know about that offer to do this for you are basically broken due to gating. This works.

Then there’s things that are the same as what everyone does all the time. I do a lot of fact checking, sanity checking, Fermi estimation, tracking down information or sources, asking for explanations, questioning papers for the things I care about. Using the AI assistant in its classic sense. All of that is a big help and I notice my activation requirement to do this is higher than it should be.

I want this to be true so I’m worried I can’t be objective, but it seems true to me?

Janus: i think that it’s almost always a bad idea to attempt to grow as an influencer on purpose.

you can believe that it would be good if you were to grow, and still you shouldn’t optimize for it.

the only way it goes well is if it happens while you optimize for other things.

More precisely than “you shouldn’t on purpose” what I’m saying is you shouldn’t be spending significant units of optimization on this goal and performing actions you wouldn’t otherwise for this purpose

I am confident that if you optimize primarily for influence, that’s full audience capture, slopification and so on, and you’ve de facto sold your soul. You can in theory turn around and then use that influence to accomplish something worthwhile, but statistically speaking you won’t do that.

Janus: Name a single account that explicitly optimizes for being a bigger influencer / “tries to grow” (instead of just happening as a side effect) and that does more good than harm to the ecosystem and generally has good vibes and interesting content

You probably can’t!

actually, https://x.com/AISafetyMemes is a contender

but i know they’re VERY controversial and I do think they’re playing with fire

i do consider them net positive but this is mostly bc they sometimes have very good taste and maybe cancel out the collateral damage

but WOULD NOT RECOMMEND almost anyone trying this, lol

AISafetyMemes is definitely an example of flying dangerously close to the sun on this, but keeping enough focus and having enough taste to maybe be getting away with it. It’s unclear that the net sign of impact there is positive, there are some very good posts but also some reasons to worry.

No one reads the blog posts, they’re too long, so might as well make them longer?

Visakan Veerasamy: An idea I’ve been toying with and discussed with a couple of friends is the idea that blog posts could and probably should get much longer now that fewer people are reading them.

One of the difficult things about writing a good essay is figuring out what to leave out so it is more manageable for readers.

But on a blog where there is no expectation that anybody reads it, you do not have to leave anything out.

My guess is this is going to end up being a barbell situation like so many other things. If you cut it down, you want to cut it down as much as possible. If you’re going long, then on the margin you’re better off throwing everything in.

I highlight this exactly because it seems backwards to me. I notice that my experience is very much the opposite – when I want to write a good short piece it is MUCH more work per token, and often more total work.

Timothy Lee: I think a big reason that writing a book is such a miserable experience is that the time to write a good piece is more-than-linear in the number of words. A good 2,000-word piece is a lot more than 4x the work of a good 500-word piece.

I assume this continues for longer pieces and a good 100,000 book is a lot more than 50x the work of a good 2,000-word article. Most authors deal with this by cutting corners and turning in books that aren’t very good. And then there’s Robert Caro.

Josh You: I think by “good 2000 word piece” Tim means “a 2000 word piece that has been edited down from a much longer first draft”

Even then. Yes, a tight longer piece requires more structure and planning, but the times I write those 500-800 word pieces it takes forever, because you really do struggle over every word as you try to pack everything into the tiniest possible space.

Writing a 100,000 word book at the precision level of an 800 word thinkpiece would take forever, but also I presume it almost never happens. If it does, that better be your masterpiece or I don’t see why you’d do it.

Dwarkesh Patel is using the Smart Composer Plugin for Obsidian, which he says is basically Cursor for writing, and loves it. Sounds great conditional on using Obsidian, but it is not being actively maintained.

Eric Raymond joins ‘the em-dash debate’ on the side of the em-dash.

Eric Raymond (yes that one): My wacky theory about the em-dash debate:

Pro writers use em-dashes a lot because many of them, possibly without consciously realizing it, have become elocutionary punctuationists.

That is, they’ve fallen into the habit of using punctuation not as grammatical phrase structure markers but as indicators of pauses of varying length in the flow of speech.

The most visible difference you see in people who write in this style that their usage of commas becomes somewhat more fluid — that’s the marker for the shortest pause. But they also reach for less commonly used punctuation marks as indicators of longer pauses of varying length.

Em dash is about the second or third longest pause, only an ellipsis or end-of-sentence period being clearly longer.

Historical note: punctuation marks originally evolved as pause or breathing markers in manuscripts to aid recitation. In the 19th century, after silent reading had become normal, they were reinterpreted by grammarians as phrase structure markers and usage rules became much more rigid.

Really capable writers have been quietly rediscovering elocutionary punctuation ever since.

RETVRN!

I too have been increasingly using punctuation, especially commas, to indicate pauses. I still don’t use em dashes, partly because I almost never want that exact length and style of a pause for whatever reason, and also because my instinct is that you’re trying to do both ‘be technically correct’ and also ‘evoke what you want’ and my brain thinks of the em-dash as technically incorrect.

That’s all true and I never used em-dashes before but who are we kidding, the best reason not to use em-dashes is that people will think you’re using AI. I don’t love that dynamic either, but do you actually want to die on that hill?

Tyler Cowen lists some reasons why he does not cover various topics much. The list resonates with me quite a bit.

  1. I feel that writing about the topic will make me stupider.

  2. I believe that you reading more about the topic will make you stupider.

  3. I believe that performative outrage usually brings low or negative returns. Matt Yglesias has had some good writing on this lately.

  4. I don’t have anything to add on the topic. Abortion and the Middle East would be two examples here.

  5. Sometimes I have good inside information on a topic, but I cannot reveal it, not even without attribution. And I don’t want to write something stupider than my best understanding of the topic.

  6. I just don’t feel like it.

  7. On a few topics I feel it is Alex’s province.

I don’t have an Alex, instead I have decided on some forms of triage that are simply ‘I do not have the time to look into this and I will let it be someone else’s department.’

Otherwise yes, all of these are highly relevant.

Insider information is tough, and I am very careful about not revealing things I am not supposed to reveal, but this rarely outright stops me. If nothing else, you can usually get net smarter via negativa, where you silently avoid saying false things, including by using careful qualifiers on statements.

One big thing perhaps missing from Tyler’s list is that I avoid certain topics where my statements would potentially interfere with my ability to productively discuss other topics. If you are going to make enemies, or give people reasons to attack you or dismiss you, only do that on purpose. One could also file this under making you and others stupider. Similarly, there are things that I need to not think about – I try to avoid thinking about trading for this reason.

A minor thing is that I’d love to be able to talk more about gaming, and other topics dear to my heart, but that consistently drive people away permanently when I do that. So it’s just not worth it. If the extra posts simply had no impact, I’d totally do it, but as is I’d be better off writing the post and then not hitting publish. Sad. Whereas Tyler has made it very clear he’s going to post things most readers don’t care about, when he feels like doing so, and that’s part of the price of admission.

If you want to write or think about the future, maybe don’t study the humanities?

Startup Archive: Palmer Luckey explains why science fiction is a great place to look for ideas

“One of the things that I’ve realized in my career is that nothing I ever come up with will be new. I’ve literally never come up with an idea that a science fiction author has not come up with before.”

Dr. Julie Gurner: Funny how valuable those English majors and writers truly are, given how much liberal arts has been put down. Why philosophy, creativity and hard tech skills make such fantastic bedfellows. Span of vision wins.

Orthonormalist: Heinlein was an aeronautical engineer.

Asimov was a biochemistry professor.

Arthur Clarke was a radio operator who got a physics degree.

Ray Bradbury never went to college (but did go straight to being a writer)

I quote this because ‘study the humanities’ is a natural thing to say to someone looking to write or think about the future, and yet I agree that when I look at the list of people whose thinking about the future has influenced me, I notice essentially none of them have studied the humanities.

Alan Jacobs has a very different writing pattern. Rather than write every day, he waits until the words are ready, so he’ll work every day but often that means outlines or index card reordering or just sitting in his chair and thinking, even for weeks at a time. This is alien to me. If I need to figure out what to write, I start writing, see what it looks like, maybe delete it and try again, maybe procrastinate by working on a different thing.

Neal Stephenson explains that for him writing is Deep Work, requiring long blocks of reliably uninterrupted time bunched together, writing novels is the best thing he does, and that’s why he doesn’t go to conferences or answer your email. Fair enough.

I’ve found ways to not be like that. I deal with context shifts and interruptions all the time and it is fine, indeed when dealing with difficult tasks I almost require them. That’s a lot of how I can be so productive. But the one time I wrote something plausibly like a book, the Immoral Mazes sequence, I did spend a week alone in my apartment doing nothing else. And I haven’t figured out how to write a novel, or almost any fiction at all.

Also, it’s rather sad if it is true that Neal Stephenson only gets a middle class life out of writing so many fantastic and popular books, and can’t afford anyone to answer his email. That makes writing seem like an even rougher business than I expected. Although soon AI can perhaps do it for him?

Patrick McKenzie highlights an insight from Alex Danco, which is that most of the effective audience of any successful post is not people who read the post, but people who are told about the post by someone who did read it. Patrick notes this likely also applies to formal writing, I’d note it seems to definitely apply to most books.

Relatedly, I have in the past postulated a virtual four-level model of flow of ideas, where each level can understand the level above it, and then rephrase and present it to the level below.

So if you are Level 1, either in general or in an area, you can formulate fully new ideas. If you are Level 2, you can understand what the Level 1s say, look for consensus or combine what they say, riff on it and then communicate that to those who are up to Level 3, who can then fully communicate to the public who end up typically around at Level 4.

Then the public will communicate a simplified and garbled version to each other.

You can be Level 1 and then try to ‘put on your Level 2 or 3 hat’ to write a dumber, simpler version to a broader audience, but it is very hard to simultaneously do that and also communicate the actual concepts to other Level 1s.

These all then interact, but if you go viral with anything longer than a Tweet, you inevitably are going to primarily end up with a message primarily communicated via (in context) Level 3 and Level 4 people communicating to other Level 3-4 people.

At that point, and any time you go truly viral or your communication is ‘successful,’ you run into the You Get About Five Words problem.

My response to this means that at this point I essentially never go all that directly viral. I have a very narrow range of views, where even the top posts never do 100% better than typical posts, and the least popular posts – which are when I talk about AI alignment or policy on their own – will do at worst 30% less than typical.

The way the ideas go viral is someone quotes, runs with or repackages them. A lot of the impact comes from the right statement reaching the right person.

I presume that would work differently if I was working with mediums that work on virality, such as YouTube or TikTok, but my content seems like a poor fit for them, and when I do somewhat ‘go viral’ in such places it is rarely content I care about spreading. Perhaps I am making a mistake by not branching out. But on Twitter I still almost never go viral, as it seems my speciality is small TAM (total available market) Tweets.

Never have a character try to be funny, the character themselves should have no idea.

I think this is directionally correct but goes too far, for the same reasons that you, in your real life, will often try to be funny, and sometimes it will work. The trick is they have to be trying to be funny in a way that makes sense for the character, in context, for those around them, not trying to be funny to the viewer.

I notice that in general I almost never ‘try to be funny,’ not exactly. I simply say things because they would be funny, and to say things in the funniest way possible, because why not. A lot of my favorite people seem to act similarly.

Lydia Davis offers her top ten recommendations for good (fiction?) writing: Keep notes, including sentences out of context, work from your own interest, be mostly self-taught, read and revise the notes constantly, grow stories or develop poems out of those notes, learn techniques from great works and read the best writers across time.

Orson Scott Card explains that you don’t exhaust the reader by having too much tension in your book, you exhaust them by having long stretches without tension. The tension keeps us reading.

Dwarkesh Patel: Unreasonably effective writing advice:

“What are you trying to say here?

Okay, just write that.”

I’ve (separately) started doing this more.

I try to make sure that it’s very easy to find the central point, the thing I’m most trying to say, and hard to miss it.

Patrick McKenzie: Cosigned, and surprisingly effective with good writers in addition to ones who more obviously need the prompting.

Writing an artifact attaches you to the structure of it while simultaneously subsuming you in the topic. The second is really good for good work; the first, less so.

One thing that I tried, with very limited success, to get people to do is to be less attached to words on a page. Writing an essay? Write two very different takes on it; different diction, different voice, maybe even different argument. Then pick the one which speaks to you.

Edit *thatrather than trying to line edit the loser towards greatness.

There is something which people learn, partially from school and partially from work experience, which causes them to write as if they were charged for every word which goes down on the page.

Words are free! They belong in a vast mindscape! You can claw more from the aether!

I think people *mightoperationalize better habits after LLMs train them that throwing away a paragraph is basically costless.

Jason Cohen: Yeah this works all the time.

Also when getting someone to explain their product, company, customer, why to work for them, etc..

So funny how it jogs them out of their own way!

BasedBigTech: An excellent Group PM reviewed my doc with me. He said “what does this mean?” and I told him.

“Then why didn’t you write that?”

Kevin Kelly: At Whole Earth Review people would send us book reviews with a cover letter explaining why we should run their book review. We’d usually toss the review and print their much shorter cover letter as the review which was much clearer and succinct.

Daniel Eth: It’s crazy how well just straight up asking people that gets them to say the thing they should write down

Why does it work?

The answer is that writing is doing multiple tasks.

Only one of them is ‘tell you what all this means.’

You have to do some combination of things such as justify that, explain it, motivate it, provide details, teach your methods and reasoning, perform reporting, be entertaining and so on.

Also, how did you know what you meant to say until you wrote the damn thing?

You still usually should find a way to loudly say what it all means, somewhere in there.

But this creates the opportunity for the hack.

If I hand you a ten-page paper, and you ask ‘what are you trying to say?’ then I have entered into evidence that I have Done the Work and Written the Report.

Now I can skip the justifications, details and context, and Say The Thing.

The point of a reference post is sometimes to give people the opportunity to learn.

The point of a reference post can also be to exist and then not be clicked on. It varies.

This is closely related to the phenomenon where often a movie or show will have a scene that logically and structurally has to exist, but which you wish you didn’t have to actually watch. In theory you could hold up a card that said ‘Scene in which Alice goes to the bank, acts nervous and get the money’ or whatever.

Probably they should do a graceful version of something like that more often, or even interactive versions where you can easily expand or condense various scenes. There’s something there.

Similarly, with blog posts (or books) there are passages that are written or quoted knowing many or most people will skip them, but that have to be there.

Aella teaches us how to make readers pay up to get behind a Paywall. Explain why you are the One Who Knows some valuable thing, whereas others including your dear reader are bad at this and need your help. Then actually provide value both outside and inside of the paywall, ideally because the early free steps are useful even without the payoff you’re selling.

I am thankful that I can write without worrying about maximizing such things, while I also recognize that I’m giving up a lot of audience share not optimizing for doing similar things on the non-paywall side.

Discussion about this post

On Writing #2 Read More »

dogs-came-in-a-wide-range-of-sizes-and-shapes-long-before-modern-breeds

Dogs came in a wide range of sizes and shapes long before modern breeds

“The concept of ‘breed’ is very recent and does not apply to the archaeological record,” Evin said. People have, of course, been breeding dogs for particular traits for as long as we’ve had dogs, and tiny lap dogs existed even in ancient Rome. However, it’s unlikely that a Neolithic herder would have described his dog as being a distinct “breed” from his neighbor’s hunting partner, even if they looked quite different. Which, apparently, they did.

A big yellow dog, a little gray dog, and a little white dog

Dogs had about half of their modern diversity (at least in skull shapes and sizes) by the Neolithic. Credit: Kiona Smith

Bones only tell part of the story

“We know from genetic models that domestication should have started during the late Pleistocene,” Evin told Ars. A 2021 study suggested that domestic dogs have been a separate species from wolves for more than 23,000 years. But it took a while for differences to build up.

Evin and her colleagues had access to 17 canine skulls that ranged from 12,700 to 50,000 years old—prior to the end of the ice age—and they all looked enough like modern wolves that, as Evin put it, “for now, we have no evidence to suggest that any of the wolf-like skulls did not belong to wolves or looked different from them.” In other words, if you’re just looking at the skull, it’s hard to tell the earliest dogs from wild wolves.

We have no way to know, of course, what the living dog might have looked like. It’s worth mentioning that Evin and her colleagues found a modern Saint Bernard’s skull that, according to their statistical analysis, looked more wolf-like than dog-like. But even if it’s not offering you a brandy keg, there’s no mistaking a live Saint Bernard, with its droopy jowls and floppy ears, for a wolf.

“Skull shape tells us a lot about function and evolutionary history, but it represents only one aspect of the animal’s appearance. This means that two dogs with very similar skulls could have looked quite different in life,” Evin told Ars. “It’s an important reminder that the archaeological record captures just part of the biological and cultural story.”

And with only bones—and sparse ones, at that—to go on, we may be missing some of the early chapters of dogs’ biological and cultural story. Domestication tends to select the friendliest animals to produce the next generation, and apparently that comes with a particular set of evolutionary side effects, whether you’re studying wolves, foxes, cattle, or pigs. Spots, floppy ears, and curved tails all seem to be part of the genetic package that comes with inter-species friendliness. But none of those traits is visible in the skull.

Dogs came in a wide range of sizes and shapes long before modern breeds Read More »

after-years-of-saying-no,-tesla-reportedly-adding-apple-carplay-to-its-cars

After years of saying no, Tesla reportedly adding Apple CarPlay to its cars

Apple CarPlay, the interface that lets you cast your phone to your car’s infotainment screen, may finally be coming to Tesla’s electric vehicles. CarPlay is nearly a decade old at this point, and it has become so popular that almost half of car buyers have said they won’t consider a car without the feature, and the overwhelming majority of automakers have included CarPlay in their vehicles.

Until now, that hasn’t included Tesla. CEO Elon Musk doesn’t appear to have opined on the omission, though he has frequently criticized Apple. In the past, Musk has said the goal of Tesla infotainment is to be “the most amount of fun you can have in a car.” Tesla has regularly added purile features like fart noises to the system, and it has also integrated video games that drivers can play while they charge.

For customers who want to stream music, Tesla has instead offered Spotify, Tidal, and even Apple Music apps.

But Tesla is no longer riding high—its sales are crashing, and its market share is shrinking around the world as car buyers tire of a stale and outdated lineup of essentially two models at a time when competition has never been higher from legacy and startup automakers.

According to Bloomberg, which cites “people with knowledge of the matter,” the feature could be added within months if it isn’t cancelled internally.

Tesla is not the only automaker to reject Apple CarPlay. The startup Lucid took some time to add the feature to its high-end EVs, and Rivian still refuses to consider including the system, claiming that a third-party system would degrade the user experience. And of course, General Motors famously removed CarPlay from its new EVs, and it may do the same to its other vehicles in the future.

After years of saying no, Tesla reportedly adding Apple CarPlay to its cars Read More »

google-will-let-android-power-users-bypass-upcoming-sideloading-restrictions

Google will let Android power users bypass upcoming sideloading restrictions

Google recently decided that the freedom afforded by Android was a bit too much and announced developer verification, a system that will require developers outside the Google Play platform to register with Google. Users and developers didn’t accept Google’s rationale and have been complaining loudly. As Google begins early access testing, it has conceded that “experienced users” should have an escape hatch.

According to Google, online scam and malware campaigns are getting more aggressive, and there’s real harm being done in spite of the platform’s sideloading scare screens. Google says it’s common for scammers to use social engineering to create a false sense of urgency, prompting users to bypass Android’s built-in protections to install malicious apps.

Google’s solution to this problem, as announced several months ago, is to force everyone making apps to verify their identities. Unverified apps won’t install on any Google-certified device once verification rolls out. Without this, the company claims malware creators can endlessly create new apps to scam people. However, the centralized nature of verification threatened to introduce numerous headaches into a process that used to be straightforward for power users.

This isn’t the first time Google has had to pull back on its plans. Each time the company releases a new tidbit about verification, it compromises a little more. Previously, it confirmed that a free verification option would be available for hobbyists and students who wanted to install apps on a small number of devices. It also conceded that installation over ADB via a connected computer would still be allowed.

Now, Google has had to acknowledge that its plans for verification are causing major backlash among developers and people who know what an APK is. So there will be an alternative, but we don’t know how it will work just yet.

How high is your risk tolerance?

Google’s latest verification update explains that the company has received a lot of feedback from users and developers who want to be able to sideload without worrying about verification status. For those with “higher risk tolerance,” Google is exploring ways to make that happen. This is a partial victory for power users, but the nature of Google’s “advanced flow” for sideloading is murky.

Google will let Android power users bypass upcoming sideloading restrictions Read More »

tracking-the-winds-that-have-turned-mars-into-a-planet-of-dust

Tracking the winds that have turned Mars into a planet of dust

Where does all this dust come from? It’s thought to be the result of erosion caused by the winds. Because the Martian atmosphere is so thin, dust particles can be difficult to move, but larger particles can become more easily airborne if winds are turbulent enough, later taking smaller dust motes with them. Perseverance and previous Mars rovers have mostly witnessed wind vortices that were associated with either dust devils or convection, during which warm air rises.

CaSSIS and HRSC data showed that most dust devils occur in the northern hemisphere of Mars, mainly in the Amazonis and Elysium Planitiae, with Amazonis Planitia being a hotspot. They can be kicked up by winds on both rough and smooth terrain, but they tend to spread farther in the southern hemisphere, with some traveling across nearly that entire half of the planet. Seasonal occurrence of dust devils is highest during the southern summer, while they are almost nonexistent during the late northern fall.

Martian dust devils tend to peak between mid-morning and midafternoon, though they can occur from early morning through late afternoon. They also migrate toward the Martian north pole in the northern summer and toward the south pole during the southern summer. Southern dust devils tend to move faster than those in the northern hemisphere. Movement determined by winds can be as fast as 44 meters per second (about 98 mph), which is much faster than dust devils move on Earth.

Weathering the storm

Dust devils have also been found to accelerate extremely rapidly on the red planet. These fierce storms are associated with winds that travel along with them but do not form a vortex, known as nonvortical winds. It only takes a few seconds for these winds to accelerate to velocities high enough that they’re able to lift dust particles from the ground and transfer them to the atmosphere. It is not only dust devils that do this—the team found that even nonvortical winds lift large amounts of dust particles on their own, more than was previously thought, and create a dusty haze in the atmosphere.

Tracking the winds that have turned Mars into a planet of dust Read More »