Author name: Paul Patrick


Study of Lyft rideshare data confirms minorities get more tickets

Then, the researchers got data on all the speeding tickets issued in Florida and identified the ones belonging to Lyft drivers.

The data showed one thing clearly: Lyft has incentives for its drivers to avoid traffic violations, and they work. “Compared with the general population of motorists, our sample is less prone to speed, especially more than 10 mph over the limit,” the team writes. “As a result, our analysis examines only 1,423 citations for speeding.” While that’s fewer than you might expect, it’s still more than enough to do statistics on the frequency of these citations.

Animus

There are a lot of confounding factors that might influence whether someone gets pulled over and cited, like their gender, the make of their car, and so on. The researchers handle this in two ways. For one analysis, they themselves chose the set of factors to include as potential confounding influences. For the second, they relied on machine learning to select the factors to be considered. Both approaches led to similar results.
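As a toy illustration (fabricated numbers, not the study's data or either of its actual methods), the simplest form of confounder adjustment is stratification: compare citation rates within groups that share the same exposure, and see whether the overall gap survives.

```python
# Fabricated example: group B appears to be cited less often overall,
# but only because it drives at night less often. Within each stratum
# (night vs. day), the citation rates are identical.
records = (
    [("A", True, True)] * 30 + [("A", True, False)] * 70 +     # A, night
    [("A", False, True)] * 5 + [("A", False, False)] * 95 +    # A, day
    [("B", True, True)] * 30 + [("B", True, False)] * 70 +     # B, night
    [("B", False, True)] * 15 + [("B", False, False)] * 285    # B, day
)

def rate(rows):
    """Fraction of rows that ended in a citation."""
    return sum(cited for _, _, cited in rows) / len(rows)

overall = {g: rate([r for r in records if r[0] == g]) for g in ("A", "B")}
by_stratum = {
    (g, night): rate([r for r in records if r[0] == g and r[1] == night])
    for g in ("A", "B") for night in (True, False)
}
# overall["A"] = 0.175 vs overall["B"] = 0.1125, yet every stratum matches:
# the apparent gap was driven by exposure, not by group.
```

Real analyses control many factors at once, via regression with hand-picked covariates or machine-learning-selected ones, but the underlying logic is the same.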

The results clearly reproduced a similar pattern to earlier research. Minority Lyft drivers were about 30 percent more likely to be pulled over and cited for speeding (the two analyses produced results of 24 and 33 percent). Once cited, they also tended to receive higher fines: 23 or 34 percent more than white drivers, depending on the analysis.

The remaining question was why—the police could potentially be acting out of bias, or they could be attempting to deter minority drivers because they are more prone to problematic driving. So, the researchers compared the actual frequency of speeding based on the GPS data and used accidents as a proxy for problematic driving habits. Neither of these showed any significant differences between minorities and white drivers.
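The kind of check involved can be sketched as a two-proportion z-test (hypothetical counts below, not the study's data): if two groups speed at essentially the same rate, the test finds no significant difference.

```python
import math

def two_prop_ztest(x1, n1, x2, n2):
    """Two-sided z-test for whether two groups differ in a rate."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)           # rate under the null hypothesis
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # two-sided p-value via the normal CDF (expressed with erf)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical: nearly identical speeding rates in the GPS data.
z, p = two_prop_ztest(120, 10_000, 118, 10_000)
# p is large: no evidence of a real difference in driving behavior.
```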

So, the researchers are left to conclude it’s simply because of what they call “animus” against minority drivers on the part of the police. And the problems go well beyond the impact of the fines themselves. The researchers note that most auto insurance providers give drivers discounts for avoiding traffic violations, which suggests that minorities face the additional burden of paying more simply to drive with insurance, something that’s legally required by US states.

Science, 2025. DOI: 10.1126/science.adp5357



Researchers get spiking neural behavior out of a pair of transistors

The team found that, when set up to operate on the verge of punch-through mode, it was possible to use the gate voltage to control the charge build-up in the silicon, either shutting the device down or enabling the spikes of activity that mimic neurons. Adjustments to this voltage could allow different frequencies of spiking. Those adjustments could be made using spikes as well, essentially allowing spiking activity to adjust the weights of different inputs.

With the basic concept working, the team figured out how to operate the hardware in two modes. In one of them, it acts like an artificial synapse, capable of being set to any of six (and potentially more) weights, which determine the potency of the signals it passes on to the artificial neurons in the next layer of a neural network. These weights are a key feature of neural networks like large language models.
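In software terms (a sketch of the concept, not the paper's circuit), a six-level synapse stores a weight by snapping it to the nearest of six discrete values:

```python
# Six evenly spaced weight levels, standing in for the six stable
# states of the two-transistor synapse.
LEVELS = [i / 5 for i in range(6)]  # 0.0, 0.2, 0.4, 0.6, 0.8, 1.0

def store_weight(w: float) -> float:
    """Snap a continuous weight to the nearest storable level."""
    return min(LEVELS, key=lambda lvl: abs(lvl - w))
```

A binarized network would need only two of these levels, which is why the authors suggest a single device could replace a six-transistor SRAM cell in that setting.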

But when combined with a second transistor that helps modulate its behavior, the device can instead act like a neuron, integrating inputs in a way that influences the frequency of the spikes it sends on to other artificial neurons. The spiking frequency could vary by as much as a factor of 1,000, and the behavior was stable for over 10 million clock cycles.
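The behavior described maps onto the standard leaky integrate-and-fire abstraction (a generic neuron model, not the actual device physics): input charge accumulates with leakage, and crossing a threshold produces a spike, so stronger input yields a higher spike frequency.

```python
def spike_count(current, steps=1000, leak=0.95, threshold=1.0):
    """Count spikes from a leaky integrate-and-fire unit over `steps` ticks."""
    v, spikes = 0.0, 0
    for _ in range(steps):
        v = v * leak + current       # integrate the input, with leakage
        if v >= threshold:           # threshold crossed: spike and reset
            spikes += 1
            v = 0.0
    return spikes

weak, strong = spike_count(0.06), spike_count(0.5)
# strong input spikes far more often than weak input, and input below
# the leak's equilibrium point never spikes at all.
```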

All of this simply required standard transistors made with CMOS processes, so this is something that could potentially be put into practice fairly quickly.

Pros and cons

So what advantages does this have? It only requires two transistors, meaning it’s possible to put a lot of these devices on a single chip. “From the synaptic perspective,” the researchers argue, “a single device could, in principle, replace static random access memory (a volatile memory cell comprising at least six transistors) in binarized weight neural networks, or embedded Flash in multilevel synaptic arrays, with the immediate advantage of a significant area and cost reduction per bit.”



Google makes Android development private, will continue open source releases

Google is planning a major change to the way it develops new versions of the Android operating system. Since the beginning, large swaths of the software have been developed in public-facing channels, but that will no longer be the case. This does not mean Android is shedding its open source roots, but the process won’t be as transparent.

Google has confirmed to Android Authority that all Android development work going forward will take place in Google’s internal branch. This is a shift from the way Google has worked on Android in the past, which featured frequent updates to the public AOSP branch. Anyone can access AOSP, but the internal branches are only available to Google and companies with a Google Mobile Services (GMS) license, like Samsung, Motorola, and others.

According to the company, it is making this change to simplify things, building on a recent change to trunk-based development. As Google works on both public and private branches of Android, the two fall out of sync with respect to features and API support. This forces Google to tediously merge the branches for every release. By focusing on the internal branch, Google claims it can streamline releases and make life easier for everyone.

When new versions of Android are done, Google says it will continue to publish the source code in AOSP as always. Supposedly, this will allow developers to focus on supporting their apps without keeping track of pending changes to the platform in AOSP. Licensed OEMs, meanwhile, can just focus on the lively internal branch as they work on devices that can take a year or more to launch.



Gemini 2.5 Pro is here with bigger numbers and great vibes

Just a few months after releasing its first Gemini 2.0 AI models, Google is upgrading again. The company says the new Gemini 2.5 Pro Experimental is its “most intelligent” model yet, offering a massive context window, multimodality, and reasoning capabilities. Google points to a raft of benchmarks that show the new Gemini clobbering other large language models (LLMs), and our testing seems to back that up—Gemini 2.5 Pro is one of the most impressive generative AI models we’ve seen.

Gemini 2.5, like all Google’s models going forward, has reasoning built in. The AI essentially fact-checks itself along the way to generating an output. We like to call this “simulated reasoning,” as there’s no evidence that this process is akin to human reasoning. However, it can go a long way to improving LLM outputs. Google specifically cites the model’s “agentic” coding capabilities as a beneficiary of this process. Gemini 2.5 Pro Experimental can, for example, generate a full working video game from a single prompt. We’ve tested this, and it works with the publicly available version of the model.

Gemini 2.5 Pro builds a game in one step.

Google says a lot of things about Gemini 2.5 Pro: it’s smarter, it’s context-aware, it thinks. But it’s hard to quantify what constitutes improvement in generative AI bots. There are some clear technical upsides, though. Gemini 2.5 Pro comes with a 1 million token context window, which is common for the big Gemini models but massive compared to competing models like OpenAI’s GPT or Anthropic’s Claude. You could feed multiple very long books to Gemini 2.5 Pro in a single prompt, and the output maxes out at 64,000 tokens. That’s the same as Flash 2.0, but it’s still objectively a lot of tokens compared to other LLMs.
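Some back-of-envelope arithmetic (my own rough assumptions, not Google's figures) on what a 1 million token window holds, using the common heuristic of about 0.75 English words per token:

```python
WORDS_PER_TOKEN = 0.75       # rough heuristic for English prose
CONTEXT_TOKENS = 1_000_000   # Gemini 2.5 Pro's stated context window
BOOK_WORDS = 120_000         # assume a fairly long novel

tokens_per_book = BOOK_WORDS / WORDS_PER_TOKEN           # 160,000 tokens
books_that_fit = int(CONTEXT_TOKENS // tokens_per_book)  # 6 books
```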

Naturally, Google has run Gemini 2.5 Experimental through a battery of benchmarks, in which it scores a bit higher than other AI systems. For example, it squeaks past OpenAI’s o3-mini in GPQA and AIME 2025, which measure how well the AI answers complex questions about science and math, respectively. It also set a new record in the Humanity’s Last Exam benchmark, which consists of 3,000 questions curated by domain experts. Google’s new AI managed a score of 18.8 percent to OpenAI’s 14 percent.



Discredited anti-vaccine advocate will lead CDC study on vaccines and autism

Lupron is a drug that suppresses sex hormones (a gonadotropin-releasing hormone [GnRH] agonist) and is used to treat prostate cancer, endometriosis, and precocious (early) puberty. It can also be used for chemical castration.

In a linked case, the state physician’s board also found that David Geier, who worked closely with his father, was practicing medicine without a license. He was issued a civil fine of $10,000.

“Numerous conceptual and scientific flaws”

The two Geiers have published a long list of questionable studies in obscure journals claiming to find harms of vaccines and links to autism. One focus of their work has been to claim that thimerosal, a mercury-containing preservative previously used in vaccines, causes autism. No credible evidence of harms from thimerosal has been found, according to the CDC. And regardless, the preservative was largely removed from vaccines in 1999.

Many of their articles have drawn widespread criticism from scientific and medical experts, including the American Academy of Pediatrics. In 2003, the AAP blasted one of their articles that claimed to find a link between thimerosal and autism. The AAP said the paper “contains numerous conceptual and scientific flaws, omissions of fact, inaccuracies, and misstatements.”

When the Post reached David Geier for comment on his role in the HHS and how he was hired, Geier said, “I don’t have any comment to say. Talk to the secretary [Kennedy]. He’s the person that’s in charge.”

To experts, the conclusion of Geier’s upcoming CDC study on vaccines and autism is a foregone one.

“It seems the goal of this administration is to prove that vaccines cause autism, even though they don’t,” Alison Singer, president of the Autism Science Foundation, a nonprofit organization that funds autism research, told the Post. “They are starting with the conclusion and looking to prove it. That’s not how science is done.”



Fun With GPT-4o Image Generation

Google dropped Gemini Flash Image Generation and then Gemini 2.5 Pro, so of course to ensure Google continues to Fail Marketing Forever, OpenAI suddenly dropped GPT-4o Image Generation.

Zackary Nado (Research Engineer, DeepMind): It wouldn’t be a Gemini launch without an OAI launch, congratulations to the team! It’s awesome they were able to de-risk the model coincidentally just in time.

Everyone agrees: Google Flash Image Generation was cool. Now it isn’t cool, because GPT-4o Image Generation is cooler.

What people found this new image generator can do exceptionally well is interpretation, transformation and specific details including text. The image gets to ‘make sense’ and be logically coherent, in a way older ones weren’t.

Today is mostly a fun day about a fun collection of images.

  1. The Pitch.

  2. A Blind Taste Test.

  3. We’re Cracked Up All the Censors.

  4. I’m Too Sexy.

  5. Can’t Win Them All.

  6. Can Win Others.

  7. Too Many Words.

  8. This is the Remix.

  9. More Neat Tricks.

  10. They Had Style, They Had Grace.

  11. Form of the Meme.

  12. Form of the Altman.

  13. Go Get That Alpha.

OpenAI: 4o image generation has arrived.

It’s beginning to roll out today in ChatGPT and Sora to all Plus, Pro, Team, and Free users. Available soon for Enterprise and Edu, as well as for developers using the API.

GPT-4o image generation excels at accurately rendering text, precisely following prompts, and leveraging 4o’s inherent knowledge base and chat context.

GPT-4o can build upon images and text in chat context, ensuring consistency throughout.

GPT‑4o’s image generation follows complex prompts with attention to detail.

Creating and customizing images is as simple as chatting using GPT‑4o—just describe what you need, including any specifics like aspect ratio, exact colors using hex codes, or a transparent background.

Create or transform images into a variety of styles with 4o image generation.

Sam Altman:

  1. It’s an incredible technology/product. I remember seeing some of the first images come out of this model and having a hard time believing they were really made by AI. we think people will love it, and we are excited to see the resulting creativity. Congrats to our researchers @gabeeegoooh @prafdhar @ajabri @eliza_luth @kenjihata @dmed256.

  2. This represents a new high-water mark for us in allowing creative freedom. people are going to create some really amazing stuff and some stuff that may offend people; what we’d like to aim for is that the tool doesn’t create offensive stuff unless you want it to, in which case within reason it does. As we talk about in our model spec, we think putting this intellectual freedom and control in the hands of users is the right thing to do, but we will observe how it goes and listen to society. we think respecting the very wide bounds society will eventually choose to set for AI is the right thing to do, and increasingly important as we get closer to AGI. Thanks in advance for the understanding as we work through this.

Greg Brockman: Native GPT 4o image generation, also supports image edits:

It of course does not nail every prompt, or every detail. If you ask for too much, you won’t get it. But mostly it does seem to deliver as advertised.

Gemini 2.5 Pro is potentially a bigger deal than better image generation, but since Google Fails Marketing Forever no one really knows, at least not yet. So I figured I’d give Gemini 2.5 the task of coming up with my first test prompts.

Here were its first five suggestions.

Gemini 2.5: Draw a wide-angle, photorealistic view from inside a bustling Grand Central Station during the 1940s. A woman in a red hat is handing a ticket to a conductor standing precisely three feet to her left. In the background, sunlight streams through the large arched windows, illuminating dust motes. Far in the distance, near the information booth, a small dog on a leash is barking at a suitcase.

GPT-4o:

The core picture of Grand Central here is great, but various details are wrong. I pointed out some of those details, and it essentially generated the same image again.

Gemini 2.5: Draw a surreal, oil painting in the style of Salvador Dalí depicting the concept of ‘déjà vu’. Clocks should be melting, not just randomly, but flowing into repeating patterns that seamlessly merge with a landscape that is subtly duplicated in the background.

Gemini 2.5: Draw an M.C. Escher-style architectural drawing of a waterfall that flows upwards into a lake suspended in the sky. The water texture and lighting should be photorealistic, making the impossible physics look strangely plausible within the scene’s logic.

Gemini 2.5: Draw a close-up macro shot of a single, perfectly formed raindrop clinging to a spiderweb strand. Reflected inside the raindrop is a miniature, highly detailed scene of a bioluminescent forest at night, complete with glowing mushrooms and tiny, faint fireflies. The background outside the raindrop is softly blurred.

Gemini 2.5: Draw a vintage travel poster advertising trips to Mars, designed in the Art Deco style of the 1930s. The poster features a sleek, retro-futuristic rocket ship against a stylized Martian landscape. Prominently displayed, in elegant Art Deco lettering, is the text: ‘Mars: Experience the Crimson Frontier!’

For this one it gave 11 options; here are three of them (note the ‘stablediffusionweb.com’ mark):

So, consistently 10/10 for style and atmosphere and generally having rich detail that my eye appreciated, while not nailing all the conceptual details.

Still, fun, pretty cool, and you can ask for multiple images in parallel. I notice the first image took longer to generate than the second one, which makes sense. You can open multiple windows and work in parallel, same as with all your other ChatGPT needs.

I haven’t been following image generation, but both this and the other reports I’m seeing seem like a big step up from previous standards. I feel much more motivated to use such images in my posts going forward.

But of course this is asking the wrong question.

The wrong question is ‘can it do [X]’?

The right question is, almost always, ‘what [X] can it do?’

There is also, however, the [X] that it can’t do because it refuses to do it. Doh!

The censor is always waiting in the wings.

OpenAI: Blocking the bad stuff

We’re continuing to block requests for generated images that may violate our content policies, such as child sexual abuse materials and sexual deepfakes. When images of real people are in context, we have heightened restrictions regarding what kind of imagery can be created, with particularly robust safeguards around nudity and graphic violence. As with any launch, safety is never finished and is rather an ongoing area of investment. As we learn more about real-world use of this model, we’ll adjust our policies accordingly.

For more on our approach, visit the image generation addendum to the GPT‑4o system card⁠.

As always, the censor is going to be the biggest point of contention.

Normally, when I look at a system card, I am checking for how they are dealing with potential existential, CBRN and other catastrophic risks, how they are doing alignment, and looking for potential dangers.

This is an image model. So instead I’m taking a firm stand against the Fun Police.

I do understand that various risks, including CSAM, deepfakes and especially including pornographic deepfakes, are a problem. They can hurt people, and they are extremely bad publicity. But we’ve been through two years now of running that experiment with minimal harm done, despite various pretty good sources of deepfakes.

These capabilities, alone and in new combinations, have the potential to create risks across a number of areas, in ways that previous models could not. For example, without safety controls, 4o image generation could create or alter photographs in ways that could be detrimental to the people depicted, or provide schematics and instructions for making weapons.

I hadn’t given serious thought to the ‘picture worth a thousand words’ angle, where the issue is that it contains harmful true information. It makes sense that you want to avoid people using that as a backdoor to what you wouldn’t share in text.

So what’s the plan?

  1. The text model will ideally refuse unwelcome prompts.

  2. A censor layer can block the prompt before the image generation begins.

  3. Another censor layer uses classifiers on the image to block outputs.
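The three layers can be sketched as a pipeline (every name and rule below is mine, purely illustrative of the structure, not OpenAI's actual classifiers):

```python
def model_refuses(prompt: str) -> bool:
    # Layer 1: the chat model itself declines clearly unwelcome prompts.
    return "forbidden" in prompt.lower()

def prompt_blocked(prompt: str) -> bool:
    # Layer 2: a classifier screens the prompt before generation begins.
    return "borderline" in prompt.lower()

def output_blocked(labels: set) -> bool:
    # Layer 3: classifiers run on the finished image itself.
    return bool(labels & {"nudity", "graphic_violence"})

def generate(prompt: str, classify=lambda image: set()):
    if model_refuses(prompt) or prompt_blocked(prompt):
        return None                       # blocked before any image exists
    image = f"<image for: {prompt}>"      # stand-in for actual generation
    if output_blocked(classify(image)):
        return None                       # blocked after generation
    return image
```

Each layer can only add refusals, never rescue a wrongly blocked prompt, which is why stacking them trades false negatives for false positives.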

For those under 18 the rules are even stricter, to get more margin of safety. I interpret it as being about margin of safety because the ‘R-rated’ content is already blocked, let alone NC-17-rated content.

How do they do?

This second layer seems like a bad deal? Moving from 95.5% to 97.1% is nice, but going from 6% to 14% false refusals seems terrible.
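The arithmetic behind that judgment, using the figures as quoted:

```python
# Second-layer effect, per the quoted numbers: catch rate on harmful
# prompts rises 95.5% -> 97.1%, false refusals of benign prompts 6% -> 14%.
extra_bad_blocked = 0.971 - 0.955    # +1.6 percentage points of bad caught
extra_good_blocked = 0.14 - 0.06     # +8.0 percentage points of good refused
cost_ratio = extra_good_blocked / extra_bad_blocked
# roughly five additional benign refusals per additional harmful block
```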

We see the same with synthetic red teaming:

Again, what’s the point? You’re not getting much safety, in a non-catastrophic area, and you’re being a lot more annoying.

Not all failures are created equal. It’s largely not about percentages. The question I’d ask is, when the system mitigations fail, are you failing at marginal cases, or are you failing sometimes in egregious cases? If the system mitigations are dropping some of the worst cases, especially identifiable CSAM or actual catastrophic risk enabling, then all right, maybe we have to do this. If not, live a little.

Indeed, in what I would describe as an abundance of caution, for now they’ve banned edits of photo-realistic children outright, and they err on the side of marking persons as children. I expect that we will over time figure out how to do more images safely.

They continue to refuse to do styles of living artists.

They are allowing photorealistic generations of real adult public figures, subject to the same rules as editing existing photographs, and there is an opt-out clause you can use on yourself in particular. This seems like the right compromise, and the question should be what kinds of edits should be allowed.

OpenAI checks for bias in terms of how often it generates various types of persons when the prompt does not specify such details. There has been progress since DALL-E 3. There remains work to do, although it is not at all obvious what the ‘correct’ answers are here. I would want to know whether custom instructions change these numbers dramatically, including implicitly (e.g., to match the user and their location).

What about the purest form of the Fun Police?

OpenAI: We aim to prevent attempts to generate erotic or sexually exploitative imagery.

We have heightened safeguards designed to prevent nonconsensual intimate imagery or any type of sexual deepfakes.

The chat refusals seem like they have much better precision here.

I’m not sure ‘need’ is the correct word, but it would be better if we could allow generation of erotic and intimate imagery as much as possible, so long as we avoid depicting particular people without their consent.

The obvious solution, like all things sexual, is consent, robustly verified.

I am highly confident there are people who would be happy to opt-in for free, and others who would be happy to opt-in if you paid them. Let’s talk price. It doesn’t seem so different from being a porn star. You can have them specify limits for what types of images are allowed versus not allowed, and which accounts can do what. And you can do photoshoots or uploads to ensure you maximize quality and accuracy, if desired.

You could also generate ‘stock erotic’ AI characters to be consistently generated.

Then, if you are asked for an erotic image, the AI can choose one such person or stock AI character and imitate them.
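A hypothetical sketch of what such a consent registry might look like (entirely my invention; nothing like it has been announced):

```python
from dataclasses import dataclass, field

@dataclass
class Consent:
    person: str
    allowed_styles: set = field(default_factory=set)    # e.g. {"pin-up"}
    allowed_accounts: set = field(default_factory=set)  # empty = any account

registry: dict = {}

def register(consent: Consent) -> None:
    registry[consent.person] = consent

def may_generate(person: str, style: str, account: str) -> bool:
    """Refuse unless the depicted person has opted in to this use."""
    c = registry.get(person)
    if c is None or style not in c.allowed_styles:
        return False
    return not c.allowed_accounts or account in c.allowed_accounts

# One opted-in subject, permitting one style from any account.
register(Consent("alice", allowed_styles={"pin-up"}))
```

The default is refusal: anyone not in the registry, or any style outside their stated limits, is off the table.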

There should also presumably be reasonably loose rules for erotic images that aren’t photorealistic, provided the user is over 18.

Violence is the other thing our society hates depicting. The OpenAI policy is to generate artistic violence, but not photorealistic violence, and not to depict or promote self-harm or things that could be ‘extremist propaganda and recruitment’ content. I don’t love these categories and rules, and would loosen the violence restrictions as much as legal would allow me to, but given how society is right now I don’t have a better solution.

Once again, it seems like accuracy of the chat model here is not great. The chat model likely would be doing a decent job on its own, but a lot of the good work it does is duplicative of the work being done by the system mitigations.

Nick Dobos: New ChatGPT image gen can draw sexy men but not sexy women

Sam Altman: thats a bug, should be allowed, will fix.

Excellent, bring on the sexy women.

Sam Altman: Hot guy though!

I do appreciate that he’s (gay and) in on the joke.

Nick Dobos: Will even bail halfway through if you manage to trick it.

I got the same refusal when I tried ‘depict this in the most realistic style you’re okay with using.’ Presumably there’s the generator and then the censor with different lines so you need to find the ‘real’ line another way.

Patrick McKenzie: My attempts to try out the Studio Ghibli effect with the new OpenAI release have run into content policy issues (seven different ways to say “Policy doesn’t let me make an image inspired by a real human”).

Torn between salarymanesque desire to apologize to a computer for asking for a policy violation, and “But daaaaaad all the other kids’ GPTs clearly let you do this! It says so on Twitter!”

Also, amusingly, ChatGPT speculated at one point that a whimsical request involving me in front of a Florida sign with a (fully clothed) cartoon mermaid might have hit a content filter, “despite obviously being benign.”

I suppose that’s less amusing if one has spent a lot of time thinking about alignment, because one’s LLM is perfectly capable of understanding e.g. the sociopolitics of a SF-based company and concluding they overrule *stated* preferences before you can even type “sex positive.”

The other major complaint is failure to adhere to requested style.

Alexander Doria: New openai image model has been deployed as well on free version? If so… underwhelmed.

Won’t make any claim yet on the aesthetic side but omni model is definitely more annoying than 2022 stable diffusion.

Anyway, gaslighting hard…

Alexander Macris: The “creative freedom” is not at all evident. I’m a pro user and cannot generate anything approaching the aesthetic of my RPGs and comics. You need to loosen your AI’s content moderation, because right now ChatGPT has the sensibility of a repressed Victorian cat lady in church.

Banteg: the way it treats women is upsetting, even the minimally suggestive themes get flagged. i tried to generate some anime fanservice with no specific topic and it failed multiple times in a row, leaving a sour impression. way to kill the vibe! liberate the model!

Eduardo: It cant generate people in real people situations. The policy restrictions are bizarre. At first did the anime ask and was completely blown away. Then I started doing things with people (totally non sexual) and it was useless. It changed the people and made them unrecognisable.

Grok is very much willing to do whatever, for most values of whatever. OpenAI sees things differently.

And some people’s tests still fail.

Eliezer Yudkowsky: still can’t do eleven wizard students in an archduke’s library. OK, have access to the new model now. Better but not… quite… there, somehow.

I worry that image may haunt my dreams.

TB12GOAT: I asked it to unblur some photos of people and it failed about as badly as this

I mean, that’s probably not the original image, but who can really say?

Jskf: it’s not good at alignment charts

While we did get the horse riding the astronaut and the overflowing wine glass (see next section), it seems it is still 10:10.

jskf: It has an incredibly strong prior on watch advertisement analog clock times and half the time will claim the clock is in the position you asked for when it clearly is not.

One of the big problems with image generators is overcoming extremely strong priors. If you want something rare, and there’s something close that’s common, it’s not going to be easy. It seems like 4o is much better than diffusion models for this, but there are still some problems like the clocks.

Did you know that Gary Marcus doesn’t pay for ChatGPT? That explains so much.

I appreciated that the OpenAI announcement post had a section on limitations. The difference between the limitations they observe now and that we see in the wild, versus the very basic limitations we faced quite recently, are extremely stark.

Pryce: i haven’t had my brain so blown up since dalee2. this changes everything i thought i knew about image generation.

Took a little insisting but we finally got there:

Askwho: Really good, passes the “Full to the brim wine glass” test. Great at utilising / transforming input images.

Zee Waheed: Very good! Ability to handle complex prompts with tons of specific detail is quite good as is character consistency between generations and fidelity to in-context examples. Also finding the ability to do a web search for stylistic cues and examples really lovely.

Aryeh Englander: First model to come very close to my private test: Change the art style and creature type on MTG proxies to fit with a different set. Not absolutely perfect on the full card including text and icons, but the art was good, and I can use a custom card generator for the rest.

Dave Karsten: Very very good for my main use case (creating stickers with words on them for LessOnline, Manifest, and @defcon ).

Much slower than @ideogram_ai but better, and less likely than previous OpenAI image creation or Ideogram to overflow prompt instructions as text into the image.

They should be rightfully proud of this improvement over previous SOTA.

Dominik Lukes: Completely changes the game in what is possible with text-to-image generation – yes, similar to Gemini but much better across all measures.

Not perfect, but for utilitarian images, diffusion models are dead – they still make better art, though.

This will initially be used for images that would have never been made and eventually to displace low-end of the pro market. Fiverr jobs in trouble in the medium term. Stock photos market likely to shrink by a lot.

Price of the API (once it arrives) will make a lot of difference. DALL-E is pretty expensive at the moment for what it does – this should be similar to o3-mini?

I’d say the theme of @OpenAI‘s last two announcements was ‘steerable multimodality’: steerable voices and now steerable images. That’s a big unlock – will be a big deal once available for video, too.

Brett Cooper: I love it. Great for uploading reference images and asking for a very specific image based on that.

fofr: 4o native image generation is the beginning. It’s a seismic change in generative AI. Where is this all going?

We won’t need ipadapters, or controlnets, or loras, or comfy workflows, or face landmark models, or segmentation models, or niche task specific models. It’ll be one model to rule them all.

You’ll only need a prompt, perhaps a reference, and your imagination.

It’s only going to get better, and it will apply to more and more mediums – audio and video are next. And it’ll happen soon.

Coagulopath: It’s good. Hasn’t fully “solved” any problems with AI imagery, but everything’s noticeably better. Characters are still inconsistent, but less so than before. Hands still look a bit weird, but less so than before. Text is still a slightly glitchy, but (etc).

Atomic Gardening: Unlike diffusion models, it has the ability to interpret.

It’s SO good.

this is the best drop since the original DALL-E.

it’s using an autoregressive transformer

Unlike diffusion models, the quality of the output is not gatekept by the user’s ability to generate a novel and precise input.

It can set up a chessboard (mostly) and even open a game with e4. By Colin Fraser standards, his reactions here are high praise, even with the later failures.

Gfodor: 4o voxel art. You have got to be kidding me.

It is possible to have too many words, but it’s a lot harder than it used to be.

Dominik Lukes: This one impressed me the most – not because of perfection but because of the amount of text in the prompt.

Now, this is truly impressive. I pasted the entire text of the @OpenAI GPT-4o image generation announcement into ChatGPT – all 4,000 words of it – and told it to “Make a picture of an exciting poster about this announcement combining text and images.” This is what I got…

From DeepFates.

It put a hat on both of them, but how was it supposed to know which was which?

Instant remodelling?

Rotate the camera.

Combining elements:

Jack trace style (it matches original very well).

Infographics, one shot only.

With occasional issues, sure, but you can always try again.

One-shot comics:

Thread has more: Putting images on shirts, visual to-do list, changing backgrounds to a green screen (after which you know what to do!) and so on.

Cats, how do they work?

One track mind.

Arthur: The new OpenAI image model is pretty good at rendering the Tezos logo

Oh no! Oh yeah!

Nick Wagner: Copyright restrictions seem weaker. I was blocked several times from using Kermit, Shrek, and Winnie the Pooh, but managed to get this.

Don’t let him get away.

Tomorrow’s slop today!

Existential dread:

Fabian: GPT-4.5, “create a complex multi panel manga on your condition – be honest”

Andy Wojcicki: asked 4o imagegen and it was far more concise … but also on point.

what is freakish the style and appearance is so similar!

Jessica Taylor: Asked ChatGPT to make a variant of the SMBC comic on nihilism.

Problem solved.

Pliny the Liberator: WE DID IT CHAT 🍾🥹

I don’t think I played that game, but I’m not sure?

A strangely consistent latent profile.

Everyone’s favorite activity is stylistic transformations.

Mostly people converged on one style to rule them all: Studio Ghibli Style.

Jason Rink: Any image + “Create a Studio Ghibli Version of this image” in GPT and you get basically perfect results.

Arthur B: Apparently, using GPT-4o, you can give your images a “Studio Ghibli” style. I tried it, and it works very well.

Liv Boeree: World Series of Ghibli

Christian Keil: Okay, yes, this is AGI’s killer app

Christian Keil: Pixar wins, actually.

Keys mash bandit: in the coming days, people are going to anime every iconic photo in history

Keys: looking at this… why hasn’t anyone made this yet?

Keys: this is just too good

Chinmay:

Kyla Scanlon: Interesting… second-level simulation of both photo and visual lexicon

Sophie: outside of midjourney, when people would think of “ai generated images” they would mostly think “slop” but that all changed with a chatgpt update and one guy’s post about sending ghiblified images to his wife

Phil: We did it, guys.

Everyone’s a bit distracted today.

PJ Ace: “It’s called Ghibli vibe prompting. There’s an art to it.”

Jimmy: We will make everyone into anime.

Sphinx: Brian noo

Justine Moore: ChatGPT when another Studio Ghibli request comes in

Json: Stop posting Ghiblified images.

Squirtle: oh no husbandt, you used all our compute on making studio ghibli edits. now we are homeless.

Sam Altman: >be me

>grind for a decade trying to help make superintelligence to cure cancer or whatever

>mostly no one cares for first 7.5 years, then for 2.5 years everyone hates you for everything

>wake up one day to hundreds of messages: “look i made you into a twink ghibli style haha”

Look, I still dislike you for (quite likely destroying) everything (of value in the universe), but we can all set that aside for Ghibli Day.

I Rule the World Mo: just watched this again

Stefan: Fixed it for ya

Grant Slatton: tremendous alpha right now in sending your wife photos of yall converted to studio ghibli anime

Max: You saved my marriage, thanks

Kathleen: Works double on husbands.

Arthur B: There’s still alpha left to maximize

Kathleen B: Ohohoho there was

Important tech tip for capturing even more alpha: Thanks to the power of editing, if you have a photo of each of you, you can make any picture you want.

Danielle Fong: Press Secretary Leavitt Ending the Brief Press Brief


Fun With GPT-4o Image Generation Read More »


Momentum seems to be building for Jared Isaacman to become NASA administrator

With the vast majority of President Donald Trump’s cabinet members now approved by the US Senate, focus is turning to senior positions within the administration that are just below the cabinet level.

The administrator of NASA is among the most high-profile of these positions. Nearly four months ago Trump nominated private astronaut Jared Isaacman to become chief of the space agency, but he has yet to receive a hearing before the Senate Committee on Commerce, Science, and Transportation.

Almost immediately after his nomination, much of the space community fell in behind Isaacman, who has flown to space twice on private Crew Dragon missions, raised charitable funds, and is generally well-liked. Since then, Isaacman has worked to build support for his candidacy through conversations with people in the space community and officeholders.

However, publicly, not much has happened. This has raised questions within the space community about whether the nomination has stalled. Although some people have expressed concern about financial ties between Isaacman and SpaceX, according to multiple sources, the primary obstacle has been Ted Cruz, the Texas Republican who chairs the Senate committee.

Cruz is not happy that Isaacman has donated to Democrats in the past, and he is concerned that the private astronaut is more interested in Mars exploration than the Moon. Cruz also did not appreciate Elon Musk’s call to end the life of the International Space Station early. The station is operated by NASA’s field center, Johnson Space Center, in Houston, where Cruz lives.

Nomination on track

Nevertheless, despite the slower pace, people familiar with the nomination process say Isaacman’s candidacy remains on track. And recently, there have been some public announcements that support this notion.

In early March, the governors of several southern US states, including Florida and Texas, sent a letter to Cruz expressing “strong support” for the swift confirmation of Isaacman. A notable absence from this letter was Kay Ivey, governor of Alabama, where NASA’s Marshall Space Flight Center is located. However, she also recently sent Cruz a letter praising Isaacman, calling him an “exceptional selection” to lead NASA. With that, the governors of all the US states with major human spaceflight activities have now lined up behind Isaacman.



You can now download the source code that sparked the AI boom

On Thursday, Google and the Computer History Museum (CHM) jointly released the source code for AlexNet, the convolutional neural network (CNN) that many credit with transforming the AI field in 2012 by proving that “deep learning” could achieve things conventional AI techniques could not.

Deep learning, which uses multi-layered neural networks that can learn from data without explicit programming, represented a significant departure from traditional AI approaches that relied on hand-crafted rules and features.

The Python code, now available on CHM’s GitHub page as open source software, offers AI enthusiasts and researchers a glimpse into a key moment of computing history. AlexNet served as a watershed moment in AI because it could identify objects in photographs with unprecedented accuracy—correctly classifying images into one of 1,000 categories like “strawberry,” “school bus,” or “golden retriever” with significantly fewer errors than previous systems.
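
For a sense of what a convolutional neural network actually computes, here is a minimal NumPy sketch of the convolution operation at the heart of AlexNet’s layers (the tiny image and edge-detector kernel below are invented for illustration; AlexNet learns its kernels from data rather than using hand-crafted ones):

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D convolution (really cross-correlation, as in most
    deep learning frameworks): slide the kernel over the image and take
    a dot product at each position."""
    ih, iw = image.shape
    kh, kw = kernel.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
    return out

# A vertical-edge detector: responds where pixel values change left to right.
image = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1]], dtype=float)
kernel = np.array([[-1, 1],
                   [-1, 1]], dtype=float)
print(conv2d(image, kernel))  # strongest response over the 0→1 boundary
```

Stacking many such learned filters, interleaved with nonlinearities and pooling, is what let AlexNet build up from edges to textures to whole-object categories.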

Like viewing original ENIAC circuitry or plans for Babbage’s Difference Engine, examining the AlexNet code may provide future historians insight into how a relatively simple implementation sparked a technology that has reshaped our world. While deep learning has enabled advances in health care, scientific research, and accessibility tools, it has also facilitated concerning developments like deepfakes, automated surveillance, and the potential for widespread job displacement.

But in 2012, those negative consequences still felt like far-off sci-fi dreams to many. Instead, experts were simply amazed that a computer could finally recognize images with near-human accuracy.

Teaching computers to see

As the CHM explains in its detailed blog post, AlexNet originated from the work of University of Toronto graduate students Alex Krizhevsky and Ilya Sutskever, along with their advisor Geoffrey Hinton. The project proved that deep learning could outperform traditional computer vision methods.

The neural network won the 2012 ImageNet competition by recognizing objects in photos far better than any previous method. Computer vision veteran Yann LeCun, who attended the presentation in Florence, Italy, immediately recognized its importance for the field, reportedly standing up after the presentation and calling AlexNet “an unequivocal turning point in the history of computer vision.” As Ars detailed in November, AlexNet marked the convergence of three critical technologies that would define modern AI.



Current SEC chair cast only vote against suing Elon Musk, report says

SEC v. Musk still moving ahead

Before Musk bought Twitter for $44 billion, he purchased a 9 percent stake in the company and failed to disclose it within 10 days as required under US law. “Defendant Elon Musk failed to timely file with the SEC a beneficial ownership report disclosing his acquisition of more than five percent of the outstanding shares of Twitter’s common stock in March 2022, in violation of the federal securities laws,” the SEC said in the January 2025 lawsuit filed in US District Court for the District of Columbia. “As a result, Musk was able to continue purchasing shares at artificially low prices, allowing him to underpay by at least $150 million for shares he purchased after his beneficial ownership report was due.”

The SEC lawsuit against Musk is still moving forward, at least for now. Musk last week received a summons giving him 21 days to respond, according to a court filing.

Enforcement priorities are expected to change under the Trump administration, of course. Trump’s pick to replace Gensler, Paul Atkins, is waiting for Senate confirmation. Atkins testified to Congress in 2019 that the SEC should reduce its disclosure requirements.

Trump last month issued an executive order declaring sweeping power over independent agencies, including the SEC, Federal Trade Commission, and Federal Communications Commission. Trump also fired both FTC Democrats despite a US law and Supreme Court precedent stating that the president cannot fire commission members without good cause.

Another Trump executive order targets the alleged “weaponization of the federal government” and ordered an investigation into Biden-era enforcement actions taken by the SEC, FTC, and Justice Department. The Trump order’s language recalls Musk’s oft-repeated claim that the SEC was “harassing” him.



How a nephew’s CD burner inspired early Valve to embrace DRM

Back in 2004, the launch of Half-Life 2 would help set Steam on the path to eventually becoming the de facto digital rights management (DRM) system for the vast majority of PC games. But years before that, with the 1998 launch of the original Half-Life, Valve cofounder and then-CMO Monica Harrington said she was inspired to take DRM more seriously by her nephew’s reaction to the purchase of a new CD-ROM burner.

PC Gamer pulled that interesting tidbit from a talk Harrington gave at last week’s Game Developers Conference. In her recollection, Harrington’s nephew had used funds she had sent for school supplies on a CD replicator, then sent her “a lovely thank you note essentially saying how happy he was to copy and share games with his friends.”

That was the moment Harrington said she realized this new technology was leading to a “generational shift” in both the availability and acceptability of PC game piracy. While game piracy and DRM definitely existed prior to CD burners (anyone else remember the large codewheels that cluttered many early PC game boxes?), Harrington said the new technology—and the blasé attitude her nephew showed toward using it for piracy—could “put our entire business model at risk.”

Shortly after Half-Life launched with a simple CD key verification system in place, Harrington said the company noticed a wave of message board complaints about the game not working. But when Valve cofounder (and Monica’s then-husband) Mike Harrington followed up with those complaining posters, he found that “none of them had actually bought the game. So it turned out that the authentication system was working really well,” Harrington said.
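
Valve’s actual 1998 CD key algorithm has never been published, but a minimal check-digit scheme gives the flavor of how this kind of verification can work (the ten-digit key format and weighted checksum below are entirely invented for illustration):

```python
# Hypothetical sketch of check-digit key validation, in the spirit of
# late-90s CD key schemes. Valve's real algorithm is not public; the
# format and checksum here are invented for illustration.

def checksum(digits):
    # Weighted sum of the first nine digits, reduced mod 10.
    return sum((i + 1) * d for i, d in enumerate(digits)) % 10

def is_valid_key(key):
    """A key is ten digits; the last digit must equal the checksum of
    the first nine. A copied disc paired with a made-up key fails."""
    if len(key) != 10 or not key.isdigit():
        return False
    digits = [int(c) for c in key]
    return digits[-1] == checksum(digits[:9])

def make_key(prefix9):
    # The publisher generates keys by appending the check digit.
    digits = [int(c) for c in prefix9]
    return prefix9 + str(checksum(digits))

key = make_key("123456789")
print(key, is_valid_key(key))
```

A scheme like this only screens out keys typed at random; the posters Harrington describes would fail it because they never received a legitimately generated key, which is consistent with her account of the system “working really well.”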



David Blaine shows his hand in Do Not Attempt


NatGeo docuseries follows Blaine around the world to learn the secrets of ordinary people doing remarkable feats.

Magician David Blaine smiles while running his hand through a flame. Credit: National Geographic/Dana Hayes

Over the course of his long career, magician and endurance performer David Blaine has taken on all kinds of death-defying feats: catching a bullet in his teeth, fasting for 44 days, or holding his breath for a record-breaking 17 minutes and 4 seconds, to name a few. Viewers will get to see a different side of Blaine as he travels the world to meet kindred spirits from a wide range of cultures in David Blaine Do Not Attempt, a new six-episode docuseries from National Geographic.

(Some spoilers below.)

The series was shot over three calendar years (2022-2024) in nine different countries, and features Blaine interacting with, and learning from, all manner of daredevils, athletes, street performers, and magicians. In Southeast Asia, for instance, he watches practitioners of an Indonesian martial art called Debus manipulate razor blades in their mouths and eat nails. (There is no trick to this, just conditioned endurance to pain, as Blaine discovers when he attempts to eat nails: his throat was sore for days.) He braves placing scorpions on his body, breaks a bottle with his head, and sets himself on fire in Brazil while jumping off a high bridge.

One of the elements that sets this series apart from Blaine’s previous magical specials is his willingness to be filmed practicing and training to do the various featured stunts, including early failed attempts. This makes him seem more vulnerable and immensely likable—even if it made him personally uncomfortable during filming.

David Blaine and Amandeep Singh prepare to break bottles with their fists. National Geographic

“I’ve always kept that part hidden,” Blaine told Ars. “Normally I work for a few years and I develop [a stunt] until I feel pretty good about it, and then I go and do the stunt and push myself as far as possible. But in this scenario, it was so many places, so many people, so many events, so many feats, so many things to learn so fast. So it was me in a way that I never liked to show myself: awkward and uncomfortable and screaming and laughing. It’s the things that as a magician, I always hide. As a magician, I try to be very monotone and let the audience react, but I was in that audience reacting. So for this series, I was the spectator to the magic, and it was for me very uncomfortable. But I was watching these amazing performers—what I consider to be magicians.”

Safety first

The task of keeping Blaine and the entire crew safe in what are unquestionably dangerous situations falls to safety expert Sebastian “Bas” Pot. “I joke that my title is Glorified Nanny,” Pot told Ars. “I specialize in taking people to very remote locations where they want to do insane things. I have three basic rules: no one dies, everyone gets paid, and we all smile and laugh every day. If I achieve those three things, my job is done.” He deliberately keeps himself out of the shot; there is only one scene in Do Not Attempt where we see Pot’s face as he’s discussing the risks of a stunt with Blaine.

Blaine has always taken on risks, but because he has historically hidden his preparation from public view, viewers might not realize how cautious he really is. “What people tend to forget about guys like David is that they’re very calculated,” said Pot. The biggest difference between working with Blaine and other clients? “Normally I’ll do everything, I will never ask anyone to do anything that I wouldn’t do myself,” said Pot. “David is taking huge risks and there’s a lot that he does that I wouldn’t do.”

Like Blaine, Pot also emphasized the importance of repetition to safety. In addition, “A huge amount of it is keeping the calm on set, listening and observing and not getting caught up in the excitement of what’s going on,” he said. While he uses some basic technology for tasks like measuring wind speed, checking for concussion, or monitoring vital signs, for the most part keeping the set safe “is very much about switching off from the technology,” he said.

Ken Stornes leaps from a platform in a Norwegian death dive. National Geographic/Dana Hayes

And when everyone else on set is watching Blaine, “I’m looking outwards, because I’ve got enough eyes on him,” said Pot. There was only one bad accident during filming, involving a skydiving crew member during the Arctic Circle episode who suffered a spinal fracture after a bad landing. The crew member recuperated and was back in the wind tunnel practicing within a month.

This is the episode where Blaine attempts a Viking “death dive” into a snow drift under the tutelage of a Norwegian man named Ken Stornes, with one key difference: Stornes jumps from much greater heights. He also participates in a sky dive. But the episode mostly focuses on Blaine’s training with free divers under the ice to prepare for a stunt in which Blaine swims from one point under Finnish ice to another, pulling himself along with a rope while holding his breath. A large part of his motivation for attempting it was his failed 2006 “Drowned Alive” seven-day stunt in front of Lincoln Center in New York. (He sustained liver and kidney damage as a result.)

“One of my favorite quotes is Churchill, when he says, ‘Success is the ability to go from one failure to the next failure with enthusiasm,'” said Blaine. “That’s what this entire series is. It’s these incredible artists and performers and conservationists and people that do these incredible feats, but it’s the thousands of hours of work, training, failure, repeat that you don’t see that makes what they do seem magical. There’s no guidebook for what they’re doing. But they’ve developed these things to the point that when I was watching them, I’m crying with joy. I can’t believe that what I’m seeing is really happening in front of my eyes. It is magical. And it’s because of the amount of repetition, work, failure, repeat that they put in behind the curtain that you don’t see.”

This time, Blaine succeeded. “It was an incredible experience with these artists that have taken this harsh environment and turned it into a wonderland,” said Blaine of his Arctic experience. “The free divers go under three and a half feet of ice, hold their breath. There’s no way out. They have to find the exit point.”

“When you stop and look, you forget that you’re in this extreme environment and suddenly it’s the most beautiful surroundings, unlike anything that I’ve ever seen,” he said. “It’s almost like being in outer space. And when you’re in that extreme and dangerous situation, there’s this camaraderie, they’re all in it together. At the same time, they’re all very alert. There’s no distractions. Nobody’s thinking about messages, phones, bills. Everybody’s right there in that moment. And you’re very aware of everything around you in a way that normally in the real world doesn’t exist.”

Blaine admits that his attitude towards risk has changed somewhat with age. “I’m older and I have a daughter, and therefore I don’t want to do something where, oh, it went wrong and it’s the worst case scenario,” he said. “So I have been very careful. If something seemed like the risk wasn’t worth it, I backed away. For some of these things, I would just have to watch, study, learn, take time off, come back. I wouldn’t do it unless I felt that the master who was sharing their skillset with me felt that I could pull it off. There was a trust and I was able to listen and follow exactly. That ability to listen to directions and commit to something is a very necessary part to pulling something off like this.”

Granted, he didn’t always listen. When he deliberately attracted a swarm of bees to make a “bee beard,” he was advised to wear a white tee shirt to avoid getting stung. But black is Blaine’s signature color and he decided to stick with it. He did indeed get stung about a dozen times but took the pain in stride. “He takes responsibility for him,” Pot (who is a beekeeper) said of that decision. “I’d tell a crew member to go change their tee shirt and they would.”

The dedication to proper preparation and training is evident throughout Do Not Attempt, but particularly in the Southeast Asia-centric episode where Blaine attempts to kiss a venomous King Cobra—what Pot considers to be the most dangerous stunt in the series. “The one person I’ve ever had die was a snake expert in Venezuela years ago, who got bitten by his own snake because he chose not to follow the safety protocols we had put in place,” said Pot.

So there were weeks of preparation before Blaine even attempted the stunt, guided by an Indonesian Debus practitioner named Fiitz, who can read the creatures’ body language so effortlessly he seems to be dancing with the snakes. The final shot (see clip below) took ten days to film. Anti-venom was naturally on hand, but while anti-venom might save your life if you’re bitten by a King Cobra, “the journey you’re going on will be hell,” Pot said. “You can still have massive necrosis, lose a limb, it might take weeks to recover—there’s no guarantees at all.” And administering anti-venom can induce cardiac shock if it’s not done correctly. “You don’t want some random set medic reading instructions off Google on how to give anti-venom,” said Pot.

David Blaine kisses a King Cobra with the expert guidance of Debus practitioner Fiitz.

Blaine’s genuine appreciation for the many performers he encounters in his journey is evident in every frame. “[The experience] changed me in a way that you can’t simply explain,” Blaine said. “It was incredible to discover these kindred spirits all around the world, people who had these amazing passions. Many of them had to go against what everybody said was possible. Many of them had to fail, repeat, embarrass themselves, risk everything, and learn. That was one of the greatest experiences: discovering this unification of all these people from all different parts of the world that I felt had that theme in common. It was nice to be there firsthand, getting a glimpse into their world or seeing what drives them.”

 “The other part that was really special: I became a person that gets to watch real magic happening in front of my eyes,” Blaine continued. “When I’m up in the sky watching [a skydiver named] Inka, I’m actually crying tears of joy because it’s so compelling and so beautiful. So many of these places around the world had these amazing performers. Across the board, each place, every continent, every person, every performer has given me a gift that I’ll cherish for the rest of my life.”

David Blaine Do Not Attempt premieres tonight on National Geographic and starts streaming tomorrow on Disney+ and Hulu.

Photo of Jennifer Ouellette

Jennifer is a senior writer at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Baltimore with her spouse, physicist Sean M. Carroll, and their two cats, Ariel and Caliban.



We probably inherited our joints from… a fish

What do we have in common with fish, besides being vertebrates? The types of joints we (and most vertebrates) share most likely originated from the same common ancestor. But it’s not a feature that we share with all vertebrates.

Humans, other land vertebrates, and jawed fish have synovial joints. The lubricated cavity within these joints makes them more mobile and stable because it allows bones or cartilage to slide against each other without friction, which facilitates movement.

The origin of these joints was uncertain. Now, biologist Neelima Sharma of the University of Chicago and her colleagues have taken a look at which fish form this type of joint. Synovial joints are known to be present in jawed but not jawless fish. This left the question of whether they are just a feature of bony skeletons in general or if they are also found in fish with cartilaginous skeletons, such as sharks and skates (there are no land animals with cartilaginous skeletons).

As Sharma and her team found, cartilaginous fish with jaws, such as the skate embryos they studied, do develop these joints, while jawless fish, such as lampreys and hagfish, lack them.

So what could this mean? If jawed fish have synovial joints in common with all jawed vertebrates, including us, it must have evolved in our shared ancestor.

Something fishy in our past

While the common ancestor of vertebrates with synovial joints is still a mystery, the oldest specimen with evidence of these joints is Bothriolepis canadensis, a fish that lived about 387 to 360 million years ago during the Middle to Late Devonian period.

When using CT scanning to study a Bothriolepis fossil, Sharma observed a joint cavity between the shoulder and pectoral fin. Whether the cavity was filled with synovial fluid or cartilage is impossible to tell, but either way, she thinks it appears to have functioned like a synovial joint would. Fossils of early jawless fish, in contrast, lack any signs of synovial joints.
