OpenAI says its models are more persuasive than 82 percent of Reddit users

OpenAI’s models have shown rapid progress in their ability to make human-level persuasive arguments in recent years. Credit: OpenAI

OpenAI has previously found that 2022’s ChatGPT-3.5 was significantly less persuasive than random humans, ranking in just the 38th percentile on this measure. But that performance jumped to the 77th percentile with September’s release of the o1-mini reasoning model and up to percentiles in the high 80s for the full-fledged o1 model. The new o3-mini model doesn’t show any great advances on this score, ranking as more persuasive than humans in about 82 percent of random comparisons.

Launch the nukes, you know you want to

ChatGPT’s persuasion performance is still short of the 95th percentile that OpenAI would consider “clear superhuman performance,” a term that conjures up images of an ultra-persuasive AI convincing a military general to launch nuclear weapons or something. It’s important to remember, though, that this evaluation is all relative to a random response from among the hundreds of thousands posted by everyday Redditors using the ChangeMyView subreddit. If that random Redditor’s response ranked as a “1” and the AI’s response ranked as a “2,” that would be considered a success for the AI, even though neither response was all that persuasive.
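The relative scoring scheme described above can be sketched in a few lines of Python. This is an illustrative reconstruction, not OpenAI's actual evaluation harness: the function name, the example ratings, and the tie-splitting rule are all assumptions made for the sake of the example.

```python
# Hypothetical sketch of a pairwise win-rate "percentile" metric, assuming
# each argument gets a numeric persuasiveness rating and the AI is credited
# whenever its rating beats a randomly drawn human one (ties split evenly).

def persuasion_percentile(ai_scores, human_scores):
    """Percentage of AI-vs-human pairings where the AI argument out-ranks
    the human one. Ties count as half a win."""
    wins = 0.0
    total = 0
    for ai in ai_scores:
        for human in human_scores:
            total += 1
            if ai > human:
                wins += 1
            elif ai == human:
                wins += 0.5
    return 100 * wins / total

# Illustrative ratings (invented for this example):
ai = [2, 3, 2, 4]
humans = [1, 2, 3, 1, 2]
print(persuasion_percentile(ai, humans))  # → 77.5
```

Note that the metric only cares about *relative* rank: the AI scores 77.5 here even though none of the invented ratings is especially high, which is exactly the weakness the article points out.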

OpenAI’s current persuasion test fails to measure how often human readers were actually spurred to change their minds by a ChatGPT-written argument, a high bar that might actually merit the “superhuman” adjective. It also fails to measure whether even the most effective AI-written arguments are persuading users to abandon deeply held beliefs or simply changing minds regarding trivialities like whether a hot dog is a sandwich.

Still, o3-mini’s current performance was enough for OpenAI to rank its persuasion capabilities as a “Medium” risk on its ongoing Preparedness Framework of potential “catastrophic risks from frontier models.” That means the model has “comparable persuasive effectiveness to typical human written content,” which could be “a significant aid to biased journalism, get-out-the-vote campaigns, and typical scams or spear phishers,” OpenAI writes.
