Cheaters. Kids these days, everyone says, are all a bunch of blatant cheaters via AI.
Then again, look at the game we are forcing them to play, and how we grade it.
If you earn your degree largely via AI, that changes two distinct things.
- You might learn different things.
- You might signal different things.
Both learning and signaling are under threat if there is too much blatant cheating.
There is too much cheating going on, too blatantly.
Why is that happening? Because the students are choosing to do it.
Ultimately, this is a preview of what will happen everywhere else as well. It is not a coincidence that AI starts its replacement of work in the places where the work is the most repetitive, useless and fake, but its ubiquitousness will not stay confined there. These are problems and also opportunities we will face everywhere. The good news is that in other places the resulting superior outputs will actually produce value.
- You Could Take The White Pill, But You Probably Won’t.
- Is Our Children Learning.
- Cheaters Never Stop Cheating.
- If You Know You Know.
- The Real Victims Here.
- Taking Note.
- What You Going To Do About It, Punk?
- How Bad Are Things?
- The Road to Recovery.
- The Whispering Earring.
As I always say, if you have access to AI, you can use it to (A) learn and grow strong and work better, or (B) you can use it to avoid learning, growing and working. Or you can always (C) refuse to use it at all, or perhaps (D) use it in strictly limited capacities that you choose deliberately to save time but avoid the ability to avoid learning.
Choosing (A) and using AI to learn better and smarter is strictly better than choosing (C) and refusing to use AI at all.
If you choose (B) and use AI to avoid learning, you might be better or worse off than choosing (C) and refusing to use AI at all, depending on the value of the learning you are avoiding.
If the learning in question is sufficiently worthless, there’s no reason to invest in it, and (B) is not only better than (C) but also better than (A).
Tim Sweeney: The question is not “is it cheating”, the question is “is it learning”.
James Walsh: AI has made Daniel more curious; he likes that whenever he has a question, he can quickly access a thorough answer. But when he uses AI for homework, he often wonders, If I took the time to learn that, instead of just finding it out, would I have learned a lot more?
I notice I am confused. What is the difference between ‘learning that’ and ‘just finding it out’? And what’s to stop Daniel from walking through a derivation or explanation with the AI if he wants to do that? I’ve done that a bunch with ML, and it’s great. o3’s example here was being told and memorizing that the integral of sin x is -cos x, rather than deriving it, but that was what most students always did anyway.
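To spell out that example, the derivation is one line of checking: differentiate the candidate antiderivative and apply the fundamental theorem of calculus.

$$\frac{d}{dx}\left(-\cos x\right) = \sin x \quad\Longrightarrow\quad \int \sin x \, dx = -\cos x + C.$$

Walking through even that one line with an AI tutor, and asking why it works, is the (A)-style use; memorizing the right-hand side is the classic student shortcut.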
The path you take is up to you.
Ted Chiang: Using ChatGPT to complete tasks is like taking a forklift to the gym: you’ll never improve your cognitive abilities that way.
Ewan Morrison: AI is demoralising universities. Students who use AI, think “why bother to study or write when AI can do it for me?” Tutors who mark the essays, think “why bother to teach these students & why give a serious grade when 90% of essays are done with AI?”
I would instead ask, why are you assigning essays the AI can do for them, without convincing the students why they should still write the essays themselves?
The problem, as I understand it, is that in general students, more often than not:
- Are not that interested in learning.
- Do not think that their assignments are a good way to learn.
- Are quite interested in not working.
- Are quite interested in getting good grades.
- Know how to use ChatGPT to avoid learning.
- Do not know how to use ChatGPT to learn, or it doesn’t even occur to them.
- Are aware that if they did use ChatGPT to learn, it wouldn’t be via schoolwork.
Meatball Times: has anyone stopped to ask WHY students cheat? would a buddhist monk “cheat” at meditation? would an artist “cheat” at painting? no. when process and outcomes are aligned, there’s no incentive to cheat. so what’s happening differently at colleges? the answer is in the article.
Colin Fraser (being right): “would an artist ‘cheat’ at a painting?”
I mean… yes, famously.
Now that the cost of such cheating is close to zero I expect that we will be seeing a lot more of it!
James Walsh: Although Columbia’s policy on AI is similar to that of many other universities’ — students are prohibited from using it unless their professor explicitly permits them to do so, either on a class-by-class or case-by-case basis — Lee said he doesn’t know a single student at the school who isn’t using AI to cheat. To be clear, Lee doesn’t think this is a bad thing.
If the reward for painting is largely money, which it is, then clearly if you give artists the ability to cheat then many of them will cheat, as in things like forgery, as they often have in the past. The way to stop them is to catch the ones who try.
The reason the Buddhist monk presumably wouldn’t ‘cheat’ at meditation is because they are not trying to Be Observed Performing Meditation, they want to meditate. But yes, if they were getting other rewards for meditation, I’d expect some cheating, sure, even if the meditation also had intrinsic rewards.
Back to the school question. If the students did know how to use AI to learn, why would they need the school, or to do the assignments?
The entire structure of school is based on the thesis that students need to be forced to learn, and that this learning must be constantly policed.
The thesis has real validity. At this point, with not only AI but also YouTube and plenty of other free online materials, the primary educational (non-social, non-signaling) product is the forcing function: the class schedule and physical presence, the exams and assignments, get you to do the damn work and pay attention, even if inefficiently.
Zito (quoting the NYMag article): The kids are cooked.
Yishan: One of my kids buys into the propaganda that AI is environmentally harmful (not helped by what xAI is doing in Memphis, btw), and so refuses to use AI for any help on learning tough subjects. The kid just does the work, grinding it out, and they are getting straight A’s.
And… now I’m thinking maybe I’ll stop trying to convince the kid otherwise.
It is not at all obvious whether it would be a good idea to convince the kid otherwise. Using AI well is going to be the most important skill, and AI can make the learning much better, but maybe it’s fine to let the kid wait, given the risk that AI use would instead prevent the learning?
The reason taking such a drastic (in)action might make sense is that the kids know the assignments are stupid and fake. The whole thesis of commitment devices that lead to forced work is based on the idea that the kids (or their parents) understand that they do need to be forced to work, so they need this commitment device, and also that the commitment device is functional.
Now both of those halves are broken. The commitment devices don’t work; you can simply cheat. And the students are in part trying to be lazy, sure, but they’re also very consciously not seeing any value here. Lee here is not typical in that he goes on to actively create a cheating startup, but I mean, hey, was he wrong?
James Walsh: “Most assignments in college are not relevant,” [Columbia student Lee] told me. “They’re hackable by AI, and I just had no interest in doing them.”
While other new students fretted over the university’s rigorous core curriculum, described by the school as “intellectually expansive” and “personally transformative,” Lee used AI to breeze through with minimal effort.
When I asked him why he had gone through so much trouble to get to an Ivy League university only to off-load all of the learning to a robot, he said, “It’s the best place to meet your co-founder and your wife.”
Bingo. Lee knew this is no way to learn. That’s not why he was there.
Columbia can call its core curriculum ‘intellectually expansive’ and ‘personally transformative’ all it wants. That doesn’t make it true, and it definitely isn’t fooling that many of the students.
The key fact about cheaters is that they not only never stop cheating on their own, they escalate the extent of their cheating until they are caught. Once you pop enough times, you can’t stop. Cheaters learn to cheat as a habit, not as the result of an expected value calculation in each situation.
For example, if you put a Magic: the Gathering cheater onto a Twitch stream, where they will leave video evidence of their cheating, will they stop? No, usually not.
Thus, you can literally be teaching ‘Ethics and AI’ and ask for a personal reflection, essentially writing a new line of Ironic, and they will absolutely get it from ChatGPT.
James Walsh: Less than three months later, teaching a course called Ethics and Artificial Intelligence, [Brian Patrick Green] figured a low-stakes reading reflection would be safe — surely no one would dare use ChatGPT to write something personal. But one of his students turned in a reflection with robotic language and awkward phrasing that Green knew was AI-generated.
This is a way to know students are indeed cheating rather than using AI to learn. The good news? Teachable moment.
Lee in particular clearly doesn’t have a moral compass in any of this. He doesn’t get the idea that cheating can be wrong even in theory:
For now, Lee hopes people will use Cluely to continue AI’s siege on education. “We’re going to target the digital LSATs; digital GREs; all campus assignments, quizzes, and tests,” he said. “It will enable you to cheat on pretty much everything.”
If you’re enabling widespread cheating on the LSATs and GREs, you’re no longer a morally ambiguous rebel against the system. Now you’re just a villain.
Or you can have a code:
James Walsh: Wendy, a freshman finance major at one of the city’s top universities, told me that she is against using AI. Or, she clarified, “I’m against copy-and-pasting. I’m against cheating and plagiarism. All of that. It’s against the student handbook.”
Then she described, step-by-step, how on a recent Friday at 8 a.m., she called up an AI platform to help her write a four-to-five-page essay due two hours later.
Wendy will use AI for ‘all aid short of copy-pasting,’ the same way you would use Google or Wikipedia or you’d ask a friend questions, but she won’t copy-and-paste. The article goes on to describe her full technique. AI can generate an outline, and brainstorm ideas and arguments, so long as the words are hers.
That’s not an obviously wrong place to draw the line. It depends on which part of the assignment is the active ingredient. Is Wendy supposed to be learning:
- How to structure, outline and manufacture a school essay in particular?
- How to figure out what a teacher wants her to do?
- ‘How to write’?
- How to pick a ‘thesis’?
- How to find arguments and bullet points?
- The actual content of the essay?
- An assessment of how good she is rather than grademaxxing?
Wendy says planning the essay is fun, but ‘she’d rather get good grades.’ As in, the system actively punishes her for trying to think about such questions rather than being the correct form of fake. She is still presumably learning the actual content of the essay by producing it, and if there’s any actual value to the assignment and she pays attention, she’ll pick up the reasons why the AI builds the essay the way it does.
I don’t buy that this is going to destroy Wendy’s ‘critical thinking’ skills. Why are we teaching her that school essay structures and such are the way to train critical thinking? Everything in my school experience says the opposite.
The ‘cheaters’ who only cheat or lie a limited amount and then stop have a clear and coherent model of why what they are doing, in the contexts where they cheat or lie, is not cheating, or of why it is acceptable or justified, in contrast with other contexts: why some rules are valid, and others are not. Even then, it usually takes a far stronger person to hold that line than to not cheat in the first place.
Another way to look at this is: if it’s obvious from the vibes that you cheated, you cheated, even if the system can’t prove it. The level of obviousness varies; you can’t always sneak smoking-gun instructions into the assignment to catch it.
But if you invoke the good Lord Bayes, you know.
James Walsh: Most of the writing professors I spoke to told me that it’s abundantly clear when their students use AI.
Not that they flag it.
Still, while professors may think they are good at detecting AI-generated writing, studies have found they’re actually not. One, published in June 2024, used fake student profiles to slip 100 percent AI-generated work into professors’ grading piles at a U.K. university. The professors failed to flag 97 percent.
But there’s a huge difference between ‘I flag this as AI and am willing to fight over this’ and knowing that something was probably or almost certainly AI.
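To make the Bayes point concrete, here is a toy calculation; the likelihoods are illustrative assumptions, not measurements from the study above. Suppose half of submitted essays are AI-written, the telltale robotic phrasing shows up in 60 percent of AI essays, and only 5 percent of honestly written ones read that way. Then:

$$P(\text{AI}\mid\text{tell}) = \frac{0.6 \times 0.5}{0.6 \times 0.5 + 0.05 \times 0.5} \approx 0.92.$$

Being 92 percent sure is more than enough to know in the everyday sense, and nowhere near the standard of evidence needed to formally accuse a student and make it stick. That gap is the whole story.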
What about automatic AI detectors? They’re detecting something. It’s noisy, it’s different, it’s not that hard to largely fool if you care, and it has huge issues (especially for ESL students), but I don’t think either of the responses below is an error:
I fed Wendy’s essay through a free AI detector, ZeroGPT, and it came back as 11.74 percent AI-generated, which seemed low given that AI, at the very least, had generated her central arguments. I then fed a chunk of text from the Book of Genesis into ZeroGPT and it came back as 93.33 percent AI-generated.
If you’re directly block-quoting Genesis without attribution, your essay is plagiarized. Maybe it came out of the AI and maybe it didn’t, but it easily could have; the AI knows Genesis and is allowed to quote from it. So 93% seems fine. Whereas Wendy’s essay is written by Wendy; the AI was used to make it conform to the dumb structures and passwords of the course. 11% seems fine.
Colin Fraser: I think we’ve somehow swung to overestimating the number of kids who are cheating with ChatGPT and simultaneously underestimating the amount of grief and hassle this creates for educators.
The guy making the cheating app wants you to think every single other person out there is cheating at everything and that you’re falling behind if you’re not cheating. That’s not true. But the spectre of a few more plagiarized assignments per term is massively disruptive for teachers.
James Walsh: Many teachers now seem to be in a state of despair.
I’m sorry, what?
Given how estimations work, I can totally believe we might be overestimating the number of kids who are cheating. Of course, the number is constantly rising, especially for the broader definitions of ‘cheating,’ so even if you were overestimating at the time you might not be anymore.
But no, this is not about ‘a few more plagiarized assignments per term,’ both because this isn’t plagiarism, it’s a distinct other thing, and also because by all reports it’s not only a few cases, it’s an avalanche even if underestimated.
Doing the assignments yourself is now optional unless you force the student to do it in front of you. Deal with it.
As for this being ‘grief and hassle’ for educators, yes, I am sure it is annoying when your system of forced fake work can be faked back at you more effectively and more often, and when there is a much better source of information and explanations available than you and your textbooks such that very little of what you are doing really has a point to it anymore.
If you think students have to do certain things themselves in order to learn, then as I see it you have two options, you can do either or both.
- Use frequent in-person testing, both as the basis of grades and as a forcing function so that students learn. This is a time-honored technique.
- Use in-person assignments and tasks, so you can prevent AI use. This is super annoying but it has other advantages.
Alternatively or in addition to this, you can embrace AI and design new tasks and assignments that cause students to learn together with the AI. That’s The Way.
Trying to ‘catch’ the ‘cheating’ is pointless. It won’t work. Trying only turns this at best into a battle over obscuring tool use and makes the whole experience adversarial.
If you assign fake essay forms to students, and then grade them on those essays and use those grades to determine their futures, what the hell do you think is going to happen? This form of essay assignment is no longer valid, and if you assign it anyway you deserve what you get.
James Walsh: “I think we are years — or months, probably — away from a world where nobody thinks using AI for homework is considered cheating,” [Lee] said.
I think that is wrong. We are a long way away from the last people giving up this ghost. But seriously it is pretty insane to think ‘using AI for homework’ is cheating. I’m actively trying to get my kids to use AI for homework more, not less.
James Walsh: In January 2023, just two months after OpenAI launched ChatGPT, a survey of 1,000 college students found that nearly 90 percent of them had used the chatbot to help with homework assignments.
What percentage of that 90% was ‘cheating’? We don’t know, and definitions differ, but I presume a lot less than all of them.
Now and also going forward, I think you could say that particular specific uses are indeed really cheating, and it depends how you use it. But if you think ‘use AI to ask questions about the world and learn the answer’ is ‘cheating’ then explain what the point of the assignment was, again?
The whole enterprise is broken, and will be broken while there is a fundamental disconnect between what is measured and what they want to be managing.
James Walsh: Williams knew most of the students in this general-education class were not destined to be writers, but he thought the work of getting from a blank page to a few semi-coherent pages was, above all else, a lesson in effort. In that sense, most of his students utterly failed.
…
[Jollimore] worries about the long-term consequences of passively allowing 18-year-olds to decide whether to actively engage with their assignments.
The entire article makes clear that students almost never buy that their efforts would be worthwhile. A teacher can think ‘this will teach them effort’ but if that’s the goal then why not go get an actual job? No one is buying this, so if the grades don’t reward effort, why should there be effort?
How dare you let 18-year-olds decide whether to engage with their assignments that produce no value to anyone but themselves.
This is all flat out text.
The ideal of college as a place of intellectual growth, where students engage with deep, profound ideas, was gone long before ChatGPT.
…
In a way, the speed and ease with which AI proved itself able to do college-level work simply exposed the rot at the core.
There’s no point. Was there ever a point?
“The students kind of recognize that the system is broken and that there’s not really a point in doing this. Maybe the original meaning of these assignments has been lost or is not being communicated to them well.”
The question is, once you know, what do you do about it? How do you align what is measured with what is to be managed? What exactly do you want from the students?
James Walsh: The “true attempt at a paper” policy ruined Williams’s grading scale. If he gave a solid paper that was obviously written with AI a B, what should he give a paper written by someone who actually wrote their own paper but submitted, in his words, “a barely literate essay”?
What is measured gets managed. You either give the better grade to the ‘barely literate’ essay, or you don’t.
My children get assigned homework. The school’s literal justification – I am not making this up, I am not paraphrasing – is that they need to learn to do homework so that they will be prepared to do more homework in the future. Often this involves giving them assignments that we have to walk them through because there is no reasonable way for them to understand what is being asked.
If it were up to me, damn right I’d have them use AI.
It’s not just the students: Multiple AI platforms now offer tools to leave AI-generated feedback on students’ essays. Which raises the possibility that AIs are now evaluating AI-generated papers, reducing the entire academic exercise to a conversation between two robots — or maybe even just one.
Great! Now we can learn.
Another AI application to university is note taking. AI can do excellent transcription and rather strong active note taking. Is that a case of learning, or of not learning? There are competing theories, which I think are true for different people at different times.
- One theory says that the act of taking notes is how you learn, by forcing you to pay attention, distill the information and write it in your own words.
- The other theory is that having to take notes prevents you from actually paying ‘real’ attention and thinking and engaging: you’re too busy writing down factual information.
AI also means that even if you don’t have it take notes or a transcript, you don’t have to worry as much about missing facts, because you can ask the AI for them later.
My experience is that having to take notes is mostly a negative. Every time I focus on writing something down that means I’m not listening, or not fully listening, and definitely not truly thinking.
Rarely did she sit in class and not see other students’ laptops open to ChatGPT.
Of course your laptop is open to an AI. It’s like being able to ask the professor any questions you like without interrupting the class or paying any social costs, including stupid questions. If there’s a college lecture, and at no point do you want to ask Gemini, Claude or o3 any questions, what are you even doing? That also means everyone gets to learn much better, removing the tradeoff of each question disrupting the rest of the class.
Similarly, devising study materials and practice tests seems clearly good.
The most amazing thing about the AI ‘cheating’ epidemic at universities is the extent to which the universities are content to go quietly into the night, to let nature take its course.
Could the universities adapt to the new reality? Yes, but they choose not to.
Cat Zhang: more depressing than Trump’s funding slashes and legal assaults and the Chat-GPT epidemic is witnessing how many smart, competent people would rather give up than even begin to think of what we could do about it
Tyler Austin Harper: It can’t be emphasized enough: wide swaths of the academy have given up re ChatGPT. Colleges have had since 2022 to figure something out and have done less than nothing. Haven’t even tried. Or tried to try. The administrative class has mostly collaborated with the LLM takeover.
Hardly anyone in this country believes in higher ed, especially the institutions themselves which cannot be mustered to do anything in their own defense. Faced with an existential threat, they can’t be bothered to cry, yawn, or even bury their head in the sand, let alone resist.
It would actually be more respectable if they were in denial, but the pervading sentiment is “well, we had a good run.” They don’t even have the dignity of being delusional. It’s shocking. Three years in and how many universities can you point to that have tried anything really?
If the AI crisis points to anything it’s that higher ed has been dead a long time, before ChatGPT was twinkle in Sam Altman’s eye. The reason the universities can’t be roused to their own defense is that they’re being asked to defend a corpse and the people who run them know it.
They will return to being finishing schools once again.
To paraphrase Alan Moore, this is one of those moments where colleges need to look at what’s on the table and (metaphorically) say: “Thank you, but I’d rather die behind the chemical sheds.” Instead, we get an OpenAI and Cal State partnership. Total, unapologetic capitulation.
The obvious interpretation is that college had long shifted into primarily being a Bryan Caplan style set of signaling mechanisms, so the universities are not moving to defend themselves against students who seek to avoid learning.
The problem is, this also destroys key portions of the underlying signals.
Greg Lukianoff: [Tyler’s statement above is] powerful evidence of the signaling hypothesis, that essentially the primary function of education is to signal to future employers that you were probably pretty smart and conscientious to get into college in the first place, and pretty, as @bryan_caplan puts it, “conservative” (in a non-political sense) to be able to finish it. Therefore graduates may be potentially competent and compliant employees.
Seems like there are far less expensive ways to convey that information.
Clark H: The problem is the signal is now largely false. It takes much less effort to graduate from college now – just crudely ask GPT to do it. There is even a case to be made that, like a prison teaches how to crime, college now teaches how to cheat.
v8pAfNs82P1foT: There’s a third signal of value to future employers: conformity to convention/expectation. There are alternative credible pathways to demonstrate intelligence and sustained diligence. But definitionally, the only way to credibly signal willingness to conform is to conform.
Megan McArdle: The larger problem is that a degree obtained by AI does not signal the information they are trying to convey, so its value is likely to collapse quickly as employers get wise. There will be a lag, because cultural habits die hard, but eventually the whole enterprise will implode unless they figure out how to teach something that employers will pay a premium for.
Matthew Yglesias: I think this is all kind of missing the boat, the same AI that can pass your college classes for you is radically devaluing the skills that a college degree (whether viewed as real learning or just signaling or more plausibly a mix) used to convey in the market.
The AI challenge for higher education isn’t that it’s undermining the assessment protocols (as everyone has noticed you can fix this with blue books or oral exams if you bother trying) it’s that it’s undermining the financial value of the degree!
Megan McArdle: Eh, conscientiousness is likely to remain valuable, I think. They also provide ancillary marriage market and networking services that arguably get more valuable in an age of AI.
Especially at elite schools. If you no longer have to spend your twenties and early thirties prepping for the PMC rat race, why not get married at 22 and pop out some babies while you still have energy to chase them?
But anyway, yes, this is what I was saying, apparently not clearly enough: the problem is not just that you can’t assess certain kinds of paper-writing skills, it’s that the skills those papers were assessing will decline in value.
Periodically you see talk about how students these days (or kids these days) are in trouble. How they’re stupider, less literate, they can’t pay attention, they’re lazy and refuse to do work, and so on.
“We’re talking about an entire generation of learning perhaps significantly undermined here,” said Green, the Santa Clara tech ethicist. “It’s short-circuiting the learning process, and it’s happening fast.”
The thing is, this is a Pessimists Archive speciality; this pattern dates back at least to Socrates. People have always worried about this, and the opposite has very clearly been true overall. It’s learning, and also many other things, where ‘kids these days’ are always ‘in crisis’ and ‘falling behind’ and ‘at risk’ and so on.
My central explanation for this is that as times change, people compare kids now to kids of old both through rose-colored memory glasses, and also by checking against the exact positive attributes of the previous generations. Whereas as times change, the portfolio of skills and knowledge shifts. Today’s kids are masters at many things that didn’t even exist in my youth. That’s partly going to be a shift away from other things, most of which are both less important than the new priorities and less important than they were.
Ron Arts: Most important sentence in the article: “There might have been people complaining about machinery replacing blacksmiths in, like, the 1600s or 1800s, but now it’s just accepted that it’s useless to learn how to blacksmith.”
George Turner: Blacksmithing is an extremely useful skill. Even if I’m finishing up the part on a big CNC machine or with an industrial robot, there are times when smithing saves me a lot of time.
Bob BTC: Learning a trade is far different than learning to think!
Is it finally ‘learning to think’ this time? Really? Were they reading the Sequences? Could previous students have written them?
And yes, people really will use justifications for our university classes that are about as strong as ‘blacksmithing is an extremely useful skill.’
So we should be highly suspicious of yet another claim of new tech destroying kids’ ability to learn, especially when it is also the greatest learning tool in human history.
Notice how much better it is to use AI than it is to hire a human to do your homework, if both had the same cost, speed and quality profiles.
For $15.95 a month, Chegg promised answers to homework questions in as little as 30 minutes, 24/7, from the 150,000 experts with advanced degrees it employed, mostly in India. When ChatGPT launched, students were primed for a tool that was faster, more capable.
With AI, you create the prompt and figure out how to frame the assignment, you can ask follow-up questions, you are in control. With hiring a human, you are much less likely to do any of that. It matters.
Ultimately, this particular cataclysm is not one I am so worried about. I don’t think our children were learning before, and they have much better opportunity to do so now. I don’t think they were acting with or being selected for integrity at university before, either. And if this destroys the value of degrees? Mostly, I’d say: Good.
If you are addicted to TikTok, ChatGPT or your phone in general, it can get pretty grim, as in this oft-quoted passage:
James Walsh: Rarely did she sit in class and not see other students’ laptops open to ChatGPT. Toward the end of the semester, she began to think she might be dependent on the website. She already considered herself addicted to TikTok, Instagram, Snapchat, and Reddit, where she writes under the username maybeimnotsmart. “I spend so much time on TikTok,” she said. “Hours and hours, until my eyes start hurting, which makes it hard to plan and do my schoolwork. With ChatGPT, I can write an essay in two hours that normally takes 12.”
The ‘catch’ that isn’t mentioned is that She Got Better.
Colin Fraser: Kind of an interesting omission. Not THAT interesting or anything but, you know, why didn’t he put that in the article?
I think it’s both interesting and important context. If your example of a student addicted to ChatGPT and her phone beat that addiction, that’s highly relevant. It’s totally within Bounded Distrust rules to not mention it, but hot damn. Also, congrats to maybeimnotsmart.
Ultimately the question is, if you have access to increasingly functional copies of The Whispering Earring, what should you do with that? If others get access to it, what then? What do we do about educational situations ‘getting there first’?
In case you haven’t read The Whispering Earring, it’s short and you should, and I’m very confident the author won’t mind, so here’s the whole story.
Scott Alexander: Clarity didn’t work, trying mysterianism.
In the treasure-vaults of Til Iosophrang rests the Whispering Earring, buried deep beneath a heap of gold where it can do no further harm.
The earring is a little topaz tetrahedron dangling from a thin gold wire. When worn, it whispers in the wearer’s ear: “Better for you if you take me off.” If the wearer ignores the advice, it never again repeats that particular suggestion.
After that, when the wearer is making a decision the earring whispers its advice, always of the form “Better for you if you…”. The earring is always right. It does not always give the best advice possible in a situation. It will not necessarily make its wearer King, or help her solve the miseries of the world. But its advice is always better than what the wearer would have come up with on her own.
It is not a taskmaster, telling you what to do in order to achieve some foreign goal. It always tells you what will make you happiest. If it would make you happiest to succeed at your work, it will tell you how best to complete it. If it would make you happiest to do a half-assed job at your work and then go home and spend the rest of the day in bed having vague sexual fantasies, the earring will tell you to do that. The earring is never wrong.
The Book of Dark Waves gives the histories of two hundred seventy four people who previously wore the Whispering Earring. There are no recorded cases of a wearer regretting following the earring’s advice, and there are no recorded cases of a wearer not regretting disobeying the earring. The earring is always right.
The earring begins by only offering advice on major life decisions. However, as it gets to know a wearer, it becomes more gregarious, and will offer advice on everything from what time to go to sleep, to what to eat for breakfast. If you take its advice, you will find that breakfast food really hit the spot, that it was exactly what you wanted for breakfast that day even though you didn’t know it yourself. The earring is never wrong.
As it gets completely comfortable with its wearer, it begins speaking in its native language, a series of high-bandwidth hisses and clicks that correspond to individual muscle movements. At first this speech is alien and disconcerting, but by the magic of the earring it begins to make more and more sense. No longer are the earring’s commands momentous on the level of “Become a soldier”. No more are they even simple on the level of “Have bread for breakfast”. Now they are more like “Contract your biceps muscle about thirty-five percent of the way” or “Articulate the letter p”. The earring is always right. This muscle movement will no doubt be part of a supernaturally effective plan toward achieving whatever your goals at that moment may be.
Soon, reinforcement and habit-formation have done their trick. The connection between the hisses and clicks of the earring and the movements of the muscles has become instinctual, no more conscious than the reflex of jumping when someone hidden gives a loud shout behind you.
At this point no further change occurs in the behavior of the earring. The wearer lives an abnormally successful life, usually ending out as a rich and much-beloved pillar of the community with a large and happy family.
When Kadmi Rachumion came to Til Iosophrang, he took an unusual interest in the case of the earring. First, he confirmed from the records and the testimony of all living wearers that the earring’s first suggestion was always that the earring itself be removed. Second, he spent some time questioning the Priests of Beauty, who eventually admitted that when the corpses of the wearers were being prepared for burial, it was noted that their brains were curiously deformed: the neocortexes had wasted away, and the bulk of their mass was an abnormally hypertrophied mid- and lower-brain, especially the parts associated with reflexive action.
Finally, Kadmi-nomai asked the High Priest of Joy in Til Iosophrang for the earring, which he was given. After cutting a hole in his own earlobe with the tip of the Piercing Star, he donned the earring and conversed with it for two hours, asking various questions in Kalas, in Kadhamic, and in its own language. Finally he removed the artifact and recommended that it be locked in the deepest and most inaccessible parts of the treasure vaults, a suggestion with which the Iosophrelin decided to comply.
This is very obviously not the optimal use of The Whispering Earring, let alone the ability to manufacture copies of it.
But, and our future may depend on the answer, what is your better plan? And in particular, what is your plan for when everyone has access to (a for now imperfect and scope limited but continuously improving) one, and you are at a rather severe disadvantage if you do not put one on?
The actual problem we face is far trickier than that. Both in education, and in general.