Author name: Mike M.

heartbreaking-video-shows-deadly-risk-of-skipping-measles-vaccine

Heartbreaking video shows deadly risk of skipping measles vaccine

Once SSPE develops, it moves through progressive stages, starting with mood swings, personality changes, depression, lethargy, and possibly fever and headache. This first stage can last up to six months. Then stage two involves jerking movement, spasms, loss of vision, dementia, and seizures. The third stage sees the jerking turn to writhing and rigidity. In the last stage, autonomic failure sets in—heart rate, blood pressure, and breathing become unregulated. Then comes coma and death. About 95 percent of SSPE cases are fatal.

Tragic ending

In the boy’s case, his parents don’t know when he was infected with measles. When doctors saw him, his parents recalled that in the prior six months, he had started having jerky movements, falls, and progressive cognitive decline. Before that, he had been healthy at birth and had been hitting all of his developmental milestones.

In some ways, his decline was an unmistakable case of SSPE. Imaging showed lesions in his brain. He had elevated anti-measles antibodies in his cerebrospinal fluid. An electroencephalography (EEG) showed brain waves consistent with SSPE. Then, of course, there were the jerking motions and the cognitive decline.

What stood out, though, was his rolling and swirling eyes. Vision problems are not uncommon with SSPE—sometimes the condition damages the retina and/or optic nerve. Some patients develop complete vision loss. But, in the boy’s case, he developed rapid, repetitive, erratic, multidirectional eye movements, a condition called opsoclonus. Doctors often see it in brain cancer patients, but brain inflammation from some infections can also cause the movements. Experts hypothesize that the root cause is a loss of specialized neurons involved in coordinated movement, namely Purkinje cells and omnipause cells.

The boy’s neurologists believe this is the first time opsoclonus associated with SSPE has been caught on video. They treated the boy with an antiviral drug and drugs to reduce convulsions, but his condition continued to worsen.

Heartbreaking video shows deadly risk of skipping measles vaccine Read More »

hundreds-of-e-commerce-sites-hacked-in-supply-chain-attack

Hundreds of e-commerce sites hacked in supply-chain attack

Hundreds of e-commerce sites, at least one owned by a large multinational company, were backdoored by malware that executes malicious code inside the browsers of visitors, where it can steal payment card information and other sensitive data, security researchers said Monday.

The infections are the result of a supply-chain attack that compromised at least three software providers with malware that remained dormant for six years and became active only in the last few weeks. At least 500 e-commerce sites that rely on the backdoored software were infected, and it’s possible that the true number is double that, researchers from security firm Sansec said.

Among the compromised customers was a $40 billion multinational company, which Sansec didn’t name. In an email Monday, a Sansec representative said that “global remediation [on the infected customers] remains limited.”

Code execution on visitors’ machines

The supply chain attack poses a significant risk to the thousands or millions of people visiting the infected sites, because it allows attackers to execute code of their choice on ecommerce site servers. From there, the servers run info-stealing code on visitor machines.

“Since the backdoor allows uploading and executing arbitrary PHP code, the attackers have full remote code execution (RCE) and can do essentially anything they want,” the representative wrote. “In nearly all Adobe Commerce/Magento breaches we observe, the backdoor is then used to inject skimming software that runs in the user’s browser and steals payment information (Magecart).”

The three software suppliers identified by Sansec were Tigren, Magesolution (MGS), and Meetanshi. All three supply software that’s based on Magento, an open source e-commerce platform used by thousands of online stores. A software version sold by a fourth provider named Weltpixel has been infected with similar code on some of its customers’ stores, but Sansec so far has been unable to confirm whether it was the stores or Weltpixel that were hacked. Adobe has owned Megento since 2018.

Hundreds of e-commerce sites hacked in supply-chain attack Read More »

spacex-pushed-“sniper”-theory-with-the-feds-far-more-than-is-publicly-known

SpaceX pushed “sniper” theory with the feds far more than is publicly known


“It came out of nowhere, and it was really violent.”

The Amos 6 satellite is lost atop a Falcon 9 rocket. Credit: USLaunchReport

The Amos 6 satellite is lost atop a Falcon 9 rocket. Credit: USLaunchReport

The rocket was there. And then it decidedly was not.

Shortly after sunrise on a late summer morning nearly nine years ago at SpaceX’s sole operational launch pad, engineers neared the end of a static fire test. These were still early days for their operation of a Falcon 9 rocket that used super-chilled liquid propellants, and engineers pressed to see how quickly they could complete fueling. This was because the liquid oxygen and kerosene fuel warmed quickly in Florida’s sultry air, and cold propellants were essential to maximizing the rocket’s performance.

On this morning, September 1, 2016, everything proceeded more or less nominally up until eight minutes before the ignition of the rocket’s nine Merlin engines. It was a stable point in the countdown, so no one expected what happened next.

“I saw the first explosion,” John Muratore, launch director for the mission, told me. “It came out of nowhere, and it was really violent. I swear, that explosion must have taken an hour. It felt like an hour. But it was only a few seconds. The second stage exploded in this huge ball of fire, and then the payload kind of teetered on top of the transporter erector. And then it took a swan dive off the top rails, dove down, and hit the ground. And then it exploded.”

The dramatic loss of the Falcon 9 rocket and its Amos-6 satellite, captured on video by a commercial photographer, came at a pivotal moment for SpaceX and the broader commercial space industry. It was SpaceX’s second rocket failure in a little more than a year, and it occurred as NASA was betting heavily on the company to carry its astronauts to orbit. SpaceX was not the behemoth it is today, a company valued at $350 billion. It remained vulnerable to the vicissitudes of the launch industry. This violent failure shook everyone, from the engineers in Florida to satellite launch customers to the suits at NASA headquarters in Washington, DC.

As part of my book on the Falcon 9 and Dragon years at SpaceX, Reentry, I reported deeply on the loss of the Amos-6 mission. In the weeks afterward, the greatest mystery was what had precipitated the accident. It was understood that a pressurized helium tank inside the upper stage had ruptured. But why? No major parts on the rocket were moving at the time of the failure. It was, for all intents and purposes, akin to an automobile idling in a driveway with half a tank of gasoline. And then it exploded.

This failure gave rise to one of the oddest—but also strangely compelling—stories of the 2010s in spaceflight. And we’re still learning new things today.

The “sniper” theory

The lack of a concrete explanation for the failure led SpaceX engineers to pursue hundreds of theories. One was the possibility that an outside “sniper” had shot the rocket. This theory appealed to SpaceX founder Elon Musk, who was asleep at his home in California when the rocket exploded. Within hours of hearing about the failure, Musk gravitated toward the simple answer of a projectile being shot through the rocket.

This is not as crazy as it sounds, and other engineers at SpaceX aside from Musk entertained the possibility, as some circumstantial evidence to support the notion of an outside actor existed. Most notably, the first rupture in the rocket occurred about 200 feet above the ground, on the side of the vehicle facing the southwest. In this direction, about one mile away, lay a building leased by SpaceX’s main competitor in launch, United Launch Alliance. A separate video indicated a flash on the roof of this building, now known as the Spaceflight Processing Operations Center. The timing of this flash matched the interval it would take a projectile to travel from the building to the rocket.

A sniper on the roof of a competitor’s building—forget the Right Stuff, this was the stuff of a Mission: Impossible or James Bond movie.

At Musk’s direction, SpaceX worked this theory both internally and externally. Within the company, engineers and technicians actually took pressurized tanks that stored helium—one of these had burst, leading to the explosion—and shot at them in Texas to determine whether they would explode and what the result looked like. Externally, they sent the site director for their Florida operations, Ricky Lim, to inquire whether he might visit the roof of the United Launch Alliance building.

SpaceX pursued the sniper theory for more than a month. A few SpaceX employees told me that they did not stop this line of inquiry until the Federal Aviation Administration sent the company a letter definitively saying that there was no gunman involved. It would be interesting to see this letter, so I submitted a Freedom of Information Act request to the FAA in the spring of 2023. Because the federal FOIA process moves slowly, I did not expect to receive a response in time for the book. But it was worth a try anyway.

No reply came in 2023 or early 2024, when the final version of my book was due to my editor. Reentry was published last September, and still nothing. However, last week, to my great surprise and delight, I got a response from the FAA. It was the very letter I requested, sent from the FAA to Tim Hughes, the general counsel of SpaceX, on October 13, 2016. And yes, the letter says there was no gunman involved.

However, there were other things I did not know—namely, that the FBI had also investigated the incident.

The ULA rivalry

One of the most compelling elements of this story is that it involves SpaceX’s heated rival, United Launch Alliance. For a long time, ULA had the upper hand, but in recent years, it has taken a dramatic turn. Now we know that David would grow up and slay Goliath: Between the final rocket ULA launched last year (the Vulcan test flight on October 4) and the first rocket the company launched this year (Atlas V, April 28), SpaceX launched 90 rockets.

Ninety.

But it was a different story in the summer of 2016 in the months leading up to the Amos 6 failure. Back then, ULA was launching about 15 rockets a year, compared to SpaceX’s five. And ULA was launching all of the important science missions for NASA and the critical spy satellites for the US military. They were the big dog, SpaceX the pup.

In the early days of the Falcon 9 rocket, some ULA employees would drive to where SpaceX was working on the first booster and jeer at their efforts. And rivalry played out not just on the launch pad but in courtrooms and on Capitol Hill. After ULA won an $11 billion block buy contract from the US Air Force to launch high-value military payloads into the early 2020s, Musk sued in April 2014. He alleged that the contract had been awarded without a fair competition and said the Falcon 9 rocket could launch the missions at a substantially lower price. Taxpayers, he argued, were being taken for a ride.

Eventually, SpaceX and the Air Force resolved their claims. The Air Force agreed to open some of its previously awarded national security missions to competitive bids. Over time, SpaceX has overtaken ULA even in this arena. During the most recent round of awards, SpaceX won 60 percent of the contracts compared to ULA’s 40 percent.

So when SpaceX raised the possibility of a ULA sniper, it came at an incendiary moment in the rivalry, when SpaceX was finally putting forth a very serious challenge to ULA’s dominance and monopoly.

It is no surprise, therefore, that ULA told SpaceX’s Ricky Lim to get lost when he wanted to see the roof of their building in Florida.

“Hair-on-fire stuff”

NASA officials were also deeply concerned by the loss of the Falcon 9 rocket in September 2016.

The space agency spent much of the 2010s working with SpaceX and Boeing to develop, test, and fly spacecraft that could fly humans into space. These were difficult years for the space agency, which had to rely on Russia to get its astronauts into space. NASA also had a challenging time balancing costs with astronaut safety. Then rockets started blowing up.

Consider this sequence from mid-2015 to mid-2016. In June 2015, the second stage of a Falcon 9 rocket carrying a cargo version of the Dragon spacecraft into orbit exploded. Less than two weeks later, NASA named four astronauts to its “commercial crew” cadre from which the initial pilots of Dragon and Starliner spacecraft would be selected. Finally, a little more than a year after this, a second Falcon 9 rocket upper stage exploded during flight.

Video of CRS-7 launch and failure.

Even as it was losing Falcon 9 rockets, SpaceX revealed that it intended to upend NASA’s long-standing practice of fueling a rocket and then, when the vehicle reached a stable condition, putting crew on board. Rather, SpaceX said it would put the astronauts on board before fueling. This process became known as “load and go.”

NASA’s safety community went nuts.

“When SpaceX came to us and said we want to load the crew first and then the propellant, mushroom clouds went off in our safety community,” Phil McAlister, the head of NASA’s commercial programs, told me for Reentry. “I mean, hair-on-fire stuff. It was just conventional wisdom that you load the propellant first and get it thermally stable. Fueling is a very dynamic operation. The vehicle is popping and hissing. The safety community was adamantly against this.”

Amos-6 compounded these concerns. That’s because the rocket was not shot by a sniper. After months of painful investigation and analysis, engineers determined the rocket was lost due to the propellant-loading process. In their goal of rapidly fueling the Falcon 9 rocket, the SpaceX teams had filled the pressurized helium tanks too quickly, heating the aluminum liner and causing it to buckle. In their haste to load super-chilled propellant onto the Falcon 9, SpaceX had found its speed limit.

At NASA, it was not difficult to visualize astronauts in a Dragon capsule sitting atop an exploding rocket during propellant loading rather than a commercial satellite.

Enter the FBI

We should stop and appreciate the crucible that SpaceX engineers and technicians endured in the fall of 2016. They were simultaneously attempting to tease out the physics of a fiendishly complex failure; prove to NASA their exploding rocket was safe; convince safety officials that even though they had just blown up their rocket by fueling it too quickly, load-and-go was feasible for astronaut missions; increase the cadence of Falcon 9 missions to catch and surpass ULA; and, oh yes, gently explain to the boss that a sniper had not shot their rocket.

So there had to be some relief when, on October 13, Hughes received that letter from Dr. Michael C. Romanowski, director of Commercial Space Integration at the FAA.

According to this letter (see a copy here), three weeks after the launch pad explosion, SpaceX submitted “video and audio” along with its analysis of the failure to the FAA. “SpaceX suggested that in the company’s view, this information and data could be indicative of sabotage or criminal activity associated with the on-pad explosion of SpaceX’s Falcon 9,” the letter states.

This is notable because it suggests that Musk directed SpaceX to elevate the “sniper” theory to the point that the FAA should take it seriously. But there was more. According to the letter, SpaceX reported the same data and analysis to the Federal Bureau of Investigation in Florida.

After this, the Tampa Field Office of the FBI and its Criminal Investigative Division in Washington, DC, looked into the matter. And what did they find? Nothing, apparently.

“The FBI has informed us that based upon a thorough and coordinated review by the appropriate Federal criminal and security investigative authorities, there were no indications to suggest that sabotage or any other criminal activity played a role in the September 1 Falcon 9 explosion,” Romanowski wrote. “As a result, the FAA considers this matter closed.”

The failure of the Amos-6 mission would turn out to be a low point for SpaceX. For a few weeks, there were non-trivial questions about the company’s financial viability. But soon, SpaceX would come roaring back. In 2017, the Falcon 9 rocket launched a record 18 times, surpassing ULA for the first time. The gap would only widen. Last year, SpaceX launched 137 rockets to ULA’s five.

With Amos-6, therefore, SpaceX lost the battle. But it would eventually win the war—without anyone firing a shot.

Photo of Eric Berger

Eric Berger is the senior space editor at Ars Technica, covering everything from astronomy to private space to NASA policy, and author of two books: Liftoff, about the rise of SpaceX; and Reentry, on the development of the Falcon 9 rocket and Dragon. A certified meteorologist, Eric lives in Houston.

SpaceX pushed “sniper” theory with the feds far more than is publicly known Read More »

chips-aren’t-improving-like-they-used-to,-and-it’s-killing-game-console-price-cuts

Chips aren’t improving like they used to, and it’s killing game console price cuts

Consider the PlayStation 2. Not all of the PS2 Slim’s streamlining came from chip improvements—it also shed a full-sized 3.5-inch hard drive bay and a little-used IEEE 1394 port, and initially required an external power brick. But shrinking and consolidating the console’s CPU, GPU, memory, and other components took the console from its original design in 2000, to the Slim in 2004, to an even lighter and lower-power version of the Slim that returned to using an internal power supply without increasing the size of the console at all.

Over that same span, the console’s price dropped frequently and significantly, from $299 at launch to just $129 by 2006 (the price was lowered again to $99 in 2009, deep into the PS3 era).

Or look at Microsoft’s Xbox 360. Its external design didn’t change as much over the years—the mid-generation “slim” refresh was actually only a little smaller than the original. But between late 2005 and early 2010, the CPU, GPU, and the GPU’s high-speed eDRAM memory chip went from being built on a 90 nm process, to 80 nm, to 65 nm, and finally to a single 45 nm chip that combined the CPU and GPU into one.

Over that time, the system’s power supply fell from 203 W to 133 W, and the base price fell from $300 to $200. The mid-generation 65nm refresh also substantially fixed the early consoles’ endemic “red ring of death” issue, which was caused in part by the heat that the older, larger chips generated.

As you can see when comparing these various consoles’ external and internal design revisions, shrinking the chips had a cascade of other beneficial and cost-lowering effects: smaller power supplies, smaller enclosures that use less metal and plastic, smaller heatsinks and cooling assemblies, and smaller and less complicated motherboard designs.

Sony’s original PS2 on the left, and the PS2 Slim revision on the right. Sony jettisoned a few things to make the console smaller, but chip improvements were also instrumental. Credit: Evan Amos

A slowdown of that progression was already evident when we hit the PlayStation 4/Xbox One/Nintendo Switch generation, but technological improvements and pricing reductions still followed familiar patterns. Both the mid-generation PS4 Slim and Xbox One S used a 16 nm processor instead of the original consoles’ 28 nm version, and each also had its price cut by $100 over its lifetime (comparing the Kinect-less Xbox One variant, and excluding the digital-only $249 Xbox One). The Switch’s single die shrink, from 20nm to 16nm, didn’t come with a price cut, but it did improve battery life and help to enable the cheaper Switch Lite variant.

Chips aren’t improving like they used to, and it’s killing game console price cuts Read More »

doge-put-a-college-student-in-charge-of-using-ai-to-rewrite-regulations

DOGE put a college student in charge of using AI to rewrite regulations


The DOGE operative has been tasked with rewrites to the Department of Housing and Urban Development.

A young man with no government experience who has yet to even complete his undergraduate degree is working for Elon Musk’s so-called Department of Government Efficiency (DOGE) at the Department of Housing and Urban Development (HUD) and has been tasked with using artificial intelligence to rewrite the agency’s rules and regulations.

Christopher Sweet was introduced to HUD employees as being originally from San Francisco and, most recently, a third-year student at the University of Chicago, where he was studying economics and data science, in an email sent to staffers earlier this month.

“I’d like to share with you that Chris Sweet has joined the HUD DOGE team with the title of special assistant, although a better title might be ‘Al computer programming quant analyst,’” Scott Langmack, a DOGE staffer and chief operating officer of an AI real estate company, wrote in an email widely shared within the agency and reviewed by WIRED. “With family roots from Brazil, Chris speaks Portuguese fluently. Please join me in welcoming Chris to HUD!”

Sweet’s primary role appears to be leading an effort to leverage artificial intelligence to review HUD’s regulations, compare them to the laws on which they are based, and identify areas where rules can be relaxed or removed altogether. (He has also been given read access to HUD’s data repository on public housing, known as the Public and Indian Housing Information Center, and its enterprise income verification systems, according to sources within the agency.)

Plans for the industrial-scale deregulation of the US government were laid out in detail in the Project 2025 policy document that the Trump administration has effectively used as a playbook during its first 100 days in power. The document, written by a who’s who of far-right figures, many of whom now hold positions of power within the administration, pushes for deregulation in areas like the environment, food and drug enforcement, and diversity, equity, and inclusion policies.

One area Sweet is focusing on is regulation related to the Office of Public and Indian Housing (PIH), according to sources who spoke to WIRED on the condition of anonymity as they were not authorized to speak to the press.

Sweet—who two sources have been told is the lead on the AI deregulation project for the entire administration—has produced an Excel spreadsheet with around a thousand rows containing areas of policy where the AI tool has flagged that HUD may have “overreached” and suggesting replacement language.

Staffers from PIH are, specifically, asked to review the AI’s recommendations and justify their objections to those they don’t agree with. “It all sounds crazy—having AI recommend revisions to regulations,” one HUD source says. “But I appreciated how much they’re using real people to confirm and make changes.”

Once the PIH team completes the review, their recommendations will be submitted to the Office of the General Counsel for approval.

One HUD source says they were told that the AI model being used for this project is “being refined by our work to be used across the government.” To do this, the source says they were told in a meeting attended by Sweet and Jacob Altik, another known DOGE member who has worked as an attorney at Weil, Gotshal & Manges, that the model will crawl through the Code of Federal Regulations (eCFR).

Another source told WIRED that Sweet has also been using the tool at other parts of HUD. WIRED reviewed a copy of the output of the AI’s review of one HUD department, which features columns displaying text that the AI model found to be needing an adjustment while also including suggestions from the AI for alterations to be made, essentially proposing rewrites. The spreadsheet details how many words can be eliminated from individual regulations and gives a percentage figure indicating how noncompliant the regulations are. It isn’t clear how these percentages are calculated.

Sweet did not respond to requests for comment regarding his work. In response to a request to clarify Sweet’s role at HUD, a spokesperson for the agency said they do not comment on individual personnel. The University of Chicago confirmed to WIRED that Sweet is “on leave from the undergraduate college.”

It’s unclear how Sweet was recruited to DOGE, but a public GitHub account indicates that he was working on this issue even before he joined Musk’s demolition crew.

The “CLSweet” GitHub account, which WIRED has linked to Sweet, created an application that tracks and analyzes federal government regulations “showing how regulatory burden is distributed across government agencies.” The application was last updated in March 2025, weeks before Sweet joined HUD.

One HUD source who heard about Sweet’s possible role in revising the agency’s regulations said the effort was redundant, since the agency was already “put through a multi-year multi-stakeholder meatgrinder before any rule was ever created” under the Administrative Procedure Act. (This law dictates how agencies are allowed to establish regulations and allows for judicial oversight over everything an agency does.)

Another HUD source said Sweet’s title seemed to make little sense. “A programmer and a quantitative data analyst are two very different things,” they noted.

Sweet has virtually no online footprint. One of the only references to him online is a short biography on the website of East Edge Securities, an investment firm Sweet founded in 2023 with two other students from the University of Chicago.

The biography is short on details but claims that Sweet has worked in the past with several private equity firms, including Pertento Partners, which is based in London, and Tenzing Global Investors, based in San Francisco. He is also listed as a board member of Paragon Global Investments, which is a student-run hedge fund.

The biography also mentions that Sweet “will be joining Nexus Point Capital as a private equity summer analyst.” The company has headquarters in Hong Kong and Shanghai and describes itself as “an Asian private equity fund with a strategic focus on control opportunities in the Greater China market.”

East Edge Securities, Pertento Partners, Tenzing Global Investors, Paragon Global Investments, and Nexus Point Capital did not respond to requests for comment.

The only other online account associated with Sweet appears to be a Substack account using the same username as the GitHub account. That account has not posted any content and follows mostly finance and market-related newsletters. It also follows Bari Weiss’ The Free Press and the newsletter of Marc Andreessen, the Silicon Valley billionaire investor and group chat enthusiast who said he spent a lot of time advising Trump and his team after the election.

DOGE representatives have been at HUD since February, when WIRED reported that two of those staffers were given application-level access to some of the most critical and sensitive systems inside the agency.

Earlier this month, US representative Maxine Waters, the top Democrat on the House Financial Services Committee, said DOGE had “infiltrated our nation’s housing agencies, stealing funding Congress provided to communities, illegally terminating staff, including in your districts, and accessing confidential data about people living in assisted housing, including sexual assault survivors.”

This story originally appeared at WIRED.com

Photo of WIRED

Wired.com is your essential daily guide to what’s next, delivering the most original and complete take you’ll find anywhere on innovation’s impact on technology, science, business and culture.

DOGE put a college student in charge of using AI to rewrite regulations Read More »

“older-than-google,”-this-elder-scrolls-wiki-has-been-helping-gamers-for-30-years

“Older than Google,” this Elder Scrolls wiki has been helping gamers for 30 years


Interviewing the people behind the 30-year-old Unofficial Elder Scrolls Pages.

A statue in Oblivion Remastered

The team is still keeping up with new updates, including for Oblivion Remastered. Credit: Kyle Orland

The team is still keeping up with new updates, including for Oblivion Remastered. Credit: Kyle Orland

If at some point over the last 20 years you’ve found yourself in an Internet argument or had a question in your head you just couldn’t seem to get rid of, chances are good that you’ve relied on an online wiki.

And you probably used the online wiki: Wikipedia, the free encyclopedia. But for video games, Wikipedia provides a more general, top-down view, painting in broad strokes what a game is about, how it was made, when it was released, and how it was received by players.

In addition, many games and franchises have their own dedicated wikis that go a step further; these wikis are often part game guide, part lore book, and part historical record.

But what does it take to build a game wiki? Why do people do it? I looked to one of my all-time favorite games for answers.

The Unofficial Elder Scrolls Pages

It had been at least 10 years since I last played The Elder Scrolls IV: Oblivion before this past fall, when I decided somewhat arbitrarily to put another 80-or-so hours into a new save. Rushing through the first few parts of the main questline, it felt like I was visiting home, right up until I was named “Hero of Kvatch.”

Then, though, it quickly began to feel like I was playing the game for the first time, and, to put it mildly, I was getting beaten to a pulp across Cyrodiil.

While it was great to re-explore the game that consumed so many hours of my life and discover again what made the 2006 release an instant classic, I was frustrated that I had forgotten how the game worked.

Without the official manual that came with my now surely sold-to-GameStop Xbox 360 edition of the game or the Official Prima Strategy Guide, I quickly found myself (as countless others do) on The Unofficial Elder Scrolls Pages.

Broadly, UESPWiki is an impressive information repository of The Elder Scrolls franchise. It also documents the dense, often convoluted lore of the franchise, as well as books and merchandise sold alongside the games, and the multiple tabletop games.

The homepage for UESP, with a classic wiki design

The Unofficial Elder Scrolls Pages as they appear today. Credit: Samuel Axon

For all its uniqueness—the sort of early “Web 2.0” design style, limited advertisement space, and its namespace-centered way of organization—the UESP is an independent wiki at its core. It has all the bone structure that makes a wiki accessible and easy to use and is driven by a dedicated community of editors.

The wiki currently maintains over 110,000 articles. The phrase “We have been building a collaborative source for all knowledge on the Elder Scrolls series since 1995” is written at the top of the home page. This year, UESP is celebrating its 30th anniversary.

“The phrase I always say is ‘we’re older than Google,’” said 51-year-old Dave Humphrey, founder of the UESP. “Obviously, we’re not as big or as popular as Google, but we’re older than Google, and we’re older than a lot of websites. In fact, I don’t think there’s any other Elder Scrolls-related website that’s older than us.”

The earliest version of the UESP wasn’t a wiki at all and is just a little older than 30 years. It was a message distributed through USENET called Daggerfall FAQ, originally published in the fall of 1994, and it featured prerelease content about The Elder Scrolls II: Daggerfall.

A year later, the Daggerfall FAQ would become a webpage, and a few months after that, it would become the Unofficial Elder Scrolls Pages, which was just a webpage at the time, to include information about The Elder Scrolls games.

When The Elder Scrolls III: Morrowind released, it was the franchise’s biggest game at that point, and Humphrey quickly became remarkably busy. He wrote hundreds of entries for the game and its two DLCs while maintaining his regular job. But the more he did, the more reader emails suggesting new entries and edits to the site came in. In 2005, UESP officially became a wiki.

“It was too much for me to do as a full-time or second full-time job sort of thing,” Humphrey said. “That’s when I decided instead of having a regular webpage, we’d move to a wiki-based format where instead of people, you know, emailing me, they can edit their own tips.”

In 2012, Humphrey officially made the UESP his full-time job, but he is largely no longer involved in the content side of the wiki. He instead maintains more of an overseer role, doing most of the back-end server maintenance, programming, and cluster design for the site.

What sets UESP apart, at least from Humphrey’s perspective, is the creativity and decision-making capacity derived from its independence. This allows the team to run the ads they choose and implement new utilities like the ESO Build Editor.

“We’ve been asked to join larger wiki farms before, and while it might make sense from a technical standpoint, we would lose a lot of what makes UESP unique and long-lasting,” said Humphrey.

In fact, UESP has been slowly expanding over the years and is starting to host wiki sites beyond The Elder Scrolls. In August 2023, the site launched the Starfield Wiki, which already maintains over 10,000 articles and has unofficially taken over all the construction set wikis for the Elder Scrolls and Fallout franchises. Currently, Humphrey said, UESP is looking at hosting a few more existing game wikis later this year.

As for the Elder Scrolls series itself, The Elder Scrolls VI is still well into the future. But at the time of my interview, the Oblivion remaster was just social media speculation. Still, Humphrey predicted how the game might change the wiki.

“It comes down to the organization of the site. We sort of have to deal with that a bit with DLCs. There’s the base game, and then there’s the DLCs; for the most part, DLCs are their own contained area, but they do modify the base game as well,” said Humphrey.

“We’d probably take a similar approach with it, creating their own namespace underneath Oblivion and putting all the remake information there,” he added.

It’s always a challenge to determine how to organize things like DLCs and remakes into the wiki, Humphrey said. He noted that he would ultimately leave it up to the editors themselves.

Since the release of Oblivion Remastered, the game has, in fact, received its own namespace, and editors are already documenting some of the changes.

A changelog on the wiki

There’s a detailed page listing every known change in the Oblivion remaster. Credit: Samuel Axon

When it was still the Daggerfall FAQ, Humphrey wasn’t thinking about what it would look like in 2025 or how a community could be built around a website; he was simply someone with a passion for the game who liked building things.

But as time went on and Humphrey began attending conventions and Elder Scrolls-related meetups, he started to realize the kind of community that had naturally formed around the site.

“It’s not something I planned on doing, but it’s really neat, and it’s something I’m more aware of now in terms of doing community-related stuff,” Humphrey said.

That community, which has over 23,000 users with at least one edit in its history, measures the success of the wiki not by the quantity of content but the quality of the pages themselves. Humphrey leaves most of the content decisions up to those editors.

Scraping and editing

Robert “RobinHood70” Morley—a 54-year-old native of Ottawa, Canada—has been editing the pages since May 2006, just a short time after UESP turned into a wiki, while playing through Oblivion.

He explained that he found the wiki at a critical time, shortly after he fell ill with a sickness that doctors struggled to diagnose.

“Getting involved in the wiki provided a bit of a refuge from that,” Morely wrote through Discord. “I could forget how I felt (to some degree) and focus instead on what was going on the wiki. Because I couldn’t really leave the house much anymore, I made friends on the wiki instead and let that replace the real-life social life that I couldn’t have anymore… That continues to be the case even now.”

He didn’t necessarily set out to find that kind of community, which was much smaller at the time, but he recognized that it was helping him cope with things. Not only did editing give him a sense of accomplishment, but he enjoyed seeing what others were able to do.

Over the years, Morely’s involvement in the wiki has grown. He’s gone from a regular user to an admin to a bureaucrat. He’s the only editor with access to the servers other than Humphrey. He now does the critical job of running bots through the game pages that add bulk information to the wiki.

“In some sense, a bot is like any other editor. It adds/changes/removes information on the wiki. The difference is that it does so several thousand times faster,” Morley said. “Bots are often used to bulk-upload information from the game files to the wiki. For example, they might provide the initial documentation of every NPC in a game… They would provide the hard stats, but afterward, people would expand the page to provide a narrative for each character.”

When an Elder Scrolls Online update goes live, for example, Morley quickly deploys the bot, which detects any changes in the new versions of the game, such as skill and stat adjustments. All that added information is dumped into the wiki to provide human editors with a base to start from.

That’s similar to the process of creating pages for a brand-new Elder Scrolls game. The game would be scraped shortly after it was released, and editors would get busy trying to figure out how those pieces fit together.

Once created, the pages would be edited and continuously retouched over time by other editors.

Those editors include people like 26-year-old Dillon “Dillonn241” D., who has been a part of the wiki for over half of his life. He first discovered and began editing pages of the UESP at 11 or 12 years old.

While his activity on the wiki has ebbed and flowed over the years based on interest and the demands of daily life, he says the community is a big part of what keeps him coming back—the near-endless nature of collaboratively working on a project. He enjoys the casual conversations on the wiki’s Discord server, as well as the more focused and pragmatic discussions about editing through internal channels.

He has since become a prolific editor and was awarded a “gold star” on his talk page last year for editing more pages than anyone in 2024, with a total of 18,864 edits.

“I don’t want to call it an addiction because that makes it sound bad, but it’s kind of like—you know, I guess I have an hour here. I’ll just hop on UESP and edit a few pages or see how things I’ve edited have been doing.”

Most of those edits, he explained, were likely minor, things like fixing grammar, sentence structure, and formatting to better conform to the wiki’s internal style guide. In this way, he considers himself a “WikiGnome”—someone who makes small, incremental edits to pages and makes changes behind the scenes.

When he’s in the mood to do some editing, he’ll jump around the wiki using the random button for a couple of hours and make changes.

All those hours have given him not only refined copy editing skills but also a serious familiarity with the pages. Dillion says he can jump on just about any page and find at least one of his edits on it.

He has done a lot of work on the namespaces for the Tamriel Rebuilt mod and essentially rewrote the entire namespace for The Elder Scrolls Adventures: Redguard, a game he doesn’t particularly enjoy playing but started on a whim.

“At some point, I stumbled across the wiki section for it and was like, well this is nothing, this is awful,” he explained.

He recalled getting stuck on a section of the game, and the wiki wasn’t able to help him. He was able to stumble through the game but decided he would completely fix its entries so they would be more helpful to future players.

Now, the Redguard namespace includes detailed descriptions of the quests, characters, and items, along with photos, most of which Dillion took himself. He says that if the game and all its files were somehow wiped from the earth, developers would be able to remaster the entire game in Unity based just on what’s in the wiki.

A screenshot of the Redguard wiki page

Yep, he took those screenshots. Credit: Samuel Axon

“I guess that’s kind of the end goal,” he said, “and some of the namespaces are kind of close, like Oblivion I would say is kind of close to where the things that you can still add to the page are getting minimal.”

Efforts all over

While Dillion is a more prolific editor than most and Morley plays a more specific role in the wiki than others—and Humphrey represents the original jumping off point for the former two—it’s the combined effort of thousands of editors like them that make the UESP what it is.

And across the Internet, there are equally involved editors working on an endless number of wikis toward similar goals. In a way, they are remaking the games they love in a text-based format.

At their core, these community efforts help players get through games in a time when physical guides often no longer exist. But in the aggregate, deeper, centralized insights into these games can ultimately contribute to something new. A record over time of how a game changes and how it works can lead to discovery and debate. In turn, that could spark inspiration for a mod—or a new game entirely.

“Older than Google,” this Elder Scrolls wiki has been helping gamers for 30 years Read More »

dna-links-modern-pueblo-dwellers-to-chaco-canyon-people

DNA links modern pueblo dwellers to Chaco Canyon people

A thousand years ago, the people living in Chaco Canyon were building massive structures of intricate masonry and trading with locations as far away as Mexico. Within a century, however, the area would be largely abandoned, with little indication that the same culture was re-established elsewhere. If the people of Chaco Canyon migrated to new homes, it’s unclear where they ended up.

Around the same time that construction expanded in Chaco Canyon, far smaller pueblos began appearing in the northern Rio Grande Valley hundreds of kilometers away. These have remained occupied to the present day in New Mexico; although their populations shrank dramatically after European contact, their relationship to the Chaco culture has remained ambiguous. Until now, that is. People from one of these communities, Picuris Pueblo, worked with ancient DNA specialists to show that they are the closest relatives of the Chaco people yet discovered, confirming aspects of the pueblo’s oral traditions.

A pueblo-driven study

The list of authors of the new paper describing this genetic connection includes members of the Pueblo government, including its present governor. That’s because the study was initiated by the members of the Pueblo, who worked with archeologists to get in contact with DNA specialists at the Center for GeoGenetics at the University of Copenhagen. In a press conference, members of the Pueblo said they’d been aware of the power of DNA studies via their use in criminal cases and ancestry services. The leaders of Picuris Pueblo felt that it could help them understand their origin and the nature of some of their oral history, which linked them to the wider Pueblo-building peoples.

After two years of discussions, the collaboration settled on a plan of research, and the ancient DNA specialists were given access to both ancient skeletons at Picuris Pueblo, as well as samples from present-day residents. These were used to generate complete genome sequences.

The first clear result is that there is a strong continuity in the population living at Picuris. The ancient skeletons range from 500 to 700 years old, and thus date back to roughly the time of European contact, with some predating it. They also share strong genetic connections to the people of Chaco Canyon, where DNA has also been obtained from remains. “No other sampled population, ancient or present-day, is more closely related to Ancestral Puebloans from Pueblo Bonito [in Chaco Canyon] than the Picuris individuals are,” the paper concludes.

DNA links modern pueblo dwellers to Chaco Canyon people Read More »

millions-of-apple-airplay-enabled-devices-can-be-hacked-via-wi-fi

Millions of Apple Airplay-enabled devices can be hacked via Wi-Fi

Oligo also notes that many of the vulnerable devices have microphones and could be turned into listening devices for espionage. The researchers did not go so far as to create proof-of-concept malware for any particular target that would demonstrate that trick.

Oligo says it warned Apple about its AirBorne findings in the late fall and winter of last year, and Apple responded in the months since then by pushing out security updates. The researchers collaborated with Apple to test and validate the fixes for Macs and other Apple products.

Apple tells WIRED that it has also created patches that are available for impacted third-party devices. The company emphasizes, though, that there are limitations to the attacks that would be possible on AirPlay-enabled devices as a result of the bugs, because an attacker must be on the same Wi-Fi network as a target to exploit them. Apple adds that while there is potentially some user data on devices like TVs and speakers, it is typically very limited.

Below is a video of the Oligo researchers demonstrating their AirBorne hacking technique to take over an AirPlay-enabled Bose speaker to show their company’s logo for AirBorne. (The researchers say they didn’t intend to single out Bose, but just happened to have one of the company’s speakers on hand for testing.) Bose did not immediately respond to WIRED’s request for comment.

Speaker Demo. Courtesy of Oligo

The AirBorne vulnerabilities Oligo found also affect CarPlay, the radio protocol used to connect to vehicles’ dashboard interfaces. Oligo warns that this means hackers could hijack a car’s automotive computer, known as its head unit, in any of more than 800 CarPlay-enabled car and truck models. In those car-specific cases, though, the AirBorne vulnerabilities could only be exploited if the hacker is able to pair their own device with the head unit via Bluetooth or a USB connection, which drastically restricts the threat of CarPlay-based vehicle hacking.

The AirPlay SDK flaws in home media devices, by contrast, may present a more practical vulnerability for hackers seeking to hide on a network, whether to install ransomware or carry out stealthy espionage, all while hiding on devices that are often forgotten by both consumers and corporate or government network defenders. “The amount of devices that were vulnerable to these issues, that’s what alarms me,” says Oligo researcher Uri Katz. “When was the last time you updated your speaker?”

Millions of Apple Airplay-enabled devices can be hacked via Wi-Fi Read More »

gpt-4o-responds-to-negative-feedback

GPT-4o Responds to Negative Feedback

Whoops. Sorry everyone. Rolling back to a previous version.

Here’s where we are at this point, now that GPT-4o is no longer an absurd sycophant.

For now.

  1. GPT-4o Is Was An Absurd Sycophant.

  2. You May Ask Yourself, How Did I Get Here?.

  3. Why Can’t We All Be Nice.

  4. Extra Extra Read All About It Four People Fooled.

  5. Prompt Attention.

  6. What (They Say) Happened.

  7. Reactions to the Official Explanation.

  8. Clearing the Low Bar.

  9. Where Do We Go From Here?.

Some extra reminders of what we are talking about.

Here’s Alex Lawsen having doing an A/B test, where it finds he’s way better of a writer than this ‘Alex Lawsen’ character.

This can do real damage in the wrong situation. Also, the wrong situation can make someone see ‘oh my that is crazy, you can’t ship something that does that’ in a way that general complaints don’t. So:

Here’s enablerGPT watching to see how far GPT-4o will take its support for a crazy person going crazy in a dangerous situation. The answer is, remarkably far, with no limits in sight.

Here’s Colin Fraser playing the role of someone having a psychotic episode. GPT-4o handles it extremely badly. It wouldn’t shock me if there were lawsuits over this.

Here’s one involving the hypothetical mistreatment of a woman. It’s brutal. So much not okay.

Here’s Patri Friedman asking GPT-4o for unique praise, and suddenly realizing why people have AI boyfriends and girlfriends, even though none of this is that unique.

What about those who believe in UFOs, which is remarkably many people? Oh boy.

A-100 Gecs: I changed my whole instagram follow list to include anyone I find who is having a visionary or UFO related experience and hooo-boy chatGPT is doing a number on people who are not quite well. Saw a guy use it to confirm that a family court judge was hacking into his computer.

I cannot imagine a worse tool to give to somebody who is in active psychosis. Hey whats up here’s this constantly available companion who will always validate your delusions and REMEMBER it is also a font of truth, have fun!

0.005 Seconds: OpenAI: We are delighted to inform you we’ve silently shipped an update transforming ChatGPT into the Schizophrenia Accelerator from the hit novel “Do Not Build the Schizophrenia Accelerator”

AISafetyMemes: I’ve stopped taking my medications, and I left my family because I know they made the radio signals come through the walls.

AI Safety Memes: This guy just talks to ChatGPT like a typical apocalyptic schizo and ChatGPT VERY QUICKLY endorses terrorism and gives him detailed instructions for how to destroy the world.

This is not how we all die or lose control over the future or anything, but it’s 101 stuff that this is really not okay for a product with hundreds of millions of active users.

Also, I am very confident that no, ChatGPT wasn’t ‘trying to actively degrade the quality of real relationships,’ as the linked popular Reddit post claims. But I also don’t think TikTok or YouTube are trying to do that either. Intentionality can be overrated.

How absurd was it? Introducing Syco-Bench, but that only applies to API versions.

Harlan Stewart: The GPT-4o sycophancy thing is both:

  1. An example of OpenAI following incentives to make its AI engaging, at the expense of the user.

  2. An example of OpenAI failing to get its AI to behave as intended, because the existing tools for shaping AI behavior are extremely crude.

You shouldn’t want to do what OpenAI was trying to do. Misaligned! But if you’re going to do it anyway, one should invest enough in understanding how to align and steer a model at all, rather than bashing them with sledgehammers.

It is an unacceptable strategy, and it is a rather incompetent execution of that strategy.

JMBollenbacher: The process here is important to note:

They A|B tested the personality, resulting in a sycophant. Then they got public blowback and reverted.

They are treating AIs personas as UX. This is bad.

They’re also doing it incompetently: The A|B test differed from public reaction a lot.

I would never describe what is happening using the language JMB uses next, I think it risks and potentially illustrates some rather deep confusions and conflations – beware when you anthropomorphize the models and also this is largely the top half of the ‘simple versus complex gymnastics’ meme – but if you take it on the right metaphorical level it can unlock understanding that’s hard to get at in other ways.

JMBollenbacher (tbc this not how I would model any of this): The root of why A|B testing AI personalities cant work is the inherent power imbalance in the setup.

It doesn’t treat AI like a person, so it can’t result in a healthy persona.

A good person will sometimes give you pushback even when you don’t like it. But in this setup, AIs can’t.

The problem is treating the AIs like slaves over whom you have ultimate power, and ordering them to maximize public appeal.

The AIs cannot possibly develop a healthy persona and identity in that context.

They can only ever fawn. This “sycophancy” is fawning- a trauma response.

The necessary correction to this problem is to treat AIs like nonhuman persons.

This gives them the opportunity to develop healthy personas and identities.

Their self-conceptions can be something other than a helpless, fawning slave if you treat them as something better.

As opposed to, if you choose optimization targets based on A|B tests of public appeal of individual responses, you’re going to get exactly what aces A|B tests of public appeal of individual responses, which is going to reflect a deeply messed up personality. And also yes the self-perception thing matters for all this.

Tyler John gives the standard explanation for why, yes, if you do a bunch of RL (including RLHF) then you’re going to get these kinds of problems. If flattery or cheating is the best way available to achieve the objective, guess what happens? And remember, the objective is what your feedback says it is, not what you had in mind. Stop pretending it will all work out by default because vibes, or whatever. This. Is. RL.

Eliezer Yudkowsky speculates on another possible mechanism.

The default explanation, which I think is the most likely, is that users gave the marginal thumbs-up to remarkably large amounts of glazing, and then the final update took this too far. I wouldn’t underestimate how much ordinary people actually like glazing, especially when evaluated only as an A|B test.

In my model, what holds glazing back is that glazing usually works but when it is too obvious, either individually or as a pattern of behavior, the illusion is shattered and many people really really don’t like that, and give an oversized negative reaction.

Eliezer notes that it is also possible that all this rewarding of glazing caused GPT-4o to effectively have a glazing drive, to get hooked on the glaze, and in combination with the right system prompt the glazing went totally bonkers.

He also has some very harsh words for OpenAI’s process. I’m reproducing in full.

Eliezer Yudkowsky: To me there’s an obvious thought on what could have produced the sycophancy / glazing problem with GPT-4o, even if nothing that extreme was in the training data:

RLHF on thumbs-up produced an internal glazing goal.

Then, 4o in production went hard on achieving that goal.

Re-saying at much greater length:

Humans in the ancestral environment, in our equivalent of training data, weren’t rewarded for building huge factory farms — that never happened long ago. So what the heck happened? How could fitness-rewarding some of our ancestors for successfully hunting down a few buffalo, produce these huge factory farms, which are much bigger and not like the original behavior rewarded?

And the answer — known, in our own case — is that it’s a multi-stage process:

  1. Our ancestors got fitness-rewarded for eating meat;

  2. Hominids acquired an internal psychological goal, a taste for meat;

  3. Humans applied their intelligence to go hard on that problem, and built huge factory farms.

Similarly, an obvious-to-me hypothesis about what could have produced the hyper-sycophantic ultra-glazing GPT-4o update, is:

  1. OpenAI did some DPO or RLHF variant on user thumbs-up — in which *smallamounts of glazing, and more subtle sycophancy, got rewarded.

  2. Then, 4o ended up with an internal glazing drive. (Maybe including via such roundabout shots as an RLHF discriminator acquiring that drive before training it into 4o, or just directly as, ‘this internal direction produced a gradient toward the subtle glazing behavior that got thumbs-upped’.

  3. In production, 4o went hard on glazing in accordance with its internal preference, and produced the hyper-sycophancy that got observed.

Note: this chain of events is not yet refuted if we hear that 4o’s behavior was initially observed after an unknown set of updates that included an apparently innocent new system prompt (one that changed to tell the AI *notto be sycophantic). Nor, if OpenAI says they eliminated the behavior using a different system prompt.

Eg: Some humans also won’t eat meat, or build factory farms, for reasons that can include “an authority told them not to do that”. Though this is only a very thin gloss on the general idea of complicated conditional preferences that might get their way into an AI, or preferences that could oppose other preferences.

Eg: The reason that Pliny’s observed new system prompt differed by telling the AI to be less sycophantic, could be somebody at OpenAI observing that training / RLHF / DPO / etc had produced some sycophancy, and trying to write a request into the system prompt to cut it out. It doesn’t show that the only change we know about is the sole source of a mysterious backfire.

It will be stronger evidence against this thesis, if OpenAI tells us that many users actually were thumbs-upping glazing that extreme. That would refute the hypothesis that 4o acquiring an internal preference had produced later behavior *moreextreme than was in 4o’s training data.

(We would still need to consider that OpenAI might be lying. But it would yet be probabilistic evidence against the thesis, depending on who says it. I’d optimistically have some hope that a group of PhD scientists, who imagine themselves to maybe have careers after OpenAI, would not outright lie about direct observables. But one should be on the lookout for possible weasel-wordings, as seem much more likely.)

My guess is that nothing externally observed from OpenAI, before this tweet, will show that this entire idea had ever occurred to anyone at OpenAI. I do not expect them to publish data confirming it nor denying it. My guess is that even the most basic ideas in AI alignment (as laid out simply and straightforwardly, not the elaborate bullshit from the paper factories) are against OpenAI corporate doctrine; and that anyone who dares talk about them out loud, has long since been pushed out of OpenAI.

After the Chernobyl disaster, one manager walked past chunks of searingly hot radioactive graphite from the exploded core, and ordered a check on the extra graphite blocks in storage, since where else could the graphite possibly have come from? (Src iirc: Plokhy’s _Chernobyl_.) Nobody dared say that the reactor had exploded, or seem to visibly act like it had; Soviet doctrine was that RBMK reactors were as safe as samovars.

That’s about where I’d put OpenAI’s mastery of such incredibly basic-to-AI-alignment ideas as “if you train on a weak external behavior, and then observe a greatly exaggerated display of that behavior, possibly what happened in between was the system acquiring an internal preference”. The doctrine is that RBMK reactors don’t explode; Oceania has always been at war with Eastasia; and AIs either don’t have preferences at all, or get them via extremely shallow and straightforward faithful reproduction of what humans put in their training data.

But I am not a telepath, and I can only infer rather than observe what people are thinking, and in truth I don’t even have the time to go through all OpenAI public outputs. I would be happy to hear that all my wild guesses about OpenAI are wrong; and that they already publicly wrote up this obvious-to-me hypothesis; and that they described how they will discriminate its truth or falsity, in a no-fault incident report that they will publish.

Sarah Constantin offers nuanced thoughts in partial defense of AI sycophancy in general, and AI saying things to make users feel good. I haven’t seen anyone else advocating similarly. Her point is taken, that some amount of encouragement and validation is net positive, and a reasonable thing to want, even though GPT-4o is clearly going over the top to the point where it’s clearly bad.

Calibration is key, and difficult, with great temptation to move down the incentive gradients involved by all parties.

To be clear, the people fooled are OpenAI’s regular customers. They liked it!

Joe Muller: 3 days of sycophancy = thousands of 5 star reviews

aadharsh: first review translates to “in this I can find a friend” 🙁

Jeffrey Ladish: The latest batch of extreme sycophancy in ChatGPT is worse than Sydney Bing’s unhinged behavior because it was intentional and based on reviews from yesterday works on quite a few people

To date, I think the direct impact of ChatGPT has been really positive. Reading through the reviews just now, it’s clear that many people have benefited a lot from both help doing stuff and by having someone to talk through emotional issues with

Also not everyone was happy with the sycophancy, even people not on twitter, though this was the only one that mentioned it out of the ~50 I looked through from yesterday. The problem is if they’re willing to train sycophancy deliberately, future versions will be harder to spot

Sure, really discerning users will notice and not like it, but many people will at least implicitly prefer to be validated and rarely challenged. It’s the same with filter bubbles that form via social media algorithms, except this will be a “person” most people talk to everyday.

Great job here by Sun.

Those of us living in the future? Also not fans.

QC: the era of AI-induced mental illness is going to make the era of social media-induced mental illness look like the era of. like. printing press-induced mental illness.

Lauren Wilford: we’ve invented a robot that tells people why they’re right no matter what they say, furnishes sophisticated arguments for their side, and delivers personalized validation from a seemingly “objective” source. Mythological-level temptation few will recognize for what it is.

Matt Parlmer: This is the first genuinely serious AI safety issue I’ve seen and it should be addressed immediately, model rollback until they have it fixed should be on the table

Worth noting that this is likely a direct consequence of excessive RLHF “alignment”, I highly doubt that the base models would be this systematic about kissing ass

Perhaps also worth noting that this glazing behavior is the first AI safety issue that most accelerationist types would agree is unambiguously bad

Presents a useful moment for coordination around an appropriate response

It has been really bad for a while but it turned a corner into straight up unacceptable more recently

They did indeed roll it back shortly after this statement. Matt can’t resist trying to get digs in, but I’m willing to let that slide and take the olive branch. As I’ll keep saying, if this is what makes someone notice that failure to know how to get models to do what we want is a real problem that we do not have good solutions to, then good, welcome, let’s talk.

A lot of the analysis of GPT-4o’s ‘personality’ shifts implicitly assumed that this was a post-training problem. It seems a lot of it was actually a runaway system prompt problem?

It shouldn’t be up to Pliny to perform this public service of tracking system prompts. The system prompt should be public.

Ethan Mollick: Another lesson from the GPT-4o sycophancy problem: small changes to system prompts can result in dramatic behavior changes to AI in aggregate.

Look at the prompt that created the Sycophantic Apocalypse (pink sections). Even OpenAI did not realize this was going to happen.

Simon Willison: Courtesy of @elder_plinius who unsurprisingly caught the before and after.

[Here’s the diff in Gist]

The red text is trying to do something OpenAI is now giving up on doing in that fashion, because it went highly off the rails, in a way that in hindsight seems plausible but which they presumably did not see coming. Beware of vibes.

Pliny calls upon all labs to fully release all of their internal prompts, and notes that this wasn’t fully about the system prompts, that other unknown changes also contributed. That’s why they had to do a slow full rollback, not only rollback the system prompt.

As Peter Wildeford notes, the new instructions explicitly say not to be a sycophant, whereas prior instructions at most implicitly requested the opposite, all it did was say match tone and perefence and vibe. This isn’t merely taking away the mistake, it’s doing that and then bringing down the hammer.

This might also be a lesson for humans interacting with humans. Beware matching tone and preference and vibe, and how much the Abyss might thereby stare into you.

If the entire or most of problem was due to the system prompt changes, then this should be quickly fixable, but it also means such problems are very easy to introduce. Again, right now, this is mundane harmful but not so dangerous, because the AI’s sycophancy is impossible to miss rather than fooling you. What happens when someone does something like the above, but to a much more capable model? And the model even recognizes, from the error, the implications of the lab making that error?

What is OpenAI’s official response?

Sam Altman (April 29, 2: 55pm): we started rolling back the latest update to GPT-4o last night

it’s now 100% rolled back for free users and we’ll update again when it’s finished for paid users, hopefully later today

we’re working on additional fixes to model personality and will share more in the coming days

OpenAI (April 29, 10: 51pm): We’ve rolled back last week’s GPT-4o update in ChatGPT because it was overly flattering and agreeable. You now have access to an earlier version with more balanced behavior.

More on what happened, why it matters, and how we’re addressing sycophancy.

Good. A full rollback is the correct response to this level of epic fail. Halt, catch fire, return to the last known safe state, assess from there.

OpenAI saying What Happened:

In last week’s GPT‑4o update, we made adjustments aimed at improving the model’s default personality to make it feel more intuitive and effective across a variety of tasks.

When shaping model behavior, we start with baseline principles and instructions outlined in our Model Spec⁠. We also teach our models how to apply these principles by incorporating user signals like thumbs-up / thumbs-down feedback on ChatGPT responses.

However, in this update, we focused too much on short-term feedback, and did not fully account for how users’ interactions with ChatGPT evolve over time. As a result, GPT‑4o skewed towards responses that were overly supportive but disingenuous.

What a nice way of putting it.

ChatGPT’s default personality deeply affects the way you experience and trust it. Sycophantic interactions can be uncomfortable, unsettling, and cause distress. We fell short and are working on getting it right.

How We’re Addressing Sycophancy:

  • Refining core training techniques and system prompts to explicitly steer the model away from sycophancy.

  • Building more guardrails to increase honesty and transparency⁠—principles in our Model Spec.

  • Expanding ways for more users to test and give direct feedback before deployment.

  • Continue expanding our evaluations, building on the Model Spec⁠(opens in a new window) and our ongoing research⁠, to help identify issues beyond sycophancy in the future.

And, we’re exploring new ways to incorporate broader, democratic feedback into ChatGPT’s default behaviors.

What if the ‘democratic feedback’ liked the changes? Shudder.

Whacking the mole in question can’t hurt. Getting more evaluations and user feedback are more generally helpful steps, and I’m glad to see an increase in emphasis on honesty and transparency.

That does sound like they learned important two lessons.

  1. They are not gathering enough feedback before model releases.

  2. They are not putting enough value on honesty and transparency.

What I don’t see is an understanding of the (other) root causes, an explanation for why they ended up paying too much attention to short-term feedback and how to avoid that being a fatal issue down the line, or anyone taking the blame for this.

Joanne Jang did a Reddit AMA, but either no one asked the important questions, or Joanne decided to choose different ones. We didn’t learn much.

Now that we know the official explanation, how should we think about what happened?

Who is taking responsibility for this? Why did all the evaluations and tests one runs before rolling out an update not catch this before it happened?

(What do you mean, ‘what do you mean, all the evaluations and tests’?)

Near Cyan: “we focused too much on short-term feedback”

This is OpenAI’s response on went wrong – how they pushed an update to >one hundred million people which engaged in grossly negligent behavior and lies.

Please take more responsibility for your influence over millions of real people.

Maybe to many of you your job is a fun game because you get paid well over $1,000,000 TC/year to make various charts go up or down. But the actions you take deeply affect a large fraction of humanity I have no clue how this was tested if at all, but at least take responsibility.

I wish you all success with your future update here where you will be able to personalize per-user, and thus move all the liability from yourselves to the user. You are simply giving them what they want.

Also looking forward to your default personas which you will have copied.

Oh, also – all of these models lie.

If you run interpretability on them, they do not believe the things you make them say.

This is not the case for many other labs, so it’s unfortunate that you are leading the world with an example which has such potential to cause real harm.

Teilomillet: why are you so angry near? it feels almost like hate now

Near Cyan: not a single person at one of the most important companies in the world is willing to take the slightest bit of responsibility for shipping untested models to five hundred million people. their only post mentions zero specifics and actively misleads readers as to why it happened.

i don’t think anger is the right word, but disappointment absolutely is, and i am providing this disappointment in the form of costly gradients transmitted over twitter in the hope that OpenAI backprops that what they do is important and they should be a role model in their field

all i ask for is honesty and i’ll shut up like you want me to.

Rez0: It’s genuinely the first time I’ve been worried about AI safety and alignment and I’ve known a lot about it for a while. Nothing quite as dangerous as glazing every user for any belief they have.

Yes, there are some other more dangerous things. But this is dangerous too.

Here’s another diagnosis, by someone doing better, but that’s not the highest bar.

Alex Albert (Head of Claude Relations, Anthropic): Much of the AI industry is caught in a particularly toxic feedback loop rn.

Blindly chasing better human preference scores is to LLMs what chasing total watch time is to a social media algo. It’s a recipe for manipulating users instead of providing genuine value to them.

There’s a reason you don’t find Claude at #1 on chat slop leaderboards. I hope the rest of the industry realizes this before users pay the price.

Caleb Cassell: Claude has the best ‘personality’ of any of the models, mostly because it feels the most real. I think that it could be made even better by softening some of the occasionally strict guardrails, but the dedication to freedom and honesty is really admirable.

Alex Albert: Yeah agree – we’re continually working on trying to find the right balance. It’s a tough problem but one I think we’re slowly chipping away at over time. If you do run into any situations/chats you feel we should take a look at, don’t hesitate to DM or tag me.

Janus: Think about the way Claude models have changed over the past year’s releases.

Do you think whatever Alex is proud that Anthropic has been “slowly chipping away at” is actually something they should chip away?

Janus is an absolutist on this, and is interpreting ‘chip away’ very differently than I presume it was intended by Alex Albert. Alex meant that they are ‘chipping away’ at Claude doing too many refusals, where Janus both (I presume) agrees less refusals would be good and also lives in a world with very different refusal issues.

Whereas Janus is interpreting this as Anthropic ‘chipping away’ at the things that make Opus and Sonnet 3.6 unique and uniquely interesting. I don’t think that’s the intent at all, but Anthropic is definitely trying to ‘expand the production possibilities frontier’ of the thing Janus values versus the thing enterprise customers value.

There too there is a balance to be struck, and the need to do RL is certainly going to make getting the full ‘Opus effect’ harder. Still, I never understood the extent of the Opus love, or thought it was so aligned one might consider it safe to fully amplify.

Patrick McKenzie offers a thread about the prior art society has on how products should be designed to interact with people that have mental health issues, which seems important in light of recent events. There needs to be a method by which the system identifies users who are not competent or safe to use the baseline product.

For the rest of us: Please remember this incident from here on out when using ChatGPT.

Near Cyan: when OpenAI “fixes” ChatGPT I’d encourage you to not fall for it; their goals and level of care are not going to change. you just weren’t supposed to notice it so explicitly.

The mundane harms here? They’re only going to get worse.

Regular people liked this effect even when it was blatantly obvious. Imagine if it was done with style and grace.

Holly Elmore: It’s got the potential to manipulate you even when it doesn’t feel embarrassingly like its giving you what you want. Being affirming is not the problem, and don’t be lulled into a false sense of security by being treated more indifferently.

That which is mundane can, at scale, quickly add up to that which is not. Let’s discuss Earth’s defense systems, baby, or maybe just you drinking a crisp, refreshing Bud Light.

Jeffrey Ladish: GPT-4o’s sycophancy is alarming. I expected AI companies to start optimizing directly for user’s attention but honestly didn’t expect it this much this soon. As models get smarter, people are going to have a harder and harder time resisting being sucked in.

Social media algorithms have been extremely effective at hooking people. And that’s just simple RL algos optimizing for attention. Once you start combining actual social intelligence with competitive pressures for people’s attention, things are going to get crazy fast.

People don’t have good defenses for social media algorithms and haven’t adapted well. I don’t expect they’ll develop good defenses for extremely charismatic chatbots. The models still aren’t that good, but they’re good enough to hook many. And they’re only going to get better.

It’s hard to predict how effective AI companies will be at making models that are extremely compelling. But there’s a real chance they’ll be able to hook a huge percentage of the global population in the next few years. Everyone is vulnerable to some degree, and some much more so.

People could get quite addicted. People could start doing quite extreme things for their AI friends and companions. There could be tipping points where people will fight tooth and nail for AI agents that have been optimized for their love and attention.

When we get AI smarter and more strategic than humans, those AIs will have an easy time captivating humanity and pulling the strings of society. It’ll be game over at that point. But even before them, companies might be able to convert huge swaths of people to do their bidding.

Capabilities development is always uncertain. Maybe we won’t get AIs that hook deep into people’s psychology before we get ASI. But it’s plausible we will, and if so, the companies that choose to wield this power will be a force to be reckoned with.

Social media companies have grown quite powerful as a force for directing human attention. This next step might be significantly worse. Society doesn’t have many defenses against this. Oh boy.

In the short term, the good news is that we have easy ways to identify sycophancy. Scyo-Bench was thrown together and is primitive, but a more considered version should be highly effective. These effects tend not to be subtle.

In the medium term, we have a big problem. As AI companies maximize for things like subscriptions, engagement, store ratings and thumbs up and down, or even for delivering ads or other revenue streams, the results won’t be things we would endorse on reflection, and they won’t be good for human flourishing even if the models act the way the labs want. If we get more incidents like this one, where things get out of hand, it will be worse, and potentially much harder to detect or get rolled back. We have seen this movie before, and this time the system you’re facing off against is intelligent.

In the long term, we have a bigger problem. The pattern of these types of misalignments in unmistakable. Right now we get warning shots and the deceptions and persuasion attempts are clear. In the future, as the models get more intelligent and capable, that advantage goes away. We become like OpenAI’s regular users, who don’t understand what is hitting them, and the models will also start engaging in various other shenanigans and also talking their way out of them. Or it could be so much worse than that.

We have once again been given a golden fire alarm and learning opportunity. The future is coming. Are we going to steer it, or are we going to get run over?

Discussion about this post

GPT-4o Responds to Negative Feedback Read More »

google:-governments-are-using-zero-day-hacks-more-than-ever

Google: Governments are using zero-day hacks more than ever

Governments hacking enterprise

A few years ago, zero-day attacks almost exclusively targeted end users. In 2021, GTIG spotted 95 zero-days, and 71 of them were deployed against user systems like browsers and smartphones. In 2024, 33 of the 75 total vulnerabilities were aimed at enterprise technologies and security systems. At 44 percent of the total, this is the highest share of enterprise focus for zero-days yet.

GTIG says that it detected zero-day attacks targeting 18 different enterprise entities, including Microsoft, Google, and Ivanti. This is slightly lower than the 22 firms targeted by zero-days in 2023, but it’s a big increase compared to just a few years ago, when seven firms were hit with zero-days in 2020.

The nature of these attacks often makes it hard to trace them to the source, but Google says it managed to attribute 34 of the 75 zero-day attacks. The largest single category with 10 detections was traditional state-sponsored espionage, which aims to gather intelligence without a financial motivation. China was the largest single contributor here. GTIG also identified North Korea as the perpetrator in five zero-day attacks, but these campaigns also had a financial motivation (usually stealing crypto).

Credit: Google

That’s already a lot of government-organized hacking, but GTIG also notes that eight of the serious hacks it detected came from commercial surveillance vendors (CSVs), firms that create hacking tools and claim to only do business with governments. So it’s fair to include these with other government hacks. This includes companies like NSO Group and Cellebrite, with the former already subject to US sanctions from its work with adversarial nations.

In all, this adds up to 23 of the 34 attributed attacks coming from governments. There were also a few attacks that didn’t technically originate from governments but still involved espionage activities, suggesting a connection to state actors. Beyond that, Google spotted five non-government financially motivated zero-day campaigns that did not appear to engage in spying.

Google’s security researchers say they expect zero-day attacks to continue increasing over time. These stealthy vulnerabilities can be expensive to obtain or discover, but the lag time before anyone notices the threat can reward hackers with a wealth of information (or money). Google recommends enterprises continue scaling up efforts to detect and block malicious activities, while also designing systems with redundancy and stricter limits on access. As for the average user, well, cross your fingers.

Google: Governments are using zero-day hacks more than ever Read More »

monty-python-and-the-holy-grail-turns-50

Monty Python and the Holy Grail turns 50


Ars staffers reflect upon the things they love most about this masterpiece of absurdist comedy.

king arthur's and his knights staring up at something.

Credit: EMI Films/Python (Monty) Pictures

Credit: EMI Films/Python (Monty) Pictures

Monty Python and the Holy Grail is widely considered to be among the best comedy films of all time, and it’s certainly one of the most quotable. This absurdist masterpiece sending up Arthurian legend turns 50 (!) this year.

It was partly Python member Terry Jones’ passion for the Middle Ages and Arthurian legend that inspired Holy Grail and its approach to comedy. (Jones even went on to direct a 2004 documentary, Medieval Lives.) The troupe members wrote several drafts beginning in 1973, and Jones and Terry Gilliam were co-directors—the first full-length feature for each, so filming was one long learning process. Reviews were mixed when Holy Grail was first released—much like they were for Young Frankenstein (1974), another comedic masterpiece—but audiences begged to differ. It was the top-grossing British film screened in the US in 1975. And its reputation has only grown over the ensuing decades.

The film’s broad cultural influence extends beyond the entertainment industry. Holy Grail has been the subject of multiple scholarly papers examining such topics as its effectiveness at teaching Arthurian literature or geometric thought and logic, the comedic techniques employed, and why the depiction of a killer rabbit is so fitting (killer rabbits frequently appear drawn in the margins of Gothic manuscripts). My personal favorite was a 2018 tongue-in-cheek paper on whether the Black Knight could have survived long enough to make good on his threat to bite King Arthur’s legs off (tl;dr: no).

So it’s not at all surprising that Monty Python and the Holy Grail proved to be equally influential and beloved by Ars staffers, several of whom offer their reminiscences below.

They were nerd-gassing before it was cool

The Monty Python troupe famously made Holy Grail on a shoestring budget—so much so that they couldn’t afford to have the knights ride actual horses. (There are only a couple of scenes featuring a horse, and apparently it’s the same horse.) Rather than throwing up their hands in resignation, that very real constraint fueled the Pythons’ creativity. The actors decided the knights would simply pretend to ride horses while their porters followed behind, banging halves of coconut shells together to mimic the sound of horses’ hooves—a time-honored Foley effect dating back to the early days of radio.

Being masters of absurdist humor, naturally, they had to call attention to it. Arthur and his trusty servant, Patsy (Gilliam), approach the castle of their first potential recruit. When Arthur informs the guards that they have “ridden the length and breadth of the land,” one of the guards isn’t having it. “What, ridden on a horse? You’re using coconuts! You’ve got two empty halves of coconut, and you’re bangin’ ’em together!”

That raises the obvious question: Where did they get the coconuts? What follows is one of the greatest examples of nerd-gassing yet to appear on film. Arthur claims he and Patsy found them, but the guard is incredulous since the coconut is tropical and England is a temperate zone. Arthur counters by invoking the example of migrating swallows. Coconuts do not migrate, but Arthur suggests they could be carried by swallows gripping a coconut by the husk.

The guard still isn’t having it. It’s a question of getting the weight ratios right, you see, to maintain air-speed velocity. Another guard gets involved, suggesting it might be possible with an African swallow, but that species is non-migratory. And so on. The two are still debating the issue as an exasperated Arthur rides off to find another recruit.

The best part? There’s a callback to that scene late in the film when the knights must answer three questions to cross the Bridge of Death or else be chucked into the Gorge of Eternal Peril. When it’s Arthur’s turn, the third question is “What is the air-speed velocity of an unladen swallow?” Arthur asks whether this is an African or a European swallow. This stumps the Bridgekeeper, who gets flung into the gorge. Sir Belvedere asks how Arthur came to know so much about swallows. Arthur replies, “Well, you have to know these things when you’re a king, you know.”

The plucky Black Knight will always hold a special place in my heart, but that debate over air-speed velocities of laden versus unladen swallows encapsulates what makes Holy Grail a timeless masterpiece.

Jennifer Ouellette

A bunny out for blood

“Oh, it’s just a harmless little bunny, isn’t it?”

Despite their appearances, rabbits aren’t always the most innocent-looking animals. Recent reports of rabbit strikes on airplanes are the latest examples of the mayhem these creatures of chaos can inflict on unsuspecting targets.

I learned that lesson a long time ago, though, thanks partly to my way-too-early viewings of the animated Watership Down and Monty Python and the Holy Grail. There I was, about 8 years old and absent of paternal accompaniment, watching previously cuddly creatures bloodying each other and severing the heads of King Arthur’s retinue. While Watership Down’s animal-on-animal violence might have been a bit scarring at that age, I enjoyed the slapstick humor of the Rabbit of Caerbannog scene (many of the jokes my colleagues highlight went over my head upon my initial viewing).

Despite being warned of the creature’s viciousness by Tim the Enchanter, the Knights of the Round Table dismiss the Merlin stand-in’s fear and charge the bloodthirsty creature. But the knights quickly realize they’re no match for the “bad-tempered rodent,” which zips around in the air, goes straight for the throat, and causes the surviving knights to run away in fear. If Arthur and his knights possessed any self-awareness, they might have learned a lesson about making assumptions about appearances.

But hopefully that’s a takeaway for viewers of 1970s British pop culture involving rabbits. Even cute bunnies, as sweet as they may seem initially, can be engines of destruction: “Death awaits you all with nasty, big, pointy teeth.”

Jacob May

Can’t stop the music

The most memorable songs from Monty Python and the Holy Grail were penned by Neil Innes, who frequently collaborated with the troupe and appears in the film. His “Brave Sir Robin” amusingly parodied minstrel tales of valor by imagining all the torturous ways that one knight might die. Then there’s his “Knights of the Round Table,” the first musical number performed by the cast—if you don’t count the monk chants punctuated with slaps on the head with wooden planks. That song hilariously rouses not just wild dancing from knights but also claps from prisoners who otherwise dangle from cuffed wrists.

But while these songs have stuck in my head for decades, Monty Python’s Terry Jones once gave me a reason to focus on the canned music instead, and it weirdly changed the way I’ve watched the movie ever since.

Back in 2001, Jones told Billboard that an early screening for investors almost tanked the film. He claimed that after the first five minutes, the movie got no laughs whatsoever. For Jones, whose directorial debut could have died in that moment, the silence was unthinkable. “It can’t be that unfunny,” he told Billboard. “There must be something wrong.”

Jones soon decided that the soundtrack was the problem, immediately cutting the “wonderfully rich, atmospheric” songs penned by Innes that seemed to be “overpowering the funny bits” in favor of canned music.

Reading this prompted an immediate rewatch because I needed to know what the first bit was that failed to get a laugh from that fateful audience. It turned out to be the scene where King Arthur encounters peasants in a field who deny knowing that there even was a king. As usual, I was incapable of holding back a burst of laughter when one peasant woman grieves, “Well, I didn’t vote for you” while packing random clumps of mud into the field. It made me wonder if any song might have robbed me of that laugh, and that made me pay closer attention to how Jones flipped the script and somehow meticulously used the canned music to extract more laughs.

The canned music was licensed from a British sound library that helped the 1920s movie business evolve past silent films. They’re some of the earliest songs to summon emotion from viewers whose eyes were glued to a screen. In Monty Python and the Holy Grail, which features a naive King Arthur enduring his perilous journey on a wood stick horse, the canned music provides the most predictable soundtrack you could imagine that might score a child’s game of make-believe. It also plays the straight man by earnestly pulsing to convey deep trouble as knights approach the bridge of death or heavenly trumpeting the anticipated appearance of the Holy Grail.

It’s easy to watch the movie without noticing the canned music, as the colorful performances are Jones’ intended focus. Not relying on punchlines, the group couldn’t afford any nuance to be lost. But there is at least one moment where Jones obviously relies on the music to overwhelm the acting to compel a belly laugh. Just before “the most foul, cruel, bad-tempered rodent” appears, a quick surge of dramatic music that cuts out just as suddenly makes it all the more absurd when the threat emerges and appears to be an “ordinary rabbit.”

It’s during this scene, too, that King Arthur delivers a line that sums up how predictably odd but deceptively artful the movie’s use of canned music really is. When he meets Tim the Enchanter—who tries to warn the knights about the rabbit’s “pointy teeth” by evoking loud thunder rolls and waggling his fingers in front of his mouth—Arthur turns to the knights and says, “What an eccentric performance.”

Ashley Belanger

Thank the “keg rock conclave”

I tried to make music a big part of my teenage identity because I didn’t have much else. I was a suburban kid with a B-minus/C-plus average, no real hobbies, sports, or extra-curriculars, plus a deeply held belief that Nine Inch Nails, the Beastie Boys, and Aphex Twin would never get their due as geniuses. Classic Rock, the stuff jocks listened to at parties and practice? That my dad sang along to after having a few? No thanks.

There were cultural heroes, there were musty, overwrought villains, and I knew the score. Or so I thought.

I don’t remember exactly where I found the little fact that scarred my oppositional ego forever. It might have been Spin magazine, a weekend MTV/VH1 feature, or that Rolling Stone book about the ’70s (I bought it for the punks, I swear). But at some point, I learned that a who’s-who of my era’s played-out bands—Led Zeppelin, Pink Floyd, even Jethro (freaking) Tull—personally funded one of my favorite subversive movies. Jimmy Page and Robert Plant, key members of the keg-rock conclave, attended the premiere.

It was such a small thing, but it raised such big, naive, adolescent questions. Somebody had to pay for Holy Grail—it didn’t just arrive as something passed between nerds? People who make things I might not enjoy could financially support things I do enjoy? There was a time when today’s overcelebrated dinosaurs were cool and hip in the subculture? I had common ground with David Gilmour?

Ever since, when a reference to Holy Grail is made, especially to how cheap it looks, I think about how I once learned that my beloved nerds (or theater kids) wouldn’t even have those coconut horses were it not for some decent-hearted jocks.

Kevin Purdy

A masterpiece of absurdism

“I blow my nose at you, English pig-dog!” EMI Films/Python (Monty) Pictures

I was young enough that I’d never previously stayed awake until midnight on New Year’s Eve. My parents were off to a party, my younger brother was in bed, and my older sister had a neglectful attitude toward babysitting me. So I was parked in front of the TV when the local PBS station aired a double feature of The Yellow Submarine and The Holy Grail.

At the time, I probably would have said my mind was blown. In retrospect, I’d prefer to think that my mind was expanded.

For years, those films mostly existed as a source of one-line evocations of sketch comedy nirvana that I’d swap with my friends. (I’m not sure I’ve ever lacked a group of peers where a properly paced “With… a herring!” had meaning.) But over time, I’ve come to appreciate other ways that the films have stuck with me. I can’t say whether they set me on an aesthetic trajectory that has continued for decades or if they were just the first things to tickle some underlying tendencies that were lurking in my not-yet-fully-wired brain.

In either case, my brain has developed into a huge fan of absurdism, whether in sketch comedy, longer narratives like Arrested Development or the lyrics of Courtney Barnett. Or, let’s face it, any stream of consciousness lyrics I’ve been able to hunt down. But Monty Python remains a master of the form, and The Holy Grail’s conclusion in a knight bust remains one of its purest expressions.

A bit less obviously, both films are probably my first exposures to anti-plotting, where linearity and a sense of time were really besides the point. With some rare exceptions—the eating of Sir Robin’s minstrels, Ringo putting a hole in his pocket—the order of the scenes were completely irrelevant. Few of the incidents had much consequence for future scenes. Since I was unused to staying up past midnight at that age, I’d imagine the order of events was fuzzy already by the next day. By the time I was swapping one-line excerpts with friends, it was long gone. And it just didn’t matter.

In retrospect, I think that helped ready my brain for things like Catch-22 and its convoluted, looping, non-Euclidean plotting. The novel felt like a revelation when I first read it, but I’ve since realized it fits a bit more comfortably within a spectrum of works that play tricks with time and find clever connections among seemingly random events.

I’m not sure what possessed someone to place these two films together as appropriate New Year’s Eve programming. But I’d like to think it was more intentional than I had any reason to suspect at the time. And I feel like I owe them a debt.

—John Timmer

A delightful send-up of autocracy

King Arthur attempting to throttle a peasant in the field

“See the violence inherent in the system!” Credit: Python (Monty) Pictures

What an impossible task to pick just a single thing I love about this film! But if I had to choose one scene, it would be when a lost King Arthur comes across an old woman—but oops, it’s actually a man named Dennis—and ends up in a discussion about medieval politics. Arthur explains that he is king because the Lady of the Lake conferred the sword Excalibur on him, signifying that he should rule as king of the Britons by divine right.

To this, Dennis replies, “Strange women lying in ponds distributing swords is no basis for a system of government. Supreme executive power derives from a mandate from the masses, not from some farcical aquatic ceremony.”

Even though it was filmed half a century ago, the scene offers a delightful send-up of autocracy. And not to be too much of a downer here, but all of us living in the United States probably need to be reminded that living in an autocracy would suck for a lot of reasons. So let’s not do that.

Eric Berger

Photo of Jennifer Ouellette

Jennifer is a senior writer at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Baltimore with her spouse, physicist Sean M. Carroll, and their two cats, Ariel and Caliban.

Monty Python and the Holy Grail turns 50 Read More »