
A history of the Internet, part 1: An ARPA dream takes form


Intergalactic Computer Network

In our new 3-part series, we remember the people and ideas that made the Internet.

A collage of vintage computer elements

Credit: Collage by Aurich Lawson


In a very real sense, the Internet, this marvelous worldwide digital communications network that you’re using right now, was created because one man was annoyed at having too many computer terminals in his office.

The year was 1966. Robert Taylor was the director of the Information Processing Techniques Office (IPTO) at the Advanced Research Projects Agency (ARPA), which President Eisenhower had created in 1958 in response to the launch of Sputnik. Taylor worked in the Pentagon, a great place for acronyms like ARPA and IPTO. He had three massive terminals crammed into a room next to his office, each connected to a different mainframe computer. They all worked slightly differently, and it was frustrating to remember multiple procedures to log in and retrieve information.

Author’s re-creation of Bob Taylor’s office with three teletypes. Credit: Rama & Musée Bolo (Wikipedia/Creative Commons), steve lodefink (Wikipedia/Creative Commons), The Computer Museum @ System Source

In those days, computers took up entire rooms, and users accessed them through teletype terminals—electric typewriters hooked up to either a serial cable or a modem and a phone line. ARPA was funding multiple research projects across the United States, but users of these different systems had no way to share their resources with each other. Wouldn’t it be great if there was a network that connected all these computers?

The dream is given form

Taylor’s predecessor, Joseph “J.C.R.” Licklider, had released a memo in 1963 that whimsically described an “Intergalactic Computer Network” that would allow users of different computers to collaborate and share information. The idea was mostly aspirational, and Licklider wasn’t able to turn it into a real project. But Taylor knew that he could.

In a 1998 interview, Taylor explained: “In most government funding, there are committees that decide who gets what and who does what. In ARPA, that was not the way it worked. The person who was responsible for the office that was concerned with that particular technology—in my case, computer technology—was the person who made the decision about what to fund and what to do and what not to do. The decision to start the ARPANET was mine, with very little or no red tape.”

Taylor marched into the office of his boss, Charles Herzfeld. He described how a network could save ARPA time and money by allowing different institutions to share resources. He suggested starting with a small network of four computers as a proof of concept.

“Is it going to be hard to do?” Herzfeld asked.

“Oh no. We already know how to do it,” Taylor replied.

“Great idea,” Herzfeld said. “Get it going. You’ve got a million dollars more in your budget right now. Go.”

Taylor wasn’t lying—at least, not completely. At the time, there were multiple people around the world thinking about computer networking. Paul Baran, working for RAND, published a paper in 1964 describing how a distributed military networking system could be made resilient even if some nodes were destroyed in a nuclear attack. Over in the UK, Donald Davies independently came up with a similar concept (minus the nukes) and invented a term for the way these types of networks would communicate. He called it “packet switching.”

On a regular phone network, after some circuit switching, a caller and answerer would be connected via a dedicated wire. They had exclusive use of that wire until the call was completed. Computers communicated in short bursts and didn’t require pauses the way humans did. So it would be a waste for two computers to tie up a whole line for extended periods. But how could many computers talk at the same time without their messages getting mixed up?

Packet switching was the answer. Messages were divided into multiple snippets. The order and destination were included with each message packet. The network could then route the packets in any way that made sense. At the destination, all the appropriate packets were put into the correct order and reassembled. It was like moving a house across the country: It was more efficient to send all the parts in separate trucks, each taking its own route to avoid congestion.

A simplified diagram of how packet switching works. Credit: Jeremy Reimer
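The mechanism is easy to sketch in code. The toy Python below is an illustration only (the field names, destination, and payload size are invented for the example; real packet formats carry far more metadata): it splits a message into numbered packets, scrambles their delivery order, and reassembles them at the destination.

```python
import random

def packetize(message: str, payload_size: int = 4):
    """Split a message into numbered packets (a toy format, not a real one)."""
    chunks = [message[i:i + payload_size] for i in range(0, len(message), payload_size)]
    return [{"seq": n, "dest": "UCLA", "data": chunk} for n, chunk in enumerate(chunks)]

def reassemble(packets):
    """At the destination, sort by sequence number and rebuild the message."""
    return "".join(p["data"] for p in sorted(packets, key=lambda p: p["seq"]))

packets = packetize("LOGIN SEQUENCE INITIATED")
random.shuffle(packets)  # the network may deliver packets in any order
assert reassemble(packets) == "LOGIN SEQUENCE INITIATED"
```

Because each packet carries its own sequence number and destination, the network is free to route every "truck" independently; order is restored only at the end of the trip.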

By the end of 1966, Taylor had hired a program director, Larry Roberts. Roberts sketched a diagram of a possible network on a napkin and met with his team to propose a design. One problem was that each computer on the network would need to use a big chunk of its resources to manage the packets. In a meeting, Wes Clark passed a note to Roberts saying, “You have the network inside-out.” Clark’s alternative plan was to ship a bunch of smaller computers to connect to each host. These dedicated machines would do all the hard work of creating, moving, and reassembling packets.

With the design complete, Roberts sent out a request for proposals for constructing the ARPANET. All they had to do now was pick the winning bid, and the project could begin.

BB&N and the IMPs

IBM, Control Data Corporation, and AT&T were among the first to respond to the request. They all turned it down. Their conclusion was the same: None of these giant companies believed the network could be built. IBM and CDC thought the dedicated computers would be too expensive, while AT&T flat-out said that packet switching would never work on its phone network.

In late 1968, ARPA announced a winner for the bid: Bolt Beranek and Newman. It seemed like an odd choice. BB&N had started as a consulting firm that calculated acoustics for theaters. But the need for calculations led to the creation of a computing division, and its first manager had been none other than J.C.R. Licklider. In fact, some BB&N employees had been working on a plan to build a network even before the ARPA bid was sent out. Robert Kahn led the team that drafted BB&N’s proposal.

Their plan was to create a network of “Interface Message Processors,” or IMPs, out of Honeywell 516 computers. They were ruggedized versions of the DDP-516 16-bit minicomputer. Each had 24 kilobytes of core memory and no mass storage other than a paper tape reader, and each cost $80,000 (about $700,000 today). In comparison, an IBM 360 mainframe cost between $7 million and $12 million at the time.

An original IMP, the world’s first router. It was the size of a large refrigerator. Credit: Steve Jurvetson (CC BY 2.0)

The 516’s rugged appearance appealed to BB&N, which didn’t want a bunch of university students tampering with its IMPs. The computer came with no operating system; it didn’t really have enough RAM for one anyway. The software to control the IMPs was written on bare metal using the 516’s assembly language. One of the developers was Will Crowther, who went on to create the first computer adventure game.

One other hurdle remained before the IMPs could be put to use: The Honeywell design was missing certain components needed to handle input and output. BB&N employees were dismayed that the first 516, which they named IMP-0, didn’t have working versions of the hardware additions they had requested.

It fell to Ben Barker, a brilliant undergraduate interning at BB&N, to manually fix the machine. Barker was the best choice, even though he had slight palsy in his hands. After several stressful 16-hour days wrapping and unwrapping wires, all the changes were complete and working. IMP-0 was ready.

In the meantime, Steve Crocker at the University of California, Los Angeles, was working on a set of software specifications for the host computers. It wouldn’t matter if the IMPs were perfect at sending and receiving messages if the computers themselves didn’t know what to do with them. Because the host computers were part of important academic research, Crocker didn’t want to seem like he was a dictator telling people what to do with their machines. So he titled his draft a “Request for Comments,” or RFC.

This one act of politeness forever changed the nature of computing. Every change since has been done as an RFC, and the culture of asking for comments pervades the tech industry even today.

RFC No. 1 proposed two types of host software. The first was the simplest possible interface, in which a computer pretended to be a dumb terminal. This was dubbed a “terminal emulator,” and if you’ve ever done any administration on a server, you’ve probably used one. The second was a more complex protocol that could be used to transfer large files. This became FTP, which is still used today.

A single IMP connected to one computer wasn’t much of a network. So it was very exciting in September 1969 when IMP-1 was delivered to BB&N and then shipped via air freight to UCLA. The first test of the ARPANET, a login attempt from UCLA to the Stanford Research Institute, was done with simultaneous phone support. The plan was to type “LOGIN” to start a login sequence. This was the exchange:

“Did you get the L?”

“I got the L!”

“Did you get the O?”

“I got the O!”

“Did you get the G?”

“Oh no, the computer crashed!”

It was an inauspicious beginning. The computer on the other end was helpfully filling in the “GIN” part of “LOGIN,” but the terminal emulator wasn’t expecting three characters at once and locked up. It was the first time that autocomplete had ruined someone’s day. The bug was fixed, and the test completed successfully.

IMP-2, IMP-3, and IMP-4 were delivered to the Stanford Research Institute (where Doug Engelbart was keen to expand his vision of connecting people), UC Santa Barbara, and the University of Utah.

Now that the four-node test network was complete, the team at BB&N could work with the researchers at each node to put the ARPANET through its paces. They deliberately created the first ever denial of service attack in January 1970, flooding the network with packets until it screeched to a halt.

The original ARPANET, predecessor of the Internet. Circles are IMPs, and rectangles are computers. Credit: DARPA

Surprisingly, many of the administrators of the early ARPANET nodes weren’t keen to join the network. They didn’t like the idea of anyone else being able to use resources on “their” computers. Taylor reminded them that their hardware and software projects were mostly ARPA-funded, so they couldn’t opt out.

The next month, Stephen Carr, Stephen Crocker, and Vint Cerf released RFC No. 33. It described a Network Control Protocol (NCP) that standardized how the hosts would communicate with each other. After this was adopted, the network was off and running.

J.C.R. Licklider, Bob Taylor, Larry Roberts, Steve Crocker, and Vint Cerf. Credit: US National Library of Medicine, WIRED, Computer Timeline, Steve Crocker, Vint Cerf

The ARPANET grew significantly over the next few years. Important events included the first ever email between two different computers, sent by Ray Tomlinson in late 1971. Another groundbreaking demonstration involved a PDP-10 at Harvard simulating, in real time, an aircraft landing on a carrier. The data was sent over the ARPANET to an MIT-based graphics terminal, and the wireframe graphical view was shipped back to a PDP-1 at Harvard and displayed on a screen. Although it was primitive and slow, it was technically the first gaming stream.

A big moment came in October 1972 at the International Conference on Computer Communication. This was the first time the network had been demonstrated to the public. Interest in the ARPANET was growing, and people were excited. A group of AT&T executives noticed a brief crash and laughed, confident that they were correct in thinking that packet switching would never work. Overall, however, the demonstration was a resounding success.

But the ARPANET was no longer the only network out there.

The two keystrokes on a Model 33 Teletype that changed history. Credit: Marcin Wichary (CC BY 2.0)

A network of networks

The rest of the world had not been standing still. In Hawaii, Norman Abramson and Franklin Kuo created ALOHAnet, which connected computers on the islands using radio. It was the first public demonstration of a wireless packet switching network. In the UK, Donald Davies’ team developed the National Physical Laboratory (NPL) network. It seemed like a good idea to start connecting these networks together, but they all used different protocols, packet formats, and transmission rates. In 1972, the heads of several national networking projects created an International Networking Working Group. Cerf was chosen to lead it.

The first attempt to bridge this gap was SATNET, also known as the Atlantic Packet Satellite Network. Using satellite links, it connected the US-based ARPANET with networks in the UK. Unfortunately, SATNET itself used its own set of protocols. In true tech fashion, an attempt to make a universal standard had created one more standard instead.

Robert Kahn asked Vint Cerf to try to fix these problems once and for all. They came up with a new plan called the Transmission Control Protocol, or TCP. The idea was to connect different networks through specialized computers, called “gateways,” that translated and forwarded packets. TCP was like an envelope for packets, making sure they got to the right destination on the correct network. Because some networks were not guaranteed to be reliable, when one computer successfully received a complete and undamaged message, it would send an acknowledgement (ACK) back to the sender. If the ACK wasn’t received in a certain amount of time, the message was retransmitted.
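The acknowledge-and-retransmit loop can be sketched as a toy “stop-and-wait” scheme in Python. This is an illustration of the idea only; real TCP uses sequence numbers, timers, and sliding windows rather than a simple retry loop, and the loss rate here is invented:

```python
import random

random.seed(42)  # deterministic for the example

def unreliable_send(item, loss_rate=0.3):
    """Simulate an unreliable network hop: returns the item, or None if dropped."""
    return None if random.random() < loss_rate else item

def send_with_retransmit(packet, max_tries=10):
    """Resend the packet until an ACK makes it back, or give up."""
    for attempt in range(1, max_tries + 1):
        delivered = unreliable_send(packet)
        if delivered is not None:         # the message arrived intact...
            ack = unreliable_send("ACK")  # ...and the ACK must survive the return trip
            if ack is not None:
                return attempt            # how many transmissions it took
    raise TimeoutError("no ACK received; giving up")

tries = send_with_retransmit({"seq": 1, "data": "LO"})
assert 1 <= tries <= 10
```

Note that a lost ACK forces a retransmission even though the message itself arrived, which is exactly why reliable protocols need sequence numbers to discard duplicates.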

In December 1974, Cerf, Yogen Dalal, and Carl Sunshine wrote a complete specification for TCP. Two years later, Cerf and Kahn, along with a dozen others, demonstrated the first three-network system. The demo connected packet radio, the ARPANET, and SATNET, all using TCP. Afterward, Cerf, Jon Postel, and Danny Cohen suggested a small but important change: They should take all the routing information out of TCP and put it into a new protocol, called the Internet Protocol (IP). Everything else, like breaking up and reassembling messages, detecting errors, and retransmission, would stay in TCP. Thus, in 1978, the protocol officially became known as TCP/IP.
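The division of labor can be pictured as nested envelopes. In this toy Python sketch (the field names are simplified stand-ins; real IP and TCP headers carry many more fields, such as ports, TTLs, and flags), a gateway reads only the outer IP envelope, while only the endpoints open the TCP segment inside:

```python
def tcp_segment(seq, payload):
    """TCP's share of the work: sequencing and error detection."""
    return {"seq": seq, "checksum": sum(payload.encode()) % 65536, "payload": payload}

def ip_packet(src, dst, segment):
    """IP's share: just addressing, so gateways can route the packet."""
    return {"src": src, "dst": dst, "data": segment}

pkt = ip_packet("192.0.2.1", "192.0.2.9", tcp_segment(0, "LOGIN"))

assert pkt["dst"] == "192.0.2.9"  # all a gateway needs in order to route the packet
assert pkt["data"]["seq"] == 0    # sequencing details stay inside, for the endpoints
```

Keeping the outer envelope minimal is what lets the network itself stay simple; everything complicated lives at the two ends of the conversation.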

A map of the Internet in 1977. White dots are IMPs, and rectangles are host computers. Jagged lines connect to other networks. Credit: The Computer History Museum

If the story of creating the Internet was a movie, the release of TCP/IP would have been the triumphant conclusion. But things weren’t so simple. The world was changing, and the path ahead was murky at best.

At the time, joining the ARPANET required leasing high-speed phone lines for $100,000 per year. This limited it to large universities, research companies, and defense contractors. The situation led the National Science Foundation (NSF) to propose a new network that would be cheaper to operate. Other educational networks arose at around the same time. While it made sense to connect these networks to the growing Internet, there was no guarantee that this would continue. And there were other, larger forces at work.

By the end of the 1970s, computers had improved significantly. The invention of the microprocessor set the stage for smaller, cheaper computers that were just beginning to enter people’s homes. Bulky teletypes were being replaced with sleek, TV-like terminals. The first commercial online service, CompuServe, was released to the public in 1979. For just $5 per hour, you could connect to a private network, get weather and financial reports, and trade gossip with other users. At first, these systems were completely separate from the Internet. But they grew quickly. By 1987, CompuServe had 380,000 subscribers.

A magazine ad for CompuServe from 1980. Credit: marbleriver

Meanwhile, the adoption of TCP/IP was not guaranteed. At the beginning of the 1980s, the Open Systems Interconnection (OSI) group at the International Organization for Standardization (ISO) decided that what the world needed was more acronyms—and also a new, global, standardized networking model.

The OSI model was first drafted in 1980, but it wasn’t published until 1984. Nevertheless, many European governments, and even the US Department of Defense, planned to transition from TCP/IP to OSI. It seemed like this new standard was inevitable.

The seven-layer OSI model. If you ever thought there were too many layers, you’re not alone. Credit: BlueCat Networks

While the world waited for OSI, the Internet continued to grow and evolve. In 1981, the fourth version of the IP protocol, IPv4, was released. On January 1, 1983, the ARPANET itself fully transitioned to using TCP/IP. This date is sometimes referred to as the “birth of the Internet,” although from a user’s perspective, the network still functioned the same way it had for years.

A map of the Internet from 1982. Ovals are networks, and rectangles are gateways. Hosts are not shown, but number in the hundreds. Note the appearance of modern-looking IPv4 addresses. Credit: Jon Postel

In 1986, the NSFNET came online, running under TCP/IP and connected to the rest of the Internet. It also used a new standard, the Domain Name System (DNS). This system, still in use today, used easy-to-remember names to point to a machine’s individual IP address. Computer names were assigned “top-level” domains based on their purpose, so you could connect to “frodo.edu” at an educational institution, or “frodo.gov” at a governmental one.
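Conceptually, DNS is a lookup from names to addresses. A toy resolver might look like the Python below (the table uses the article’s example hostnames with made-up addresses from the reserved documentation range; a real resolver queries a distributed hierarchy of name servers rather than a local table):

```python
# Toy name-to-address table; real DNS spreads this across many servers.
DNS_TABLE = {
    "frodo.edu": "192.0.2.10",
    "frodo.gov": "192.0.2.20",
}

def resolve(name: str) -> str:
    """Return the IP address registered for a hostname."""
    if name not in DNS_TABLE:
        raise LookupError(f"no such domain: {name}")
    return DNS_TABLE[name]

def top_level_domain(name: str) -> str:
    """The rightmost label signals the host's purpose (edu, gov, ...)."""
    return name.rsplit(".", 1)[-1]

assert resolve("frodo.edu") == "192.0.2.10"
assert top_level_domain("frodo.gov") == "gov"
```

The point of the indirection is that machines can change addresses without anyone having to memorize the new numbers; only the table entry changes.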

The NSFNET grew rapidly, dwarfing the ARPANET in size. In 1989, the original ARPANET was decommissioned. The IMPs, long since obsolete, were retired. However, all the ARPANET hosts were successfully migrated to other Internet networks. Like a Ship of Theseus, the ARPANET lived on even after every component of it was replaced.

The exponential growth of the ARPANET/Internet during its first two decades. Credit: Jeremy Reimer

Still, the experts and pundits predicted that all of these systems would eventually have to transfer over to the OSI model. The people who had built the Internet were not impressed. In 1987, writing in RFC No. 1000, Crocker said, “If we had only consulted the ancient mystics, we would have seen immediately that seven layers were required.”

The Internet pioneers felt they had spent many years refining and improving a working system. But now, OSI had arrived with a bunch of complicated standards and expected everyone to adopt their new design. Vint Cerf had a more pragmatic outlook. In 1982, he left ARPA for a new job at MCI, where he helped build the first commercial email system (MCI Mail) that was connected to the Internet. While at MCI, he contacted researchers at IBM, Digital, and Hewlett-Packard and convinced them to experiment with TCP/IP. Leadership at these companies still officially supported OSI, however.

The debate raged on through the latter half of the 1980s and into the early 1990s. Tired of the endless arguments, Cerf contacted the head of the National Institute of Standards and Technology (NIST) and asked him to write a blue ribbon report comparing OSI and TCP/IP. Meanwhile, while planning a successor to IPv4, the Internet Advisory Board (IAB) was looking at the OSI Connectionless Network Protocol and its 128-bit addressing for inspiration. In an interview with Ars, Vint Cerf explained what happened next.

“It was deliberately misunderstood by firebrands in the IETF [Internet Engineering Task Force] that we are traitors by adopting OSI,” he said. “They raised a gigantic hoo-hah. The IAB was deposed, and the authority in the system flipped. IAB used to be the decision makers, but the fight flips it, and IETF becomes the standard maker.”

To calm everybody down, Cerf performed a striptease at a meeting of the IETF in 1992. He revealed a T-shirt that said “IP ON EVERYTHING.” At the same meeting, David Clark summarized the feelings of the IETF by saying, “We reject kings, presidents, and voting. We believe in rough consensus and running code.”

Vint Cerf strips down to the bare essentials. Credit: Boardwatch and Light Reading

The fate of the Internet

The split design of TCP/IP, which was a small technical choice at the time, had long-lasting political implications. In 2001, David Clark and Marjory Blumenthal wrote a paper that looked back on the Protocol War. They noted that the Internet’s complex functions were performed at the endpoints, while the network itself ran only the IP part and was concerned simply with moving data from place to place. These “end-to-end principles” formed the basis of “… the ‘Internet Philosophy’: freedom of action, user empowerment, end-user responsibility for actions undertaken, and lack of controls ‘in’ the Net that limit or regulate what users can do,” they said.

In other words, the battle between TCP/IP and OSI wasn’t just about two competing sets of acronyms. On the one hand, you had a small group of computer scientists who had spent many years building a relatively open network and wanted to see it continue under their own benevolent guidance. On the other hand, you had a huge collective of powerful organizations that believed they should be in charge of the future of the Internet—and maybe the behavior of everyone on it.

But this seemingly intractable argument, and the ultimate fate of the Internet, were about to be decided, and not by governments, committees, or even the IETF. The world was changed forever by the actions of one man. He was a mild-mannered computer scientist, born in England and working for a physics research institute in Switzerland.

That’s the story covered in the next article in our series.

Photo of Jeremy Reimer

I’m a writer and web developer. I specialize in the obscure and beautiful, like the Amiga and newLISP.



How the Malleus maleficarum fueled the witch trial craze


Invention of printing press, influence of nearby cities created perfect conditions for social contagion.

Between 1400 and 1775, a significant upsurge of witch trials swept across early-modern Europe, resulting in the execution of an estimated 40,000–60,000 accused witches. Historians and social scientists have long studied this period in hopes of learning more about how large-scale social changes occur. Some have pointed to the invention of the printing press and the publication of witch-hunting manuals—most notably the highly influential Malleus maleficarum—as a major factor, making it easier for the witch-hunting hysteria to spread across the continent.

The abrupt emergence of the craze and its rapid spread, resulting in a pronounced shift in social behaviors—namely, the often brutal persecution of suspected witches—is consistent with a theory of social change dubbed “ideational diffusion,” according to a new paper published in the journal Theory and Society. Under this theory, new ideas are introduced and reinforced through social networks, eventually taking root and leading to widespread behavioral changes in a society.

The authors had already been thinking about cultural change and the driving forces by which it occurs, including social contagion—especially large cultural shifts like the Reformation and the Counter-Reformation, for example. One co-author, Steve Pfaff, a sociologist at Chapman University, was working on a project about witch trials in Scotland and was particularly interested in the role the Malleus maleficarum might have played.

“Plenty of other people have written about witch trials, specific trials or places or histories,” co-author Kerice Doten-Snitker, a social scientist with the Santa Fe Institute, told Ars. “We’re interested in building a general theory about change and wanted to use that as a particular opportunity. We realized that the printing of the Malleus maleficarum was something we could measure, which is useful when you want to do empirical work, not just theoretical work.”

Ch-ch-ch-changes…

The Witch, No. 1, c. 1892 lithograph by Joseph E. Baker, shows a woman in the dock with arms outstretched before a judge and jury. Credit: Public domain

Modeling how sweeping cultural change happens has been a hot research topic for decades, hitting the cultural mainstream with the publication of Malcolm Gladwell’s 2000 bestseller The Tipping Point. Researchers continue to make advances in this area. University of Pennsylvania sociologist Damon Centola, for instance, published How Behavior Spreads: The Science of Complex Contagions in 2018, in which he applied lessons from epidemiology—on how viral epidemics spread—to our understanding of how social networks can broadly alter human behavior. But while epidemiological modeling might be useful for certain simple forms of social contagion—people come into contact with something and it spreads rapidly, like a viral meme or hit song—other forms of social contagion are more complicated, per Doten-Snitker.

Doten-Snitker et al.’s ideational diffusion model differs from Centola’s in some critical respects. For cases like the spread of witch trials, “It’s not just that people are coming into contact with a new idea, but that there has to be something cognitively that is happening,” said Doten-Snitker. “People have to grapple with the ideas and undergo some kind of idea adoption. We talk about this as reinterpreting the social world. They have to rethink what’s happening around them in ways that make them think that not only are these attractive new ideas, but also those new ideas prescribe different types of behavior. You have to act differently because of what you’re encountering.”

The authors chose to focus on social networks and trade routes for their analysis of the witch trials, building on prior research that prioritized broader economic and environmental factors. Cultural elites were already exchanging ideas through letters, but published books added a new dimension to those exchanges. Researchers studying 21st century social contagion can download massive amounts of online data from social networks. That kind of data is sparse from the medieval era. “We don’t have the same archives of communication,” said Doten-Snitker. “There’s this dual thing happening: the book itself, and people sharing information, arguing back and forth with each other” about new ideas.

The stages of the ideational diffusion model. Credit: K. Doten-Snitker et al., 2024

So she and her co-authors turned to trade routes to determine which cities were more central and thus more likely to be focal points of new ideas and information. “The places that are more central in these trade networks have more stuff passing through and are more likely to come into contact with new ideas from multiple directions—specifically ideas about witchcraft,” said Doten-Snitker. Then they looked at which of 553 cities in Central Europe held their first witch trials, and when, as well as which cities had published the Malleus maleficarum and similar manuals.

Social contagion

They found that each new published edition of the Malleus maleficarum corresponded with a subsequent increase in witch trials. But that wasn’t the only contributing factor; trends in neighboring cities also influenced the increase, resulting in a slow-moving ripple effect that spread across the continent. “What’s the behavior of neighboring cities?” said Doten-Snitker. “Are they having witch trials? That makes your city more likely to have a witch trial when you have the opportunity.”

In epidemiological models like Centola’s, the pattern of change is a slow start with early adoption that then picks up speed and spreads before slowing down again as a saturation point is reached, because most people have now adopted the new idea or technology. That doesn’t happen with witch trials or other complex social processes such as the spread of medieval antisemitism. “Most things don’t actually spread that widely; they don’t reach complete saturation,” said Doten-Snitker. “So we need to have theories that build that in as well.”
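The contrast between simple and complex contagion can be made concrete with a toy threshold model (all numbers below are invented for illustration and are not from the paper). On a ring of cities where adopting an idea requires two adopting neighbors, the idea stalls almost immediately; with a threshold of one, it saturates the whole network:

```python
# A toy threshold ("complex contagion") model on a ring of cities.
def spread(n_cities=20, seeds=(0, 1), threshold=2, rounds=10):
    adopted = set(seeds)
    for _ in range(rounds):
        new = set()
        for city in range(n_cities):
            if city in adopted:
                continue
            neighbors = {(city - 1) % n_cities, (city + 1) % n_cities}
            if len(neighbors & adopted) >= threshold:  # enough adopting neighbors?
                new.add(city)
        if not new:   # nothing changed this round: the process has stalled
            break
        adopted |= new
    return adopted

# With a threshold of 2 on a ring, two adjacent seeds convert no one:
# every other city touches at most one adopter, so the idea stalls.
assert spread(threshold=2) == {0, 1}
# A simple contagion (threshold 1) saturates the whole network instead.
assert len(spread(threshold=1)) == 20
```

The stalled case is the point: complex contagions need reinforcement from multiple directions, which is why most such ideas never reach full saturation.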

In the case of witch trials, the publication of the Malleus maleficarum helped shift medieval attitudes toward witchcraft, from something that wasn’t viewed as a particularly pressing problem to something evil that was menacing society. The tome also offered practical advice on what should be done about it. “So there’s changing ideas about witchcraft and this gets coupled with, well, you need to do something about it,” said Doten-Snitker. “Not only is witchcraft bad, but it’s a threat. So you have a responsibility as a community to do something about witches.”

The term “witch hunt” gets bandied about frequently in modern times, particularly on social media, and is generally understood to reference a mob mentality unleashed on a given target. But Doten-Snitker emphasizes that medieval witch trials were not “mob justice”; they were organized affairs, with formal accusations made to a local judiciary that collected and evaluated evidence, using the Malleus maleficarum and similar treatises as a guide. The process, she said, is similar to how today’s governments adopt new policies.

Why conspiracy theories take hold

Cities where witch trials did and did not take place in Central Europe, 1400–1679, as well as those with printed copies of the Malleus maleficarum. Credit: K. Doten-Snitker et al., 2024

The authors developed their model using the witch trials as a useful framework, but there are contemporary implications, particularly with regard to the rampant spread of misinformation and conspiracy theories via social media. These can also lead to changes in real-world behavior, including violent outbreaks like the January 6, 2021, attack on the US Capitol or, more recently, threats aimed at FEMA workers in the wake of Hurricane Helene. Doten-Snitker thinks their model could help identify the emergence of certain telltale patterns, notably the combination of the spread of misinformation or conspiracy theories on social media along with practical guidelines for responding.

“People have talked about the ways that certain conspiracy theories end up making sense to people,” said Doten-Snitker. “It’s because they’re constructing new ways of thinking about their world. This is why people start with one conspiracy theory belief that is then correlated with belief in others. It’s because you’ve already started rebuilding your image of what’s happening in the world around you and that serves as a basis for how you should act.”

On the plus side, “It’s actually hard for something that feels compelling to certain people to spread throughout the whole population,” she said. “We should still be concerned about ideas that spread that could be socially harmful. We just need to figure out where it might be most likely to happen and focus our efforts in those places rather than assuming it is a global threat.”

There was a sharp decline in both the frequency and intensity of witch trial persecutions from 1679 onward, raising the question of how such cultural shifts eventually run their course. That aspect is not directly addressed by the model, according to Doten-Snitker, but it does provide a framework for the kinds of things that might signal a similar major shift, such as people starting to push back against extreme responses or practices. At the tail end of the witch trial craze, for instance, there was increased pressure to prioritize clear and consistent judicial practices that excluded extreme measures such as confessions extracted under torture and rejected dreams as evidence of witchcraft.

“That then supplants older ideas about what is appropriate and how you should behave in the world and you could have a de-escalation of some of the more extremist tendencies,” said Doten-Snitker. “It’s not enough to simply say those ideas or practices are wrong. You have to actually replace it with something. And that is something that is in our model. You have to get people to re-interpret what’s happening around them and what they should do in response. If you do that, then you are undermining a worldview rather than just criticizing it.”

Theory and Society, 2024. DOI: 10.1007/s11186-024-09576-1  (About DOIs).

Jennifer is a senior reporter at Ars Technica with a particular focus on where science meets culture, covering everything from physics and related interdisciplinary topics to her favorite films and TV series. Jennifer lives in Baltimore with her spouse, physicist Sean M. Carroll, and their two cats, Ariel and Caliban.

How the Malleus maleficarum fueled the witch trial craze Read More »

due-to-ai-fakes,-the-“deep-doubt”-era-is-here

Due to AI fakes, the “deep doubt” era is here

A person writing

Memento | Aurich Lawson

Given the flood of photorealistic AI-generated images washing over social media networks like X and Facebook these days, we’re seemingly entering a new age of media skepticism: the era of what I’m calling “deep doubt.” While questioning the authenticity of digital content stretches back decades—and analog media long before that—easy access to tools that generate convincing fake content has led to a new wave of liars using AI-generated scenes to deny real documentary evidence. Along the way, people’s existing skepticism toward online content from strangers may be reaching new heights.

Deep doubt is skepticism of real media that stems from the existence of generative AI. This manifests as broad public skepticism toward the veracity of media artifacts, which in turn leads to a notable consequence: People can now more credibly claim that real events did not happen and suggest that documentary evidence was fabricated using AI tools.

The concept behind “deep doubt” isn’t new, but its real-world impact is becoming increasingly apparent. Since the term “deepfake” first surfaced in 2017, we’ve seen a rapid evolution in AI-generated media capabilities. This has led to recent examples of deep doubt in action, such as conspiracy theorists claiming that President Joe Biden has been replaced by an AI-powered hologram and former President Donald Trump’s baseless accusation in August that Vice President Kamala Harris used AI to fake crowd sizes at her rallies. And on Friday, Trump cried “AI” again at a photo of him with E. Jean Carroll, a writer who successfully sued him for sexual assault, that contradicts his claim of never having met her.

Legal scholars Danielle K. Citron and Robert Chesney foresaw this trend years ago, coining the term “liar’s dividend” in 2019 to describe the consequence of deep doubt: deepfakes being weaponized by liars to discredit authentic evidence. But whereas deep doubt was once a hypothetical academic concept, it is now our reality.

The rise of deepfakes, the persistence of doubt

Doubt has been a political weapon since ancient times. This modern AI-fueled manifestation is just the latest evolution of a tactic where the seeds of uncertainty are sown to manipulate public opinion, undermine opponents, and hide the truth. AI is the newest refuge of liars.

Over the past decade, the rise of deep-learning technology has made it increasingly easy for people to craft false or modified pictures, audio, text, or video that appear to be non-synthesized organic media. Deepfakes were named after a Reddit user going by the name “deepfakes,” who shared AI-faked pornography on the service, swapping out the face of a performer with the face of someone else who wasn’t part of the original recording.

In the 20th century, one could argue that a certain part of our trust in media produced by others was a result of how expensive and time-consuming it was, and the skill it required, to produce documentary images and films. Even texts required a great deal of time and skill. As the deep doubt phenomenon grows, it will erode this 20th-century media sensibility. But it will also affect our political discourse, legal systems, and even our shared understanding of historical events that rely on that media to function—we rely on others to get information about the world. From photorealistic images to pitch-perfect voice clones, our perception of what we consider “truth” in media will need recalibration.

In April, a panel of federal judges highlighted the potential for AI-generated deepfakes to not only introduce fake evidence but also cast doubt on genuine evidence in court trials. The concern emerged during a meeting of the US Judicial Conference’s Advisory Committee on Evidence Rules, where the judges discussed the challenges of authenticating digital evidence in an era of increasingly sophisticated AI technology. Ultimately, the judges decided to postpone making any AI-related rule changes, but their meeting shows that the subject is already being considered by American judges.

Due to AI fakes, the “deep doubt” era is here Read More »

natgeo-documents-salvage-of-tuskegee-airman’s-lost-wwii-plane-wreckage

NatGeo documents salvage of Tuskegee Airman’s lost WWII plane wreckage

Remembering a hero this Juneteenth —

The Real Red Tails investigates the fatal crash of 2nd Lt. Frank Moody in 1944.

Michigan’s State Maritime Archaeologist Wayne R. Lusardi takes notes underwater at the Lake Huron WWII wreckage of 2nd Lt. Frank Moody’s P-39 Airacobra. Moody, one of the famed Tuskegee Airmen, fatally crashed in 1944.

National Geographic

In April 1944, a pilot with the Tuskegee Airmen, Second Lieutenant Frank Moody, was on a routine training mission when his plane malfunctioned. Moody lost control of the aircraft and plunged to his death in the chilly waters of Lake Huron. His body was recovered two months later, but the airplane was left at the bottom of the lake—until now. Over the last few years, a team of divers working with the Tuskegee Airmen National Historical Museum in Detroit has been diligently recovering the various parts of Moody’s plane to determine what caused the pilot’s fatal crash.

That painstaking process is the centerpiece of The Real Red Tails, a new documentary from National Geographic narrated by Sheryl Lee Ralph (Abbott Elementary). The documentary features interviews with the underwater archaeologists working to recover the plane, as well as firsthand accounts from Moody’s fellow airmen and stunning underwater footage from the wreck itself.

The Tuskegee Airmen were the first Black military pilots in the US Armed Forces and helped pave the way for the desegregation of the military. The men painted the tails of their P-47 planes red, earning them the nickname the Red Tails. (They initially flew Bell P-39 Airacobras like Moody’s downed plane, and later flew P-51 Mustangs.) It was then-First Lady Eleanor Roosevelt who helped tip popular opinion in favor of the fledgling unit when she flew with the Airmen’s chief instructor, C. Alfred Anderson, in March 1941. The Airmen earned praise for their skill and bravery in combat during World War II, with members being awarded three Distinguished Unit Citations, 96 Distinguished Flying Crosses, 14 Bronze Stars, 60 Purple Hearts, and at least one Silver Star.

  • 2nd Lt. Frank Moody’s official military portrait.

    National Archives and Records Administration

  • Tuskegee Airman Lt. Col. (Ret.) Harry T. Stewart.

    National Geographic/Rob Lyall

  • Stewart’s official portrait as a US Army Air Force pilot.

    National Archives and Records Administration

  • Tuskegee Airman Lt. Col. (Ret.) James H. Harvey.

    National Geographic/Rob Lyall

  • Harvey’s official portrait as a US Army Air Force pilot.

    National Archives and Records Administration

  • Stewart and Harvey (second and third, l-r).

    James Harvey

  • Stewart stands next to a restored WWII Mustang airplane at the Tuskegee Airmen National Museum in Detroit.

    National Geographic/Rob Lyall

A father-and-son team, David and Drew Losinski, discovered the wreckage of Moody’s plane in 2014 during cleanup efforts for a sunken barge. They saw what looked like a car door lying on the lake bed that turned out to be a door from a WWII-era P-39. The red paint on the tail proved it had been flown by a “Red Tail” and it was eventually identified as Moody’s plane. The Losinskis then joined forces with Wayne Lusardi, Michigan’s state maritime archaeologist, to explore the remarkably well-preserved wreckage. More than 600 pieces have been recovered thus far, including the engine, the propeller, the gearbox, machine guns, and the main 37mm cannon.

Ars caught up with Lusardi to learn more about this fascinating ongoing project.

Ars Technica: The area where Moody’s plane was found is known as Shipwreck Alley. Why have there been so many wrecks—of both ships and airplanes—in that region?

Wayne Lusardi: Well, the Great Lakes are big, and if you haven’t been on them, people don’t really understand they’re literally inland seas. Consequently, there has been a lot of maritime commerce on the lakes for hundreds of years. Wherever there’s lots of ships, there’s usually lots of accidents. It’s just the way it goes. What we have in the Great Lakes, especially around some places in Michigan, are really bad navigation hazards: hidden reefs, rock piles that are just below the surface that are miles offshore and right near the shipping lanes, and they often catch ships. We have bad storms that crop up immediately. We have very chaotic seas. All of those combined to take out lots of historic vessels. In Michigan alone, there are about 1,500 shipwrecks; in the Great Lakes, maybe close to 10,000 or so.

One of the biggest causes of airplanes getting lost offshore here is fog. Especially before they had good navigation systems, pilots got lost in the fog and sometimes crashed into the lake or just went missing altogether. There are also thunderstorms, weather conditions that impact air flight here, and a lot of ice and snow storms.

Just like commercial shipping, the aviation heritage of the Great Lakes is extensive; a lot of the bigger cities on the Eastern Seaboard extend into the Great Lakes. It’s no surprise that they populated the waterfront, the shorelines first, and in the early part of the 20th century, started connecting them through aviation. The military included the Great Lakes in their training regimes because during World War I, the conditions that you would encounter in the Great Lakes, like flying over big bodies of water, or going into remote areas to strafe or to bomb, mimicked what pilots would see in the European theater during the first World War. When Selfridge Field near Detroit was developed by the Army Air Corps in 1917, it was the farthest northern military air base in the United States, and it trained pilots to fly in all-weather conditions to prepare them for Europe.

NatGeo documents salvage of Tuskegee Airman’s lost WWII plane wreckage Read More »

shackleton-died-on-board-the-quest;-ship’s-wreckage-has-just-been-found

Shackleton died on board the Quest; ship’s wreckage has just been found

A ship called Quest —

“His final voyage kind of ended that Heroic Age of Exploration.”

Ghostly historical black and white photo of a ship breaking in two in the process of sinking

Ernest Shackleton died on board the Quest in 1922. Forty years later, the ship sank off Canada’s Atlantic Coast.

Tore Topp/Royal Canadian Geographical Society

Famed polar explorer Ernest Shackleton defied the odds to survive the sinking of his ship, Endurance, which became trapped in sea ice in 1914. His luck ran out on his follow-up expedition; he died unexpectedly of a heart attack in 1922 on board a ship called Quest. The ship survived that expedition and sailed for another 40 years, eventually sinking in 1962 after its hull was pierced by ice on a seal-hunting run. Shipwreck hunters have now located the remains of the converted Norwegian sealer in the Labrador Sea, off the coast of Newfoundland, Canada. The wreckage of Endurance was found in pristine condition in 2022 at the bottom of the Weddell Sea.

The Quest expedition’s relatively minor accomplishments might lack the nail-biting drama of the Endurance saga, but the wreck is nonetheless historically significant. “His final voyage kind of ended that Heroic Age of Exploration, of polar exploration, certainly in the south,” renowned shipwreck hunter David Mearns told the BBC. “Afterwards, it was what you would call the scientific age. In the pantheon of polar ships, Quest is definitely an icon.”

As previously reported, Endurance set sail from Plymouth, England, on August 6, 1914, with Shackleton joining his crew in Buenos Aires, Argentina. By January 1915, the ship had become hopelessly locked in sea ice, unable to continue its voyage. For 10 months, the crew endured the freezing conditions, waiting for the ice to break up. The ship’s structure remained intact, but by October 25, Shackleton realized Endurance was doomed. He and his men opted to camp out on the ice some two miles (3.2 km) away, taking as many supplies as they could with them.

Compacted ice and snow continued to fill the ship until a pressure wave hit on November 13, crushing the bow and splitting the main mast—all of which was captured on camera by crew photographer Frank Hurley. Another pressure wave hit in late afternoon November 21, lifting the ship’s stern. The ice floes parted just long enough for Endurance to finally sink into the ocean, before closing again to erase any trace of the wreckage.

When the sea ice finally disintegrated in April 1916, the crew launched lifeboats and managed to reach Elephant Island five days later. Shackleton and five of his men set off for South Georgia the next month to get help—a treacherous 720-mile journey by open boat. A storm blew them off course, and they ended up landing on the unoccupied southern shore. So Shackleton left three men behind while he and a companion navigated dangerous mountain terrain to reach the whaling station at Stromness on May 2. A relief ship collected the other three men and finally arrived back on Elephant Island in August. Miraculously, Shackleton’s entire crew was still alive.

This is the stern of the good ship Endurance, which sank off the coast of Antarctica in 1915 after being crushed by pack ice. An expedition located the shipwreck in pristine condition in 2022 after nearly 107 years.

Falklands Maritime Heritage Trust/NatGeo

Shackleton’s last voyage

By the time Shackleton got back to England, the country was embroiled in World War I, and many of his men enlisted. Shackleton was considered too old for active service. He was also deeply in debt from the Endurance expedition, earning a living on the lecture circuit. But he still dreamed of making another expedition to the Arctic Ocean north of Alaska to explore the Beaufort Sea. He got seed money (and eventually full funding) from an old school chum, John Quiller Rowett. Shackleton purchased a wooden Norwegian whaler, Foca I, which his wife Emily renamed Quest.

Shackleton died on board the Quest; ship’s wreckage has just been found Read More »

gaming-historians-preserve-what’s-likely-nintendo’s-first-us-commercial

Gaming historians preserve what’s likely Nintendo’s first US commercial

A Mega Mego find —

Mego’s “Time Out” spot pitched Nintendo’s Game & Watch handhelds under a different name.

“So slim you can play it anywhere.”

Gamers of a certain age may remember Nintendo’s Game & Watch line, which predated the cartridge-based Game Boy by offering simple, single-serving LCD games that can fetch a pretty penny at auction today. But even most ancient gamers probably don’t remember Mego’s “Time Out” line, which took the internals of Nintendo’s early Game & Watch titles and rebranded them for an American audience that hadn’t yet heard of the Japanese game maker.

Now, the Video Game History Foundation (VGHF) has helped preserve the original film of an early Mego Time Out commercial, marking the recovered, digitized video as “what we believe is the first commercial for a Nintendo product in the United States.” The 30-second TV spot—which is now available in a high-quality digital transfer for the first time—provides a fascinating glimpse into how marketers positioned some of Nintendo’s earliest games to a public that still needed to be sold on the very idea of portable gaming.

Imagine an “electronic sport”

A 1980 Mego catalog sells Nintendo’s Game & Watch games under the toy company’s “Time Out” branding.

Founded in the 1950s, Mego made a name for itself in the 1970s with licensed movie action figures and early robotic toys like the 2-XL (a childhood favorite of your humble author). In 1980, though, Mego branched out to partner with a brand-new, pre-Donkey Kong Nintendo of America to release rebranded versions of four early Game & Watch titles: Ball (which became Mego’s “Toss-Up”), Vermin (“Exterminator”), Fire (“Fireman Fireman”), and Flagman (“Flag Man”).

While Mego would go out of business by 1983 (long before a 2018 brand revival), in 1980, the company had the pleasure and responsibility of introducing America to Nintendo games for the first time, even if they were being sold under the Mego name. And while home systems like the Atari VCS and Intellivision were already popular with the American public at the time, Mego had to sell the then-new idea of simple black-and-white games you could play away from the living room TV (Milton Bradley Microvision notwithstanding).

The 1980 Mego spot that introduced Nintendo games to the US, now preserved in high-resolution.

That’s where a TV spot from Durona Productions came in. If you were watching TV in the early ’80s, you might have heard an announcer doing a bad Howard Cosell impression selling the Time Out line as “the new electronic sport,” suitable as a pastime for athletes who have been injured jogging or playing tennis or basketball.

The ad also had to introduce even extremely basic gaming functions like “an easy game and a hard game,” high score tracking, and the ability to “tell time” (as Douglas Adams noted, humans were “so amazingly primitive that they still [thought] digital watches [were] a pretty neat idea”). And the ad made a point of highlighting that the game is “so slim you can play it anywhere,” complete with a close-up of the unit fitting in the back pocket of a rollerskater’s tight shorts.

Preserved for all time

This early Nintendo ad wasn’t exactly “lost media” before now; you could find fuzzy, video-taped versions online, including variations that talk up the pocket-sized games as sports “where size and strength won’t help.” But the Video Game History Foundation has now digitized and archived a much higher quality version of the ad, courtesy of an original film reel discovered in an online auction by game collector (and former game journalist) Chris Kohler. Kohler acquired the rare 16 mm film and provided it to VGHF, which in turn reached out to film restoration experts at Movette Film Transfer to help color-correct the faded, 40-plus-year-old print and encode it in full 2K resolution for the first time.

This important historical preservation work is as good an excuse as any to remember a time when toy companies were still figuring out how to convince the public that Nintendo’s newfangled portable games were something that could fit into their everyday life. As VGHF’s Phil Salvador writes, “it feels laser-targeted to the on-the-go yuppie generation of the ’80s with disposable income to spend on electronic toys. There’s shades of how Nintendo would focus on young, trendy, mobile demographics in their more recent marketing campaigns… but we’ve never seen an ad where someone plays Switch in the hospital.”

Gaming historians preserve what’s likely Nintendo’s first US commercial Read More »

can-an-online-library-of-classic-video-games-ever-be-legal?

Can an online library of classic video games ever be legal?

Legal eagles —

Preservationists propose access limits, but industry worries about a free “online arcade.”

The Q*Bert’s so bright, I gotta wear shades.

Aurich Lawson | Getty Images | Gottlieb

For years now, video game preservationists, librarians, and historians have been arguing for a DMCA exemption that would allow them to legally share emulated versions of their physical game collections with researchers remotely over the Internet. But those preservationists continue to face pushback from industry trade groups, which worry that an exemption would open a legal loophole for “online arcades” that could give members of the public free, legal, and widespread access to copyrighted classic games.

This long-running argument was joined once again earlier this month during livestreamed testimony in front of the Copyright Office, which is considering new DMCA rules as part of its regular triennial process. During that testimony, representatives of the Software Preservation Network and the Library Copyright Alliance defended their proposal for a system of “individualized human review” to help ensure that temporary remote game access would be granted “primarily for the purposes of private study, scholarship, teaching, or research.”

Lawyer Steve Englund, who represented the ESA at the Copyright Office hearing.

Speaking for the Entertainment Software Association trade group, though, lawyer Steve Englund said the new proposal was “not very much movement” on the part of the proponents and was “at best incomplete.” And when pressed on what would represent “complete” enough protections to satisfy the ESA, Englund balked.

“I don’t think there is at the moment any combination of limitations that ESA members would support to provide remote access,” Englund said. “The preservation organizations want a great deal of discretion to handle very valuable intellectual property. They have yet to… show a willingness on their part in a way that might be comforting to the owners of that IP.”

Getting in the way of research

Research institutions can currently offer remote access to digital copies of works like books, movies, and music due to specific DMCA exemptions issued by the Copyright Office. However, there is no similar exemption that allows for sending temporary digital copies of video games to interested researchers. That means museums like the Strong Museum of Play can only provide access to their extensive game archives if a researcher physically makes the trip to their premises in Rochester, New York.

Currently, the only way for researchers to access these games in the Strong Museum’s collection is to visit Rochester, New York, in person.

During the recent Copyright Office hearing, industry lawyer Robert Rothstein tried to argue that this amounts to more of a “travel problem” than a legal problem that requires new rule-making. But NYU professor Laine Nooney argued back that the need for travel represents “a significant financial and logistical impediment to doing research.”

For Nooney, getting from New York City to the Strong Museum in Rochester would require a five- to six-hour drive “on a good day,” they said, as well as overnight accommodations for any research that’s going to take more than a small part of one day. Because of this, Nooney has only been able to access the Strong collection twice in her career. For researchers who live farther afield—or for grad students and researchers who might not have as much funding—even a single research visit to the Strong might be out of reach.

“You don’t go there just to play a game for a couple of hours,” Nooney said. “Frankly my colleagues in literary studies or film history have pretty routine and regular access to digitized versions of the things they study… These impediments are real and significant and they do impede research in ways that are not equitable compared to our colleagues in other disciplines.”

Limited access

Lawyer Kendra Albert.

During the hearing, lawyer Kendra Albert said the preservationists had proposed the idea of human review of requests for remote access to “strike a compromise” between “concerns of the ESA and the need for flexibility that we’ve emphasized on behalf of preservation institutions.” They compared the proposed system to the one already used to grant access for libraries’ “special collections,” which are not made widely available to all members of the public.

But while preservation institutions may want to provide limited scholarly access, Englund argued that “out in the real world, people want to preserve access in order to play games for fun.” He pointed to public comments made to the Copyright Office from “individual commenters [who] are very interested in playing games recreationally” as evidence that some will want to exploit this kind of system.

Even if an “Ivy League” library would be responsible with a proposed DMCA exemption, Englund worried that less scrupulous organizations might simply provide an online “checkbox” for members of the public who could easily lie about their interest in “scholarly play.” If a human reviewed that checkbox affirmation, it could provide a legal loophole to widespread access to an unlimited online arcade, Englund argued.

Will any restrictions be enough?

VGHF Library Director Phil Salvador.

Phil Salvador of the Video Game History Foundation said that Englund’s concern on this score was overblown. “Building a video game collection is a specialized skill that most libraries do not have the human labor to do, or the expertise, or the resources, or even the interest,” he said.

Salvador estimated that the number of institutions capable of building a physical collection of historical games is in the “single digits.” And that’s before you account for the significant resources needed to provide remote access to those collections; Rhizome Preservation Director Dragan Espenschied said it costs their organization “thousands of dollars a month” to run the sophisticated cloud-based emulation infrastructure needed for a few hundred users to access their Emulation as a Service art archives and gaming retrospectives.

Salvador also made reference to last year’s VGHF study that found a whopping 87 percent of games ever released are out of print, making it difficult for researchers to get access to huge swathes of video game history without institutional help. And the games of most interest to researchers are less likely to have had modern re-releases since they tend to be the “more primitive” early games with “less popular appeal,” Salvador said.

The Copyright Office is expected to rule on the preservation community’s proposed exemption later this year. But for the moment, there is some frustration that the industry has not been at all receptive to the significant compromises the preservation community feels it has made on these potential concerns.

“None of that is ever going to be sufficient to reassure these rights holders that it will not cause harm,” Albert said at the hearing. “If we’re talking about practical realities, I really want to emphasize the fact that proponents have continually proposed compromises that allow preservation institutions to provide the kind of access that is necessary for researchers. It’s not clear to me that it will ever be enough.”

Can an online library of classic video games ever be legal? Read More »

explore-a-digitized-collection-of-doomed-everest-climber’s-letters-home

Explore a digitized collection of doomed Everest climber’s letters home

“Because it’s there” —

Collection includes three letters found on Mallory’s body in 1999, preserved for 75 years.

The final letter from George Mallory from Camp I, Mount Everest, to his wife Ruth Mallory, May 27, 1924.

The Master and Fellows of Magdalene College, Cambridge

In June 1924, a British mountaineer named George Leigh Mallory and a young engineering student named Andrew “Sandy” Irvine set off for the summit of Mount Everest and disappeared—just two casualties of a peak that has claimed over 300 lives to date. Mallory was an alumnus of Magdalene College at the University of Cambridge, which maintains a collection of his personal correspondence, much of it between Mallory and his wife, Ruth. The college has now digitized the entire collection for public access. The letters can be accessed and downloaded here.

“It has been a real pleasure to work with these letters,” said Magdalene College archivist Katy Green. “Whether it’s George’s wife Ruth writing about how she was posting him plum cakes and a grapefruit to the trenches (he said the grapefruit wasn’t ripe enough), or whether it’s his poignant last letter where he says the chances of scaling Everest are ’50 to 1 against us,’ they offer a fascinating insight into the life of this famous Magdalene alumnus.”

As previously reported, Mallory is the man credited with uttering the famous line “because it’s there” in response to a question about why he would risk his life repeatedly to summit Everest. An avid mountaineer, Mallory had already been to the mountain twice before the 1924 expedition: once in 1921 as part of a reconnaissance expedition to produce the first accurate maps of the region and again in 1922—his first serious attempt to summit, although he was forced to turn back on all three attempts. A sudden avalanche killed seven Sherpas on his third try, sparking accusations of poor judgement on Mallory’s part.

Undeterred, Mallory was back in 1924 for the fated Everest expedition that would claim his life at age 37. He aborted his first summit attempt, but on June 4, he and Irvine left Advanced Base Camp (21,330 feet/6,500 meters). They reached Camp 5 on June 6, and Camp 6 the following day, before heading out for the summit on June 8. Team member Noel Odell reported seeing the two men climbing either the First or Second Step around 1 pm before they were “enveloped in a cloud once more.”

Nobody ever saw Mallory and Irvine again, although their spent oxygen tanks were found just below the First Step. Climbers also found Irvine’s ice axe in 1933. Mallory’s body wasn’t found until 1999, when an expedition partially sponsored by Nova and the BBC found the remains on the mountain’s north face, at 26,760 feet (8,157 meters)—just below where Irvine’s axe had been found. The name tags on the clothing read “G. Leigh Mallory.” Personal artifacts confirmed the identity: an altimeter, a pocket knife, snow goggles, a letter, and a bill for climbing equipment from a London supplier. Irvine’s body has yet to be found, despite the best efforts of a 2019 National Geographic expedition, detailed in the riveting 2020 documentary Lost on Everest.


The Master and Fellows of Magdalene College, Cambridge

The collection makes for some fascinating reading; Mallory led an adventurous life. Among the highlights of the Magdalene College collection is the final letter Mallory wrote to Ruth before his fateful last summit attempt:

“Darling I wish you the best I can—that your anxiety will be at an end before you get this—with the best news. Which will also be the quickest. It is 50 to 1 against us but we’ll have a whack yet & do ourselves proud. Great love to you. Ever your loving, George.”

Three of the letters were found in Mallory’s jacket pocket, exceptionally well-preserved, when his body was discovered 75 years after his disappearance. Other letters detailed his experiences at the Battle of the Somme during World War I; his first reconnaissance expedition to Everest; and the aforementioned second Everest expedition in which seven Sherpas were lost. On a lighter note are letters describing his adventures during a 1923 trip to the Prohibition-era US. (He would ask for milk at speakeasies and get whiskey served to him through a secret hatch.) There are also letters from Ruth—including her only surviving letter to Mallory during his Everest explorations—and from Mallory’s sister, Mary Brooke.



Take a trip through gaming history with this charming GDC display

Remember when —

Come for the retro Will Wright photo, stay for the game with a pack-in harmonica.

  • Only the most dedicated “Carmen” fans—or North Dakotan educators of a certain age—are likely to have this one in their collections.

    Kyle Orland / VGHF

  • These “pretty cool stickers” came from a “Carmen Day” kit that producer Broderbund sent to schools to encourage themed edutainment activities that went beyond the screen.

    Kyle Orland / VGHF

  • As a nearby placard laments: “When female human characters were depicted in early video games, they often fell into stereotypical roles”—nature-loving girls or sexualized adults being chief among them.

    Kyle Orland / VGHF

  • Despite the lack of diverse female representation in early games, early game ads were often equal-opportunity affairs.

    Kyle Orland / VGHF

  • Don’t be fooled by the wide variety of headshots on these boxes—you needed to invest in “Alter Ego: Female Version” to get the full suite of personas.

    Kyle Orland / VGHF

  • Kyle Orland / VGHF

  • We’re struggling to think of any other video games that came packaged with a harmonica.

    Kyle Orland / VGHF

  • A standard Game Boy Camera hooked up to USB-C output via a customized board. VGHF used the setup to trade customized postcards for donations (see some examples in the background).

    Kyle Orland / VGHF

  • “EXTREME CLOSE-UP IS EXTREMELY SIGNIFICANT.”

    Kyle Orland / VGHF

  • Be the coolest beachgoer in all of Zebes with these promotional sunglasses.

    Kyle Orland / VGHF

  • A ’90s photo of the Maxis team, including a downright baby-faced Will Wright (back row, second from left).

    Kyle Orland / VGHF

  • VGHF’s Phil Salvador told me that this cow was one of the top results when you searched for “’90s mousepad” on eBay.

    Kyle Orland / VGHF

  • The brief heyday of music-based CD-ROM “multimedia” experiences is rightly forgotten by most consumers, and rightly remembered by organizations like VGHF.

    Kyle Orland / VGHF

  • Ever wonder what specific Pantone swatch to use for that perfect “Joker jacket purple”? Wonder no longer!

    Kyle Orland / VGHF

SAN FRANCISCO—Trade shows like the Game Developers Conference and the (dearly departed) E3 are a great chance to see what’s coming down the pike for the game industry. But they can also be a great place to celebrate gaming’s history, as we’ve shown you with any number of on-site photo galleries in years past.

The history display tucked away in a corner of this year’s Game Developers Conference—the first one arranged by the Video Game History Foundation—was a little different. Rather than simply laying out a parcel of random collectibles, as past history-focused booths have, VGHF took a more curated approach, with mini-exhibits focused on specific topics like women in gaming, oddities of gaming music, and an entire case devoted to a little-known entry in a famous edutainment series.

Then there was the central case, devoted to the idea that all sorts of ephemera—from design docs to photos to pre-release prototypes to newsletters to promotional items—were all an integral part of video game history. The organization is practically begging developers, journalists, and fan hoarders of all stripes not to throw out even items that seem like they have no value. After all, today’s trash might be tomorrow’s important historic relic.

As we wrap up GDC (and get to work assembling what we’ve seen into future coverage), please enjoy this gallery of some of the more interesting historical specimens that the VGHF had at this year’s show.

Listing image by Kyle Orland / VGHF



Darwin Online has virtually reassembled the naturalist’s personal library

“I have bought many books….” —

Previous catalogs only listed about 15 percent of the naturalist’s extensive collection.

Oil painting by Victor Eustaphieff of Charles Darwin in his study at Down House; one of the many bookcases that made up his extensive personal library is reflected in the mirror.


State Darwin Museum, Moscow

Famed naturalist Charles Darwin amassed an impressive personal library over the course of his life, much of which was preserved and cataloged upon his death in 1882. But many other items were lost, including more ephemeral items like unbound volumes, pamphlets, journals, clippings, and so forth, often only vaguely referenced in Darwin’s own records.

For the last 18 years, the Darwin Online project has painstakingly scoured all manner of archival records to reassemble a complete catalog of Darwin’s personal library virtually. The project released its complete 300-page online catalog—consisting of 7,400 titles across 13,000 volumes, with links to electronic copies of the works—to mark Darwin’s 215th birthday on February 12.

“This unprecedentedly detailed view of Darwin’s complete library allows one to appreciate more than ever that he was not an isolated figure working alone but an expert of his time building on the sophisticated science and studies and other knowledge of thousands of people,” project leader John van Wyhe of the National University of Singapore said. “Indeed, the size and range of works in the library makes manifest the extraordinary extent of Darwin’s research into the work of others.”

Darwin was a notoriously voracious reader, and Down House was packed with books, scientific journals, pamphlets, and magazine clippings that caught his interest. He primarily kept his personal library in his study: an “Old Study” and, after an 1877 addition to the west end of the house, a “New Study.” A former governess named Louise Buob described how Darwin’s books and papers inevitably spilled “into the hall and corridors, whose walls are covered with books.”

The French literary critic Francisque Sarcey remarked in 1880 that the walls of the New Study were concealed “top to bottom” with books, as well as two bookcases in the middle of the study—one filled with books, the other with scientific instruments. This was very much a working library, with well-worn and often tattered books, as opposed to fine leather-bound volumes designed for display. After Darwin died, an appraiser valued the scientific library at just 30 pounds (about 2,000 pounds today) and the entire collection of books at a mere 66 pounds (about 4,400 pounds today). Collectors now pay a good deal more for a single book that once belonged to Darwin.

An issue of a German scientific periodical sent to Darwin in 1877; it contained the first published photographs of bacteria.


Public domain

The two main collections of Darwin’s books—amounting to some 1,480 titles—are housed at the University of Cambridge and Down House, respectively, but that number does not include the more ephemeral items referred to in Darwin’s own records. According to the folks at Darwin Online, tracking down every single obscure reference to a publication was a case study in diligent detective work since Darwin often only hurriedly jotted down a few notes, with crucial information like author, date, or even the source of a clipping often missing.

Many of these have now been identified for the first time. One of the project’s major sources was a handwritten 426-page compilation from 1875, whose abbreviated entries eventually yielded 440 previously unknown titles originally in Darwin’s library. They also scoured Darwin’s reading notebooks, Emma Darwin’s diaries, a 1908 catalog of books donated to Cambridge, and the Darwin Correspondence (30 volumes in all), as well as historic auction and rare book catalogs.

The newly discovered items in Darwin’s library include works by philosophers John Stuart Mill and Auguste Comte, as well as Charles Babbage and what was at the time a controversial book on gorillas: Paul du Chaillu’s Explorations and Adventures in Equatorial Africa. The naturalist also owned a copy of an 1826 article on the habits of the turkey buzzard by ornithologist John James Audubon. His personal library also included less heady fare, like a coffee table book of heliotrope illustrations, an 1832 road atlas of England and Wales, an 1852 treatise on investments, a book about chess, an illustrated 1821 book on the Nomenclature of Colours, and a book on the “water cure” for chronic disease. (Darwin was a devotee of the water cure—not to be confused with the method of torture—for his many ailments.)

As impressive as the Darwin Online catalog currently is, the project is still ongoing. “There can be no doubt that further works that belonged to Darwin and his family remain to be recorded here,” the folks at Darwin Online wrote, and the project welcomes any information that leads to those missing works.

The frontispiece of Principles of Geology, volume 1 by Charles Lyell, a book from which Darwin drew inspiration to explain how species change over time.

Public domain



Fans preserve and emulate Sega’s extremely rare ‘80s “AI computer”

Welcome back to the stage of history. —

Prolog-based Japanese education hardware sported an early touch-panel, speech synthesizer.

“Expanding the Possibilities…..with Artificial Intelligence”

Even massive Sega fans would be forgiven for not being too familiar with the Sega AI Computer. After all, the usually obsessive documentation over at Sega Retro includes only the barest stub of an information page for the quixotic, education-focused 1986 hardware.

Thankfully, the folks at the self-described “Sega 8-bit preservation and fanaticism” site SMS Power have been able to go a little deeper. The site’s recently posted deep dive on the Sega AI Computer includes an incredible amount of well-documented information on this historical oddity, including ROMs for dozens of previously unpreserved pieces of software that can now be partially run on MAME.

An ’80s vision of AI’s future

The Sega AI Computer hardware, complete with keyboard and Sound Box accessories.


The Sega AI Computer sports a 16-bit NEC chip running at a blazing 5 MHz and a massive 128KB of RAM (those adjectives were much less ironic when the computer was released in 1986). SMS Power’s research suggests the device was “mostly sold to Japanese schools” between 1986 and 1989, which helps explain its overall obscurity decades later. Ads from the time suggest a US version was briefly planned but never released.

Despite the Japan-only release, the Sega AI Computer’s casing includes an English-language message stressing its support for the AI-focused Prolog language and a promise that it will “bring you into the world of artificial intelligence.” Indeed, a 1986 article in Electronics magazine (preserved by SMS Power) describes what sounds like a kind of simple and wholesome early progenitor of today’s world of generative AI creations:

In the prompt mode, the child is asked about his or her activities during the day and replies with one- and two-word answers. The computer program then writes a grammatically correct diary entry based on those replies. In more advanced CAI applications, the computer is more flexible than previous systems. It can parse a user’s natural-language inputs and evaluate the person’s ability level. It can then proceed to material of appropriate difficulty, rather than simply advancing one level at a time.

Besides its unique focus on an ’80s-era vision of AI, the Sega AI Computer is also notable for its use of a controller that features a large rectangular touch surface that could be customized with overlays included in the software to make a brand-new interface. The system also features a built-in speech synthesizer that could re-create basic Japanese phonemes and an FM audio chip that could play back samples like those stored on some of the system’s cassette-tape software.

A preservation mountain climb

These '80s-era Japanese schoolchildren are ready to learn about AI with Sega's help!


While the general existence of the Sega AI Computer has been known in certain circles for a while, detailed information about its workings and software was extremely hard to come by, especially in the English-speaking world. That began to change in 2014 when a rare Yahoo Auctions listing offered a boxed AI Computer along with 15 pieces of software. The site was able to crowdfund the winning bid on that auction (which reportedly ran the equivalent of $1,100) and later obtained a keyboard and more software from the winner of a 2022 auction.

SMS Power notes that the majority of the software it has uncovered “had zero information about them on the Internet prior to us publishing them: no screenshots, no photos or scans of actual software.” Now, the site’s community has taken the trouble to preserve all those ROMs and create a new MAME driver that already allows for “partial emulation” of the system (which doesn’t yet include a keyboard, tape drive, or speech emulation support).

The title screen of a Gulliver's Travels-themed piece of software for the Sega AI Computer.


That dumped software is all “educational and mostly aimed at kids,” SMS Power notes, and is laden with Japanese text that will make it hard for many foreigners to even tinker with. That means this long-lost emulation release probably won’t set the MAME world on fire as 2022’s surprise dump of Marble Madness II did.

Still, it’s notable how much effort the community has put in to fill what was formerly a black hole in our understanding of this corner of Sega history. SMS Power’s write-up of its findings is well worth a full look, as is the site’s massive Google Drive, which is filled with documentation, screenshots, photos, contemporaneous articles and ads, and much more.



Aaarr matey! Life on a 17th century pirate ship was less chaotic than you think

On the sixth day of Christmas —

Ars chats with historian Rebecca Simon about her most recent book, The Pirates’ Code.

white skull and crossbones on black background

There’s rarely time to write about every cool science-y story that comes our way. So this year, we’re once again running a special Twelve Days of Christmas series of posts, highlighting one science story that fell through the cracks in 2023, each day from December 25 through January 5. Today: Pirates! Specifically, an interview with historian Rebecca Simon on the real-life buccaneer bylaws that shaped every aspect of a pirate’s life.

One of the many amusing scenes in the 2003 film Pirates of the Caribbean: The Curse of the Black Pearl depicts Elizabeth Swann (Keira Knightley) invoking the concept of “parley” in the pirate code to negotiate a cessation of hostilities with pirate captain Hector Barbossa (Geoffrey Rush). “The code is more what you’d call guidelines than actual rules,” he informs her. Rebecca Simon, a historian at Santa Monica College, delves into the real, historical set of rules and bylaws that shaped every aspect of a pirate’s life with her latest book, The Pirates’ Code: Laws and Life Aboard Ship.

Simon is the author of such books as Why We Love Pirates: The Hunt for Captain Kidd and How He Changed Piracy Forever and Pirate Queens: The Lives of Anne Bonny and Mary Read. Her PhD thesis research focused on pirate trials and punishment. She had been reading a book about Captain Kidd and the war against the pirates and was curious as to why he had been executed in an East London neighborhood called Wapping, at Execution Dock on the Thames. People were usually hanged at Tyburn, near Marble Arch in modern-day West London. “Why was Captain Kidd taken to a different place? What was special about that?” Simon told Ars. “Nothing had been written much about it at all, especially in connection to piracy. So I began researching how pirate trials and executions were done in London. I consider myself to be a legal historian of crime and punishment through the lens of piracy.”

Ars sat down with Simon to learn more.

(left) Fanciful painting of Kidd and his ship, Adventure Galley, in New York Harbor. (right) Captain Kidd, gibbeted near Tilbury in Essex following his execution in 1701.

Public domain

Ars Technica: How did the idea of a pirates’ code come about?

Rebecca Simon: Two of the pirates that I mention in the book—Ned Low and Bartholomew Roberts—their code was actually published in newspapers in London. I don’t know where they got it. Maybe it was made up for the sake of readership, because that is getting towards the tail end of the Golden Age of Piracy, the 1720s. But we find examples of other codes in A General History of the Pyrates, written by a man named Captain Charles Johnson in 1724. It included many pirate biographies, and a lot of it was largely fictionalized. So we take it with a grain of salt. But we do know that pirates did have a notion of law and order and regulations and ritual, based on survivor accounts.

You had to be very organized. You had to have very specific rules because as a pirate, you’re facing death every second of the day, more so than if you are a merchant or a fisherman or a member of the Royal Navy.  Pirates go out and attack to get the goods that they want. In order to survive all that, they have to be very meticulously prepared. Everyone has to know their exact role and everyone has to have a game plan going in. Pirates didn’t attack willy-nilly out of control. No way. They all had a role.

Ars Technica: Is it challenging to find primary sources about this? You rely a lot on trial transcripts, as well as eyewitness accounts and maritime logs.

Rebecca Simon: It’s probably one of the best ways to learn about how pirates lived on the ship, especially through their own words, because pirates didn’t leave records. These trial transcripts were literal transcriptions of the back and forth between the lawyer and the pirate, answering very specific questions in very specific detail. They were transcribed verbatim and they sold for profit. People found them very interesting. It’s really the only place where we really get to hear the pirate’s voice. So to me that was always one of the best ways to find information about pirates, because anything else you’re looking at is the background or the periphery around the pirates: arrest records, or observations of how the pirate seemed to be acting and what the pirate said. We have to take that with a grain of salt because  we’re only hearing it from a third party.

Ars Technica: Some of the pirate codes seemed surprisingly democratic. They divided the spoils equitably according to rank, so there was a social hierarchy, but there was also a sense of fairness.

Rebecca Simon: You needed to have a sense of order on a pirate ship. One of the big draws that pirates used to recruit hostages to officially join them in piracy was to tell them they’d get an equal share. This was quite rare on many other ships, where payment was based per person, or maybe just a flat rate across the board. A lot of times your wages might get withheld or you wouldn’t necessarily get the wages you were promised. On a pirate ship, everyone knew the amount of money they were going to get based on the hierarchy and based on their skill level. The quartermaster was in charge of doling out all of the spoils or the stolen goods. If someone was caught taking more than their share, that was a huge deal.

You could get very severely punished perhaps by marooning or being jailed below the hold. The punishment had to be decided by the whole crew, so it didn’t seem like the captain was being unfair or overly brutal. Pirates could also vote out their captain if they felt the captain was doing a bad job, such as not going after enough ships, taking too much of his share, being too harsh in punishment, or not listening to the crew. Again, this is all to keep order. You had to keep morale very high, you had to make sure there was very little discontent or infighting.

“The code is more like guidelines than actual rules”: Geoffrey Rush as Captain Hector Barbossa in Pirates of the Caribbean: The Curse of the Black Pearl (2003).

Walt Disney Pictures

Ars Technica: Pirates have long been quite prominent in popular culture. What explains their enduring appeal? 

Rebecca Simon: During the 1700s, when pirates were very active, they fascinated people in London and England because they were very far removed from piracy, more so than those who traded a lot for a living in North America and the Caribbean. But it used to be that you were born into your social class and there was no social mobility. You’re born poor because your father was poor, your grandfather was poor, your children will be poor, your grandchildren will be poor. Most pirates started out as poor sailors but as pirates they could become wealthy. If a pirate was lucky, they could make enough in one or two years and then retire and live comfortably. People also have a morbid fascination for these brutal people committing crimes. Think about all the true crime podcasts and  true crime documentaries on virtually every streaming service today. We’re just attracted to that. It was the same with piracy.

Going into the 19th century, we have the publication of the book Treasure Island, an adventure story harking back to this idea of piracy in a way that generations hadn’t seen before. This is during a time period where there was sort of a longing for adventure in general and Treasure Island fed into this. That is what spawned the pop culture pirate going into the 20th century. Everything people know about pirates, for the most part, they’re getting from Treasure Island. The whole treasure map, X marks the spot, the eye patch, the peg leg, the speech. Pirate popularity has ebbed and flowed in the 20th and 21st centuries. Of course, the Pirates of the Caribbean franchise was a smash hit. And I think during the pandemic, people were feeling very confined and upset with leadership. Pirates were appealing because they cast all that off and we got shows like Black Sails and Our Flag Means Death.

Ars Technica: Much of what you do is separate fact from fiction, such as the legend of Captain Kidd’s buried treasure. What are some of the common misconceptions that you find yourself correcting, besides buried treasure?

Rebecca Simon:  A lot of people ask me about the pirate accent: “Aaarr matey!” That accent we think of comes from the actor Robert Newton who played Long John Silver in the 1950 film Treasure Island. In reality, it just depended on where they were born. At the end of the day, pirates were sailors. People ask about what they wore, what they ate, thinking it’s somehow different. But the reality is it was the same as other sailors. They might have had better clothes and better food because of how often they robbed other ships.

Another misconception is that pirates were after gold and jewels and treasure. In the 17th and 18th centuries, “treasure” just meant “valuable.” They wanted goods they could sell. So about 50 percent was stuff they kept to replenish their own ship and their stores. The other 50 percent were goods they could sell: textiles, wine, rum, sugar, and (unfortunately) the occasional enslaved person counted as cargo. There’s also a big misconception that pirates were all about championing the downtrodden: they hated slavery and they freed enslaved people. They hated corrupt authority. That’s not the reality. They were still people of their time. Blackbeard, aka Edward Teach, did capture a slave ship and he did include those slaves in his crew. But he later sold them at a slave port.

Female pirates Anne Bonny and Mary Read were a deadly duo who plundered their way to infamy.


Public domain

Thanks to Our Flag Means Death and Black Sails, people sometimes assume that all pirates were gay or bisexual. That’s also not true. The concept of homosexuality as we think of it just didn’t exist back then. It was more situational homosexuality arising from confined close quarters and being very isolated for a long period of time. And it definitely was not all pirates. There was about the same percentage of gay or bisexual pirates as in your own workplace, but it was not discussed, and it was considered to be a crime. There’s this idea that pirate ships had gay marriage; that wasn’t necessarily a thing. They practiced something called matelotage, a formal agreement that legally paired you with someone so that, if one of you died, your goods went to the other. It was like a civil union. Were some of these done romantically? It’s possible. We just don’t know because that sort of stuff was never, ever recorded.

Ars Technica:  Your prior book, Pirate Queens, focused on female pirates like Anne Bonny and Mary Read. It must have been challenging for a woman to pass herself off as a man on a pirate ship.

Rebecca Simon: You’d have to take everything into consideration: the way you dressed, the way you walked, the way you talked. A lot of women who would be on a pirate ship were probably very wiry, having been maids who hauled buckets of coal and water and goods and did a lot of physical activity all day. They could probably pass themselves off as boys or adolescents who were not growing facial hair. So it probably wasn’t too difficult. Going to the bathroom was a big thing. Men would pee over the edge of the ship. How’s a woman going to do this? You put a funnel under the dress and pee through the funnel, which can create a stream going over the side of the ship. When it’s really crowded, men aren’t exactly going to be looking at that very carefully.

The idea of Anne Bonny and Mary Read being lesbians is a 20th century concept, originating with an essay by a feminist writer in the 1970s. There’s no evidence for it. There’s no historical documentation about them before they entered into piracy. According to Captain Charles Johnson’s highly fictionalized account, Mary disguised herself as a male sailor. Anne fell in love with this male sailor on the ship and tried to seduce him, only to discover he was a woman. Anne was “disappointed.” There’s no mention of Anne and Mary actually getting together. Anne was the lover of Calico Jack Rackham; Mary was married to a crew member. This was stated in the trial. And when both women were put on trial and found guilty of piracy, they both revealed they were pregnant.

Rebecca Simon is the author of The Pirates’ Code: Laws and Life Aboard Ship.

University of Chicago Press/Rebecca Simon

Ars Technica: Pirates had notoriously short careers: about two years on average. Why would they undertake all that risk for such a short time?

Rebecca Simon: There’s the idea that you can get wealthy quickly. There were a lot of people who became pirates because they had no other choice. Maybe they were criminals, or work was not available to them. Pirate ships were extremely diverse. You did have black people as crew members, maybe freed or escaped enslaved people. They usually had the most menial jobs, but they did exist on ships. Some actively chose it because working conditions on merchant ships and naval ships were very tough and they didn’t always have access to good food or medical care. And many people were forced into it, captured as hostages to replace pirates who had been killed in battle.

Ars Technica: What were the factors that led to the end of what we call the Golden Age of Piracy?

Rebecca Simon: There were several reasons why piracy really began to die down in the 1720s. One was an increase in the Royal Navy presence so the seas were a lot more heavily patrolled and it was becoming more difficult to make a living as a pirate. Colonial governors and colonists were no longer supporting pirates the way they once had, so a lot of pirates were now losing their alliances and protections. A lot of major pirate leaders who had been veterans of the War of the Spanish Succession as privateers had been killed in battle by the 1720s: people like Charles Vane, Edward Teach, Benjamin Hornigold, Henry Jennings, and Sam Bellamy.

It was just becoming too risky. And by 1730, a lot more wars were breaking out, which required people who could sail and fight. Pirates were offered pardons if they agreed to become privateers, basically government-sanctioned mercenaries at sea contracted to attack specific enemies. As payment, they got to keep about 80 percent of what they stole. A lot of pirates decided that was more lucrative and more stable.

Ars Technica: What was the most surprising thing that you learned while you were researching and writing this book?

Rebecca Simon: Stuff about food, oddly enough. I was really surprised by how much people went after turtles as food. Apparently turtles are very high in vitamin C and had long been believed to cure all kinds of illnesses and impotence. Also, pirates weren’t really religious, but Bartholomew Roberts would dock at shore so his crew could celebrate Christmas—perhaps as an appeasement. When pirates were put on trial, they always said they were forced into it. The lawyers would ask if they took their share after the battle ended. If they said yes, the law deemed them a pirate: you participated, so it didn’t matter if they forced you. Finally, my PhD thesis was on crime and the law and executions. People would ask me about ships, but I didn’t study ships at all. So this book really broadened my maritime knowledge and helped me understand how ships worked and how the people on board operated.
