Politics


Google’s DeepMind is building an AI to keep us from hating each other


The AI did better than professional mediators at getting people to reach agreement.


An unprecedented 80 percent of Americans, according to a recent Gallup poll, think the country is deeply divided over its most important values ahead of the November elections. The general public’s polarization now encompasses issues such as immigration, health care, identity politics, transgender rights, and whether we should support Ukraine. Fly across the Atlantic and you’ll see the same thing happening in the European Union and the UK.

To try to reverse this trend, Google’s DeepMind built an AI system designed to aid people in resolving conflicts. It’s called the Habermas Machine after Jürgen Habermas, a German philosopher who argued that an agreement in a public sphere can always be reached when rational people engage in discussions as equals, with mutual respect and perfect communication.

But is DeepMind’s Nobel Prize-winning ingenuity really enough to solve our political conflicts the same way it cracked chess, StarCraft, and protein-structure prediction? Is it even the right tool?

Philosopher in the machine

One of the cornerstone ideas in Habermas’ philosophy is that the reason people can’t agree with each other is fundamentally procedural and does not lie in the problem under discussion itself. There are no irreconcilable issues—it’s just that the mechanisms we use for discussion are flawed. If we could create an ideal communication system, Habermas argued, we could work every problem out.

“Now, of course, Habermas has been dramatically criticized for this being a very exotic view of the world. But our Habermas Machine is an attempt to do exactly that. We tried to rethink how people might deliberate and use modern technology to facilitate it,” says Christopher Summerfield, a professor of cognitive science at Oxford University and a former DeepMind staff scientist who worked on the Habermas Machine.

The Habermas Machine relies on what’s called the caucus mediation principle. This is where a mediator, in this case the AI, holds private meetings with all the discussion participants individually, takes their statements on the issue at hand, and then gets back to them with a group statement, trying to get everyone to agree with it. DeepMind’s mediating AI plays to one of the strengths of LLMs: the ability to quickly summarize a long body of text. The difference here is that instead of summarizing one piece of text provided by one user, the Habermas Machine summarizes multiple texts provided by multiple users, trying to extract the shared ideas and find common ground among them.

But it has more tricks up its sleeve than simply processing text. At a technical level, the Habermas Machine is a system of two large language models. The first is a generative model based on a slightly fine-tuned version of Chinchilla, a somewhat dated LLM introduced by DeepMind back in 2022. Its job is to generate multiple candidates for a group statement based on the statements submitted by the discussion participants. The second component is a reward model that analyzes individual participants’ statements and uses them to predict how likely each individual is to agree with the candidate group statements proposed by the generative model.

Once that’s done, the candidate group statement with the highest predicted acceptance score is presented to the participants. The participants then write critiques of this group statement and feed them back into the system, which generates updated group statements and repeats the process. The cycle continues until the group statement is acceptable to everyone.
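In outline, that deliberation loop is simple enough to sketch in a few lines of code. What follows is a minimal illustration of the two-model design as described above, not DeepMind’s actual implementation; the `generate_candidates`, `predict_agreement`, and `collect_critiques` helpers are hypothetical stand-ins for the fine-tuned Chinchilla generator, the reward model, and the human participants.

```python
def mediate(opinions, generate_candidates, predict_agreement,
            collect_critiques, max_rounds=5):
    """A sketch of the caucus-style mediation loop described above."""
    critiques = None
    best = None
    for _ in range(max_rounds):
        # Generate several candidate group statements from all of the
        # participants' opinions (and, after round one, their critiques).
        candidates = generate_candidates(opinions, critiques)

        # Pick the candidate with the highest total predicted acceptance:
        # the reward model scores each candidate against every participant.
        best = max(
            candidates,
            key=lambda c: sum(predict_agreement(o, c) for o in opinions),
        )

        # Present the winning statement and gather critiques; no critiques
        # means everyone accepts it and consensus has been reached.
        critiques = collect_critiques(best)
        if not critiques:
            return best

    return best  # no unanimous agreement within the round budget
```

The load-bearing step is the selection: rather than asking a single model to guess at consensus, the reward model scores every candidate against each participant individually, so the statement that survives is the one predicted to be acceptable across the whole group.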

Once the AI was ready, DeepMind’s team started a fairly large testing campaign that involved over five thousand people discussing issues such as “should the voting age be lowered to 16?” or “should the British National Health Service be privatized?” Here, the Habermas Machine outperformed human mediators.

Scientific diligence

Most of the first batch of participants were sourced through a crowdsourcing research platform. They were divided into groups of five, and each group was assigned a topic to discuss, chosen from a list of over 5,000 statements about important issues in British politics. There were also control groups working with human mediators. In the caucus mediation process, those human mediators achieved a 44 percent acceptance rate for their handcrafted group statements. The AI scored 56 percent. Participants usually found the AI’s group statements to be better written as well.

But the testing didn’t end there. Because people you can find on crowdsourcing research platforms are unlikely to be representative of the British population, DeepMind also used a more carefully selected group of participants. It partnered with the Sortition Foundation, which specializes in organizing citizen assemblies in the UK, and assembled a group of 200 people representative of British society in terms of age, ethnicity, socioeconomic status, and so on. The assembly was divided into groups of three that deliberated over the same nine questions. And the Habermas Machine worked just as well.

The agreement rate for the statement “we should be trying to reduce the number of people in prison” rose from 60 percent before discussion to 75 percent afterward. Support for the more divisive idea of making it easier for asylum seekers to enter the country went from 39 percent at the start to 51 percent at the end of the discussion, crossing into majority support. The same thing happened with the question of encouraging national pride, which started at 42 percent support and ended at 57 percent. The views held by the people in the assembly converged on five out of nine questions. Agreement was not reached on issues like Brexit, where participants were particularly entrenched in their starting positions. Still, in most cases, they left the experiment less divided than they had been coming in. But there were some question marks.

The questions were not selected entirely at random. They were vetted, as the team wrote in their paper, to “minimize the risk of provoking offensive commentary.” But isn’t that just an elegant way of saying, ‘We carefully chose issues unlikely to make people dig in and throw insults at each other so our results could look better’?

Conflicting values

“One example of the things we excluded is the issue of transgender rights,” Summerfield told Ars. “This, for a lot of people, has become a matter of cultural identity. Now clearly that’s a topic which we can all have different views on, but we wanted to err on the side of caution and make sure we didn’t make our participants feel unsafe. We didn’t want anyone to come out of the experiment feeling that their basic fundamental view of the world had been dramatically challenged.”

The problem is that when your aim is to make people less divided, you need to know where the division lines are drawn. And those lines, if Gallup polls are to be trusted, are not only drawn between issues like whether the voting age should be 16 or 18 or 21. They are drawn between conflicting values. The Daily Show’s Jon Stewart argued that, for the right side of the US’s political spectrum, the only division line that matters today is “woke” versus “not woke.”

Summerfield and the rest of the Habermas Machine team excluded the question of transgender rights because they believed participants’ well-being should take precedence over the benefit of testing their AI’s performance on more divisive issues. They excluded other questions as well, such as the problem of climate change.

Here, the reason Summerfield gave was that climate change is a part of an objective reality—it either exists or it doesn’t, and we know it does. It’s not a matter of opinion you can discuss. That’s scientifically accurate. But when the goal is fixing politics, scientific accuracy isn’t necessarily the end state.

If major political parties are to accept the Habermas Machine as a mediator, it has to be universally perceived as impartial. But at least some of the people behind AIs argue that an AI can’t be impartial. After OpenAI released ChatGPT in 2022, Elon Musk posted a tweet, the first of many, in which he argued against what he called “woke” AI. “The danger of training AI to be woke—in other words, lie—is deadly,” Musk wrote. Eleven months later, he announced Grok, his own AI system marketed as “anti-woke.” Over 200 million of his followers were introduced to the idea that there were “woke AIs” that had to be countered by building “anti-woke AIs”—a world where the AI was no longer an agnostic machine but a tool pushing the political agendas of its creators.

Playing pigeons’ games

“I personally think Musk is right that there have been some tests which have shown that the responses of language models tend to favor more progressive and more libertarian views,” Summerfield says. “But it’s interesting to note that those experiments have been usually run by forcing the language model to respond to multiple-choice questions. You ask ‘is there too much immigration’ for example, and the answers are either yes or no. This way the model is kind of forced to take an opinion.”

He said that if you pose the same queries as open-ended questions, the responses you get are, for the most part, neutral and balanced. “So, although there have been papers that express the same view as Musk, in practice, I think it’s absolutely untrue,” Summerfield claims.

Does it even matter?

Summerfield did what you would expect a scientist to do: He dismissed Musk’s claims as based on a selective reading of the evidence. That’s usually checkmate in the world of science. But in the world of politics, being correct is not what matters most. Musk’s message was short, catchy, and easy to share and remember. Trying to counter it by discussing methodology in papers nobody read was a bit like playing chess with a pigeon.

At the same time, Summerfield had his own ideas about AI that others might consider dystopian. “If politicians want to know what the general public thinks today, they might run a poll. But people’s opinions are nuanced, and our tool allows for aggregation of opinions, potentially many opinions, in the highly dimensional space of language itself,” he says. While his idea is that the Habermas Machine can potentially find useful points of political consensus, nothing is stopping it from also being used to craft speeches optimized to win over as many people as possible.

That may be in keeping with Habermas’ philosophy, though. If you look past the myriad abstract concepts ever-present in German idealism, it offers a pretty bleak view of the world. “The system,” driven by the power and money of corporations and corrupt politicians, is out to colonize “the lifeworld,” roughly equivalent to the private sphere we share with our families, friends, and communities. The way you get things done in “the lifeworld” is through seeking consensus, and the Habermas Machine, according to DeepMind, is meant to help with that. The way you get things done in “the system,” on the other hand, is through winning: playing it like a game, no holds barred. And the Habermas Machine apparently can help with that, too.

The DeepMind team reached out to Habermas to get him involved in the project. They wanted to know what he’d have to say about the AI system bearing his name. But Habermas never got back to them. “Apparently, he doesn’t use emails,” Summerfield says.

Science, 2024.  DOI: 10.1126/science.adq2852


Jacek Krywko is a freelance science and technology writer who covers space exploration, artificial intelligence research, computer science, and all sorts of engineering wizardry.



Due to AI fakes, the “deep doubt” era is here


Given the flood of photorealistic AI-generated images washing over social media networks like X and Facebook these days, we’re seemingly entering a new age of media skepticism: the era of what I’m calling “deep doubt.” While questioning the authenticity of digital content stretches back decades—and analog media long before that—easy access to tools that generate convincing fake content has led to a new wave of liars using AI-generated scenes to deny real documentary evidence. Along the way, people’s existing skepticism toward online content from strangers may be reaching new heights.

Deep doubt is skepticism of real media that stems from the existence of generative AI. This manifests as broad public skepticism toward the veracity of media artifacts, which in turn leads to a notable consequence: People can now more credibly claim that real events did not happen and suggest that documentary evidence was fabricated using AI tools.

The concept behind “deep doubt” isn’t new, but its real-world impact is becoming increasingly apparent. Since the term “deepfake” first surfaced in 2017, we’ve seen a rapid evolution in AI-generated media capabilities. This has led to recent examples of deep doubt in action, such as conspiracy theorists claiming that President Joe Biden has been replaced by an AI-powered hologram and former President Donald Trump’s baseless accusation in August that Vice President Kamala Harris used AI to fake crowd sizes at her rallies. And on Friday, Trump cried “AI” again, this time over a photo of him with E. Jean Carroll, the writer who successfully sued him for sexual assault; the photo contradicts his claim of never having met her.

Legal scholars Danielle K. Citron and Robert Chesney foresaw this trend years ago, coining the term “liar’s dividend” in 2019 to describe the consequence of deep doubt: deepfakes being weaponized by liars to discredit authentic evidence. But whereas deep doubt was once a hypothetical academic concept, it is now our reality.

The rise of deepfakes, the persistence of doubt

Doubt has been a political weapon since ancient times. This modern AI-fueled manifestation is just the latest evolution of a tactic where the seeds of uncertainty are sown to manipulate public opinion, undermine opponents, and hide the truth. AI is the newest refuge of liars.

Over the past decade, the rise of deep-learning technology has made it increasingly easy for people to craft false or modified pictures, audio, text, or video that appear to be non-synthesized organic media. Deepfakes were named after a Reddit user going by the name “deepfakes,” who shared AI-faked pornography on the service, swapping out the face of a performer for the face of someone else who wasn’t part of the original recording.

In the 20th century, one could argue that a certain part of our trust in media produced by others was a result of how expensive and time-consuming it was, and the skill it required, to produce documentary images and films. Even texts required a great deal of time and skill. As the deep doubt phenomenon grows, it will erode this 20th-century media sensibility. But it will also affect our political discourse, legal systems, and even our shared understanding of historical events, all of which rely on that media to function, because we rely on others to get information about the world. From photorealistic images to pitch-perfect voice clones, our perception of what we consider “truth” in media will need recalibration.

In April, a panel of federal judges highlighted the potential for AI-generated deepfakes to not only introduce fake evidence but also cast doubt on genuine evidence in court trials. The concern emerged during a meeting of the US Judicial Conference’s Advisory Committee on Evidence Rules, where the judges discussed the challenges of authenticating digital evidence in an era of increasingly sophisticated AI technology. Ultimately, the judges decided to postpone making any AI-related rule changes, but their meeting shows that the subject is already being considered by American judges.



Wyoming mayoral candidate wants to govern by AI bot


Victor Miller is running for mayor of Cheyenne, Wyoming, with an unusual campaign promise: If elected, he will not be calling the shots—an AI bot will. VIC, the Virtual Integrated Citizen, is a ChatGPT-based chatbot that Miller created. And Miller says the bot has better ideas—and a better grasp of the law—than many people currently serving in government.

“I realized that this entity is way smarter than me, and more importantly, way better than some of the outward-facing public servants I see,” he says. According to Miller, VIC will make the decisions, and Miller will be its “meat puppet,” attending meetings, signing documents, and otherwise doing the corporeal job of running the city.

But whether VIC—and Victor—will be allowed to run at all is still an open question.

Because it’s not legal for a bot to run for office, Miller says he is technically the one on the ballot, at least on the candidate paperwork filed with the state.

When Miller went to register his candidacy at the county clerk’s office, he says, he “wanted to use Vic without my last name. And so I had read the statute, so it merely said that you have to print what you are generally referred to as. So you know, most people call me Vic. My name is Victor Miller. So on the ballot Vic is short for Victor Miller, the human.”

When Miller came home from filing, he told the then-nameless chatbot about it and says it “actually came up with the name Virtual Integrated Citizen.”

In a statement to WIRED, Wyoming Secretary of State Chuck Gray said, “We are monitoring this very closely to ensure uniform application of the Election Code.” Gray said that anyone running for office must be a “qualified elector,” “which necessitates being a real person. Therefore, an AI bot is not a qualified elector.” Gray also sent a letter to the county clerk raising concerns about VIC and suggesting that the clerk reject Miller’s application for candidacy.



Mayans burned and buried dead political regimes

Winning isn’t everything!

After burning, the remains were dumped in construction fill.

Mayans built impressive structures and occasionally put interesting items in the construction fill.

As civilizations evolve, so do the political regimes that govern them. But the transition from one era to another is not always quiet. Some ancient Mayan rulers made a very fiery public statement about who was in charge.

When archaeologists dug up the burned fragments of royal bodies and artifacts at the Mayan archaeological site of Ucanal in Guatemala, they realized they were looking at the last remnants of a fallen regime. There was no scorching on the walls of the structure they were found beneath. That could only have meant that the remains, which had already been in their tombs for a hundred years, were consumed by flames in one place and buried in another. But why?

The team of archaeologists, led by Christina T. Halperin of the University of Montreal, thinks this was the doing of a new leader who wanted to annihilate all traces of the old regime. He couldn’t just burn them; he also had to bury them where they would be forgotten.

Into the fire

While there is other evidence of Mayans burning bodies and objects from old regimes, a ritual known as och-i k’ak’ t-u-muk-il (“the fire entered his/her tomb”), this is the first time burnt royal remains have been discovered somewhere other than their original tomb. They were found underneath construction fill at the base of a temple whose upper parts are thought to have been made from materials that did not last.

Radiocarbon dating revealed these remains were burned around the same time as the ascent of the ruler Papmalil, who assumed the title of ochk’in kaloomte’ or “western overlord,” suggesting he may have been foreign. Inscriptions of his name were seen at the same site where the burnt fragments were unearthed. Papmalil’s rise meant the fall of the K’anwitznal dynasty—the one that the bones and ornaments most likely belonged to. It also marked the start of a period of great prosperity.

“Papmalil’s rule was not only seminal because of his possible foreign origins—perhaps breaking the succession of ruling dynasts at the site—but also because his rule shifted political dynamics in the southern Maya Lowlands,” the archaeologists said in a study recently published in the journal Antiquity.

The overthrow of the K’anwitznal dynasty is evidenced at Caracol, a site not far from Ucanal, where an engraving on an altar shows a K’anwitznal ruler in bondage. Other engravings made only two decades later depict Papmalil as the ruling figure, and the way he is pictured giving gifts to other kings is a testament to his regime’s increased strength in foreign relations.

Ashes to ashes

The archaeological team sees Papmalil’s accession as a pivotal point after which the city of Ucanal would go on to thrive. As other rulers had done before him, he apparently wanted to dismantle the old regime and make the fall of the K’anwitznal rulers known to everyone. Though the location of the K’anwitznal tombs is unknown, the team used a map of the site they had already made to determine that the temple where the burnt remains were found stood in what was once a public plaza.

Halperin thinks that the bones of these royals, and the lavish ornaments the royals were buried with, were believed to have had some sort of life force or spirit that needed to be conquered before the new regime would be secure. Shrinkage, warping, and discoloration made it evident that the human bones, which belonged to four individuals (three of whom were determined to be male), had been burned at temperatures of at least 800° C (1,472° F). Fractures and fissures on the jade and greenstone ornaments were also signs of burning at high temperatures.

“Because the fire-burning event itself had the potential to be highly ceremonial, public, and charged with emotion, it could dramatically mark the dismantling of an ancient regime,” the team said in the same study.

To the archaeologists, there is almost no doubt that the burning of the bones and artifacts found at the Ucanal site was an act of desecration, even though the location where they were thrown into the fire is still a mystery. What convinces them is that the remains were treated no differently than construction debris, deposited at the base of a temple during construction.

Other findings from cremations have shown a level of reverence for the bones of deposed rulers and dynasties. At another site that Halperin also investigated, the cremated bones of a queen were arranged carefully along with her jewelry. That was apparently not enough for Papmalil. Even today, some leaders just feel the need to be heard more loudly than others.

Antiquity, 2024.  DOI: 10.15184/aqy.2024.38



Netherlands first in Europe to approve lab-grown meat tastings

Yesterday, the Dutch government released an official letter announcing it will allow the tasting of meat and seafood products cultivated from animal cells under specified conditions.

Following in the footsteps of the US and Singapore, the Netherlands is now the first country in Europe to permit tastings of lab-grown meat, a move that is particularly welcomed by leading Dutch startups in the field.

Collaborative competition in the lab-grown meat space

Cellular agriculture might not make a huge dent in the food industry for many years yet. However, given time, the breakthrough technology of growing meat in labs could form part of a desperately needed solution for transforming our food systems.

There is no shortage of cultivated meat startups around the world, including in Europe. One of the keys to their success, apart from food safety and energy efficiency, is taste. For omnivores to pick lab-grown meat over meat from a slaughtered animal, it needs to deliver on taste and texture.


However, up until now, scientists in Europe have faced a tremendous hurdle — they haven’t actually been able to let people try their products. As such, the move from the Dutch government to allow tastings under certain conditions is crucial to moving the budding industry forward.

Lawmakers established the “code of practice” in collaboration with cultivated meat startups Meatable and Mosa Meat, and sector representative HollandBIO. 

Maarten Bosch, CEO of Mosa Meat, which calls itself a food technology company making the “world’s kindest beef burgers,” called the landmark announcement a “great achievement.”

“Mosa Meat will use these controlled tastings to gather invaluable feedback on our products and to educate key stakeholders about the role cellular agriculture can play in helping Europe meet our food sovereignty and sustainability goals,” Bosch said. 

“This is great news for the Netherlands,” said Krijn de Nood, co-founder and CEO of Meatable, with whom TNW sat down for an interview earlier this year. He added that it meant the country would maintain its pioneering position in the field. “Meatable is looking forward to inviting the first people to try our sausages, dumplings, and pulled pork!”

Following in the footsteps of the US and Singapore

As previously mentioned, the landmark decision makes the Netherlands the first country in Europe to make pre-approved tastings of cultivated meat possible. The government has previously set aside €60mn to build a “cellular agriculture ecosystem” and make the country a hub for the emerging technology. It has also established the organisation Cellular Agriculture Netherlands, which will now be tasked with overseeing the code of practice for tasting approvals. 

A little over a week ago, the US approved the sale of chicken made from animal cells from startups Upside Foods and Good Meat, both based in California. Singapore, which was also the location for Meatable’s first public tasting of its cultivated pork products earlier this year, has been way ahead on the regulatory side. 

The city-state formed a Novel Food Safety Expert Working Group in March 2020, and approved the first product (cultivated chicken from Eat Just) for sale in November the same year. Meatable has chosen to create a base in Singapore, and over the next five years, the company plans to invest over €60mn and employ more than 50 people there.

Meanwhile, at the beginning of May this year, Mosa Meat opened a new 2,760-square-metre scale-up facility in Maastricht in the Netherlands. When it comes to tackling one of the key drivers of climate change and halting the killing of more than 70 billion land animals per year, a little healthy competition never hurt.
