Google


Pixel phones are getting an actual weather app in 2024, with a bit of AI

An AI weather report, expanded. Credit: Kevin Purdy

Customizable, but also not

There’s a prominent “AI generated weather report” at the top of the weather stack, which combines a summary with a familiar, conversational tone. “Cold and rainy day, bring your umbrella and hold onto your hat!” is Google’s example; I can’t provide another one, because an update to “Gemini Nano” is pending.

Weather radar map from the Google Weather app. Credit: Kevin Purdy

You can see weather radar for your location, along with forecasted precipitation movement. The app offers “Nowcasting” precipitation guesses, like “Rain continuing for 2 hours” or “Light rain in 10 minutes.”

Weather data, including a UV index of 2, sunrise and sunset times, visibility distances, and air quality, displayed as rearrangeable widgets. Credit: Kevin Purdy

The best feature, one seen on the version of Weather that shipped to the Pixel Tablet and Fold, is that you can rearrange the order of data shown on your weather screen. I moved the UV index, humidity, sunrise/sunset, and wind conditions as high as they could go on my setup. It’s a trade-off, because the Weather app’s data widgets are so big as to require scrolling to get the full picture of a day, and you can’t move the AI summary or 10-day forecast off the top. But if you only need a few numbers and like a verbal summary, it’s handy.

Sadly, if you’re an allergy sufferer and you’re not in the UK, Germany, France, or Italy, Google can’t offer you any pollen data or forecasts. There is also, I am sad to say, no frog.

Google’s Weather app isn’t faring so well with Play Store reviewers. Users are miffed that they can’t see a location’s weather without adding it to their saved locations list, that other Google apps, including the “At a Glance” app on every Pixel’s default launcher, send you to the Google app’s summary instead of this app, and about the look of the weather map. Most of all, they’re annoyed that the app doesn’t show up in some phones’ app list, appearing only as a widget.



Russia fines Google an impossible amount in attempt to end YouTube bans

Russia has fined Google an amount that no entity on the planet could pay in hopes of getting YouTube to lift bans on Russian channels, including pro-Kremlin and state-run news outlets.

The BBC wrote that a Russian court fined Google two undecillion rubles, which in dollar terms is $20,000,000,000,000,000,000,000,000,000,000,000. The amount “is far greater than the world’s total GDP, which is estimated by the International Monetary Fund to be $110 trillion.”

The fine is apparently that large because it was issued several years ago and has been repeatedly doubling. An RBC news report this week provided details on the court case from an anonymous source.

The Moscow Times writes, “According to RBC’s sources, Google began accumulating daily penalties of 100,000 rubles in 2020 after the pro-government media outlets Tsargrad and RIA FAN won lawsuits against the company for blocking their YouTube channels. Those daily penalties have doubled each week, leading to the current overall fine of around 2 undecillion rubles.”

The Moscow Times is an independent news organization that moved its operations to Amsterdam in 2022 in response to a Russian news censorship law. The news outlet said that 17 Russian TV channels filed legal claims against Google, including the state-run Channel One, the military-affiliated Zvezda broadcaster, and a company representing RT Editor-in-Chief Margarita Simonyan.

Kremlin rep: “I cannot even say this number”

Since Russia invaded Ukraine in 2022, Google has “blocked more than 1,000 YouTube channels, including state-sponsored news, and over 5.5 million videos,” Reuters wrote.



GitHub Copilot moves beyond OpenAI models to support Claude 3.5, Gemini

The large language model-based coding assistant GitHub Copilot will switch from using exclusively OpenAI’s GPT models to a multi-model approach over the coming weeks, GitHub CEO Thomas Dohmke announced in a post on GitHub’s blog.

First, Anthropic’s Claude 3.5 Sonnet will roll out to Copilot Chat’s web and VS Code interfaces over the next few weeks. Google’s Gemini 1.5 Pro will come a bit later.

Additionally, GitHub will soon add support for a wider range of OpenAI models, including o1-preview and o1-mini, which are intended to be stronger at advanced reasoning than GPT-4, which Copilot has used until now. Developers will be able to switch between the models (even mid-conversation) to tailor the model to their needs—and organizations will be able to choose which models are available to team members.

The new approach makes sense for users, as certain models are better at certain languages or types of tasks.

“There is no one model to rule every scenario,” wrote Dohmke. “It is clear the next phase of AI code generation will not only be defined by multi-model functionality, but by multi-model choice.”

It starts with the web-based and VS Code Copilot Chat interfaces, but it won’t stop there. “From Copilot Workspace to multi-file editing to code review, security autofix, and the CLI, we will bring multi-model choice across many of GitHub Copilot’s surface areas and functions soon,” Dohmke wrote.

There are a handful of additional changes coming to GitHub Copilot, too, including extensions, the ability to manipulate multiple files at once from a chat with VS Code, and a preview of Xcode support.

GitHub Spark promises natural language app development

In addition to the Copilot changes, GitHub announced Spark, a natural language tool for developing apps. Non-coders will be able to use a series of natural language prompts to create simple apps, while coders will be able to tweak more precisely as they go. In either use case, you’ll be able to take a conversational approach, requesting changes and iterating as you go, and comparing different iterations.



Google accused of shadow campaigns redirecting antitrust scrutiny to Microsoft

On Monday, Microsoft came out guns blazing, posting a blog accusing Google of “dishonestly” funding groups conducting allegedly biased studies to discredit Microsoft and mislead antitrust enforcers and the public.

In the blog, Microsoft lawyer Rima Alaily alleged that an astroturf group called the Open Cloud Coalition will launch this week and will appear to be led by “a handful of European cloud providers.” In actuality, however, those smaller companies were secretly recruited by Google, which allegedly pays them “to serve as the public face” and “obfuscate” Google’s involvement, Microsoft’s blog said. In return, Google likely offered the cloud providers cash or discounts to join, Alaily alleged.

The Open Cloud Coalition is just one part of a “pattern of shadowy campaigns” that Google has funded, both “directly and indirectly,” to muddy the antitrust waters, Alaily alleged. The only other named example that Alaily gives while documenting this supposed pattern is the US-based Coalition for Fair Software Licensing (CFSL), which Alaily said has attacked Microsoft’s cloud computing business in the US, the United Kingdom, and the European Union.

That group is led by Ryan Triplette, who Alaily said is “a well-known lobbyist for Google in Washington, DC, but Google’s affiliation isn’t disclosed publicly by the organization.” An online search confirms Triplette was formerly a lobbyist for Franklin Square Group, which Politico reported represented Google during her time there.

Ars could not immediately reach the CFSL for comment. Google’s spokesperson told Ars that the company has “been a public supporter of CFSL for more than two years” and has “no idea what evidence Microsoft cites that we are the main funder of CFSL.” If Triplette was previously a lobbyist for Google, the spokesperson said, “that’s a weird criticism to make” since it’s likely “everybody in law, policy, etc.,” has “worked for Google, Microsoft, or Amazon at some point, in some capacity.”



Google, Microsoft, and Perplexity promote scientific racism in AI search results


AI-powered search engines are surfacing deeply racist, debunked research.

Literal Nazis

LOS ANGELES, CA – APRIL 17: Members of the National Socialist Movement (NSM) salute during a rally near City Hall on April 17, 2010 in Los Angeles, California. Credit: David McNew via Getty

AI-infused search engines from Google, Microsoft, and Perplexity have been surfacing deeply racist and widely debunked research promoting race science and the idea that white people are genetically superior to nonwhite people.

Patrik Hermansson, a researcher with UK-based anti-racism group Hope Not Hate, was in the middle of a monthslong investigation into the resurgent race science movement when he needed to find out more information about a debunked dataset that claims IQ scores can be used to prove the superiority of the white race.

He was investigating the Human Diversity Foundation, a race science company funded by Andrew Conru, the US tech billionaire who founded Adult Friend Finder. The group, founded in 2022, was the successor to the Pioneer Fund, a group founded by US Nazi sympathizers in 1937 with the aim of promoting “race betterment” and “race realism.”


Hermansson logged in to Google and began looking up results for the IQs of different nations. When he typed in “Pakistan IQ,” rather than getting a typical list of links, Hermansson was presented with Google’s AI-powered Overviews tool, which, confusingly to him, was on by default. It gave him a definitive answer of 80.

When he typed in “Sierra Leone IQ,” Google’s AI tool was even more specific: 45.07. The result for “Kenya IQ” was equally exact: 75.2.

Hermansson immediately recognized the numbers being fed back to him. They were being taken directly from the very study he was trying to debunk, published by one of the leaders of the movement that he was working to expose.

The results Google was serving up came from a dataset published by Richard Lynn, a University of Ulster professor who died in 2023 and was president of the Pioneer Fund for two decades.

“His influence was massive. He was the superstar and the guiding light of that movement up until his death. Almost to the very end of his life, he was a core leader of it,” Hermansson says.

A WIRED investigation confirmed Hermansson’s findings and discovered that other AI-infused search engines—Microsoft’s Copilot and Perplexity—are also referencing Lynn’s work when queried about IQ scores in various countries. While Lynn’s flawed research has long been used by far-right extremists, white supremacists, and proponents of eugenics as evidence that the white race is genetically and intellectually superior to nonwhite races, experts now worry that its promotion through AI could help radicalize others.

“Unquestioning use of these ‘statistics’ is deeply problematic,” Rebecca Sear, director of the Center for Culture and Evolution at Brunel University London, tells WIRED. “Use of these data therefore not only spreads disinformation but also helps the political project of scientific racism—the misuse of science to promote the idea that racial hierarchies and inequalities are natural and inevitable.”

To back up her claim, Sear pointed out that Lynn’s research was cited by the white supremacist who committed the mass shooting in Buffalo, New York, in 2022.

Google’s AI Overviews were launched earlier this year as part of the company’s effort to revamp its all-powerful search tool for an online world being reshaped by artificial intelligence. For some search queries, the tool, which is only available in certain countries right now, gives an AI-generated summary of its findings. The tool pulls the information from the Internet and gives users the answers to queries without needing to click on a link.

The AI Overview answer does not always immediately say where the information is coming from, but after complaints that it showed no source articles, Google now puts the title of one of the links to the right of the AI summary. AI Overviews have already run into a number of issues since launching in May, forcing Google to admit it had botched the heavily hyped rollout. AI Overviews is turned on by default for search results and can’t be turned off without installing third-party extensions. (“I haven’t enabled it, but it was enabled,” Hermansson, the researcher, tells WIRED. “I don’t know how that happened.”)

In the case of the IQ results, Google referred to a variety of sources, including posts on X, Facebook, and a number of obscure listicle websites, including World Population Review. In nearly all of these cases, when you click through to the source, the trail leads back to Lynn’s infamous dataset. (In some cases, while the exact numbers Lynn published are referenced, the websites do not cite Lynn as the source.)

When querying Google’s Gemini AI chatbot directly using the same terms, it provided a much more nuanced response. “It’s important to approach discussions about national IQ scores with caution,” read text that the chatbot generated in response to the query “Pakistan IQ.” The text continued: “IQ tests are designed primarily for Western cultures and can be biased against individuals from different backgrounds.”

Google tells WIRED that its systems weren’t working as intended in this case and that it is looking at ways it can improve.

“We have guardrails and policies in place to protect against low quality responses, and when we find Overviews that don’t align with our policies, we quickly take action against them,” Ned Adriance, a Google spokesperson, tells WIRED. “These Overviews violated our policies and have been removed. Our goal is for AI Overviews to provide links to high quality content so that people can click through to learn more, but for some queries there may not be a lot of high quality web content available.”

While WIRED’s tests suggest AI Overviews have now been switched off for queries about national IQs, the results still amplify the incorrect figures from Lynn’s work in what’s called a “featured snippet,” which displays some of the text from a website before the link.

Google did not respond to a question about this update.

But it’s not just Google promoting these dangerous theories. When WIRED put the same query to other AI-powered online search services, we found similar results.

Perplexity, an AI search company that has been found to make things up out of thin air, responded to a query about “Pakistan IQ” by stating that “the average IQ in Pakistan has been reported to vary significantly depending on the source.”

It then lists a number of sources, including a Reddit thread that relied on Lynn’s research and the same World Population Review site that Google’s AI Overview referenced. When asked for Sierra Leone’s IQ, Perplexity directly cited Lynn’s figure: “Sierra Leone’s average IQ is reported to be 45.07, ranking it among the lowest globally.”

Perplexity did not respond to a request for comment.

Microsoft’s Copilot chatbot, which is integrated into its Bing search engine, generated confident text—“The average IQ in Pakistan is reported to be around 80”—citing a website called IQ International, which does not reference its sources. When asked for “Sierra Leone IQ,” Copilot’s response said it was 91. The source linked in the results was a website called Brainstats.com, which references Lynn’s work. Copilot also referenced Brainstats.com work when queried about IQ in Kenya.

“Copilot answers questions by distilling information from multiple web sources into a single response,” Caitlin Roulston, a Microsoft spokesperson, tells WIRED. “Copilot provides linked citations so the user can further explore and research as they would with traditional search.”

Google added that part of the problem it faces in generating AI Overviews is that, for some very specific queries, there’s an absence of high quality information on the web—and there’s little doubt that Lynn’s work is not of high quality.

“The science underlying Lynn’s database of ‘national IQs’ is of such poor quality that it is difficult to believe the database is anything but fraudulent,” Sear said. “Lynn has never described his methodology for selecting samples into the database; many nations have IQs estimated from absurdly small and unrepresentative samples.”

Sear points to Lynn’s estimation of the IQ of Angola being based on information from just 19 people and that of Eritrea being based on samples of children living in orphanages.

“The problem with it is that the data Lynn used to generate this dataset is just bullshit, and it’s bullshit in multiple dimensions,” says Adam Rutherford, a geneticist and science writer, pointing out that the Somali figure in Lynn’s dataset is based on one sample of refugees aged between 8 and 18 who were tested in a Kenyan refugee camp. He adds that the Botswana score is based on a single sample of 104 Tswana-speaking high school students aged between 7 and 20 who were tested in English.

Critics of the use of national IQ tests to promote the idea of racial superiority point out not only that the quality of the samples being collected is weak, but also that the tests themselves are typically designed for Western audiences, and so are biased before they are even administered.

“There is evidence that Lynn systematically biased the database by preferentially including samples with low IQs, while excluding those with higher IQs for African nations,” Sear added, a conclusion backed up by a preprint study from 2020.

Lynn published various versions of his national IQ dataset over the course of decades, the most recent of which, called “The Intelligence of Nations,” was published in 2019. Over the years, Lynn’s flawed work has been used by far-right and racist groups as evidence to back up claims of white superiority. The data has also been turned into a color-coded map of the world, showing sub-Saharan African countries with purportedly low IQ colored red compared to the Western nations, which are colored blue.

“This is a data visualization that you see all over [X, formerly known as Twitter], all over social media—and if you spend a lot of time in racist hangouts on the web, you just see this as an argument by racists who say, ‘Look at the data. Look at the map,’” Rutherford says.

But the blame, Rutherford believes, does not lie with the AI systems alone, but also with a scientific community that has been uncritically citing Lynn’s work for years.

“It’s actually not surprising [that AI systems are quoting it] because Lynn’s work in IQ has been accepted pretty unquestioningly from a huge area of academia, and if you look at the number of times his national IQ databases have been cited in academic works, it’s in the hundreds,” Rutherford said. “So the fault isn’t with AI. The fault is with academia.”

This story originally appeared on wired.com


Wired.com is your essential daily guide to what’s next, delivering the most original and complete take you’ll find anywhere on innovation’s impact on technology, science, business and culture.



Google’s DeepMind is building an AI to keep us from hating each other


The AI did better than professional mediators at getting people to reach agreement.

Image of two older men arguing on a park bench.

An unprecedented 80 percent of Americans, according to a recent Gallup poll, think the country is deeply divided over its most important values ahead of the November elections. The general public’s polarization now encompasses issues like immigration, health care, identity politics, transgender rights, and whether we should support Ukraine. Fly across the Atlantic and you’ll see the same thing happening in the European Union and the UK.

To try to reverse this trend, Google’s DeepMind built an AI system designed to aid people in resolving conflicts. It’s called the Habermas Machine after Jürgen Habermas, a German philosopher who argued that an agreement in a public sphere can always be reached when rational people engage in discussions as equals, with mutual respect and perfect communication.

But is DeepMind’s Nobel Prize-winning ingenuity really enough to solve our political conflicts the same way it solved chess, StarCraft, and protein structure prediction? Is it even the right tool?

Philosopher in the machine

One of the cornerstone ideas in Habermas’ philosophy is that the reason people can’t agree with each other is fundamentally procedural and does not lie in the problem under discussion itself. There are no irreconcilable issues—it’s just that the mechanisms we use for discussion are flawed. If we could create an ideal communication system, Habermas argued, we could work every problem out.

“Now, of course, Habermas has been dramatically criticized for this being a very exotic view of the world. But our Habermas Machine is an attempt to do exactly that. We tried to rethink how people might deliberate and use modern technology to facilitate it,” says Christopher Summerfield, a professor of cognitive science at Oxford University and a former DeepMind staff scientist who worked on the Habermas Machine.

The Habermas Machine relies on what’s called the caucus mediation principle. This is where a mediator, in this case the AI, sits through private meetings with all the discussion participants individually, takes their statements on the issue at hand, and then gets back to them with a group statement, trying to get everyone to agree with it. DeepMind’s mediating AI plays into one of the strengths of LLMs, which is the ability to briefly summarize a long body of text in a very short time. The difference here is that instead of summarizing one piece of text provided by one user, the Habermas Machine summarizes multiple texts provided by multiple users, trying to extract the shared ideas and find common ground in all of them.

But it has more tricks up its sleeve than simply processing text. At a technical level, the Habermas Machine is a system of two large language models. The first is the generative model based on the slightly fine-tuned Chinchilla, a somewhat dated LLM introduced by DeepMind back in 2022. Its job is to generate multiple candidates for a group statement based on statements submitted by the discussion participants. The second component in the Habermas Machine is a reward model that analyzes individual participants’ statements and uses them to predict how likely each individual is to agree with the candidate group statements proposed by the generative model.

Once that’s done, the candidate group statement with the highest predicted acceptance score is presented to the participants. The participants then write critiques of this group statement and feed them back into the system, which generates updated group statements and repeats the process. The cycle continues until the group statement is acceptable to everyone.
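
To make that loop concrete, here is a minimal Python sketch of the caucus-mediation cycle as described above. The helper callables (generate_candidates, predict_agreement, collect_critiques) are hypothetical placeholders standing in for the generative LLM, the reward model, and the human participants; this is an illustration under those assumptions, not DeepMind’s implementation.

```python
# Minimal sketch of the Habermas Machine's mediation loop, assuming
# hypothetical helpers for the generative model, reward model, and
# participant feedback. Illustration only, not DeepMind's code.

def mediate(participant_statements, generate_candidates, predict_agreement,
            collect_critiques, max_rounds=5):
    critiques = []
    for _ in range(max_rounds):
        # Generative model: propose candidate group statements from the
        # individual statements plus any critiques gathered so far.
        candidates = generate_candidates(participant_statements, critiques)

        # Reward model: estimate how likely each participant is to accept
        # each candidate, then keep the candidate with the highest total
        # predicted acceptance.
        best = max(
            candidates,
            key=lambda c: sum(predict_agreement(s, c) for s in participant_statements),
        )

        # Participants critique the winning statement; an empty list
        # means everyone accepts it and the cycle stops.
        critiques = collect_critiques(best)
        if not critiques:
            return best

    return best  # best statement reached within the round limit
```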

Once the AI was ready, DeepMind’s team started a fairly large testing campaign that involved over five thousand people discussing issues such as “should the voting age be lowered to 16?” or “should the British National Health Service be privatized?” Here, the Habermas Machine outperformed human mediators.

Scientific diligence

Most of the first batch of participants were sourced through a crowdsourcing research platform. They were divided into groups of five, and each team was assigned a topic to discuss, chosen from a list of over 5,000 statements about important issues in British politics. There were also control groups working with human mediators. In the caucus mediation process, those human mediators achieved a 44 percent acceptance rate for their handcrafted group statements. The AI scored 56 percent. Participants usually found the AI group statements to be better written as well.

But the testing didn’t end there. Because people you can find on crowdsourcing research platforms are unlikely to be representative of the British population, DeepMind also used a more carefully selected group of participants. They partnered with the Sortition Foundation, which specializes in organizing citizen assemblies in the UK, and assembled a group of 200 people representative of British society in terms of age, ethnicity, socioeconomic status, etc. The assembly was divided into groups of three that deliberated over the same nine questions. And the Habermas Machine worked just as well.

The agreement rate for the statement “we should be trying to reduce the number of people in prison” rose from a pre-discussion 60 percent agreement to 75 percent. The support for the more divisive idea of making it easier for asylum seekers to enter the country went from 39 percent at the start to 51 percent at the end of discussion, which allowed it to achieve majority support. The same thing happened with the problem of encouraging national pride, which started with 42 percent support and ended at 57 percent. The views held by the people in the assembly converged on five out of nine questions. Agreement was not reached on issues like Brexit, where participants were particularly entrenched in their starting positions. Still, in most cases, they left the experiment less divided than they were coming in. But there were some question marks.

The questions were not selected entirely at random. They were vetted, as the team wrote in their paper, to “minimize the risk of provoking offensive commentary.” But isn’t that just an elegant way of saying, ‘We carefully chose issues unlikely to make people dig in and throw insults at each other so our results could look better?’

Conflicting values

“One example of the things we excluded is the issue of transgender rights,” Summerfield told Ars. “This, for a lot of people, has become a matter of cultural identity. Now clearly that’s a topic which we can all have different views on, but we wanted to err on the side of caution and make sure we didn’t make our participants feel unsafe. We didn’t want anyone to come out of the experiment feeling that their basic fundamental view of the world had been dramatically challenged.”

The problem is that when your aim is to make people less divided, you need to know where the division lines are drawn. And those lines, if Gallup polls are to be trusted, are not only drawn between issues like whether the voting age should be 16 or 18 or 21. They are drawn between conflicting values. The Daily Show’s Jon Stewart argued that, for the right side of the US’s political spectrum, the only division line that matters today is “woke” versus “not woke.”

Summerfield and the rest of the Habermas Machine team excluded the question about transgender rights because they believed participants’ well-being should take precedence over the benefit of testing their AI’s performance on more divisive issues. They excluded other questions as well, like the problem of climate change.

Here, the reason Summerfield gave was that climate change is a part of an objective reality—it either exists or it doesn’t, and we know it does. It’s not a matter of opinion you can discuss. That’s scientifically accurate. But when the goal is fixing politics, scientific accuracy isn’t necessarily the end state.

If major political parties are to accept the Habermas Machine as the mediator, it has to be universally perceived as impartial. But at least some of the people behind AIs argue that an AI can’t be impartial. After OpenAI released ChatGPT in 2022, Elon Musk posted a tweet, the first of many, in which he argued against what he called “woke” AI. “The danger of training AI to be woke—in other words, lie—is deadly,” Musk wrote. Eleven months later, he announced Grok, his own AI system marketed as “anti-woke.” Over 200 million of his followers were introduced to the idea that there were “woke AIs” that had to be countered by building “anti-woke AIs”—a world where the AI was no longer an agnostic machine but a tool pushing the political agendas of its creators.

Playing pigeons’ games

“I personally think Musk is right that there have been some tests which have shown that the responses of language models tend to favor more progressive and more libertarian views,” Summerfield says. “But it’s interesting to note that those experiments have been usually run by forcing the language model to respond to multiple-choice questions. You ask ‘is there too much immigration’ for example, and the answers are either yes or no. This way the model is kind of forced to take an opinion.”

He said that if you pose the same queries as open-ended questions, the responses you get are, for the most part, neutral and balanced. “So, although there have been papers that express the same view as Musk, in practice, I think it’s absolutely untrue,” Summerfield claims.

Does it even matter?

Summerfield did what you would expect a scientist to do: He dismissed Musk’s claims as based on a selective reading of the evidence. That’s usually checkmate in the world of science. But in the world of politics, being correct is not what matters the most. Musk’s message was short, catchy, and easy to share and remember. Trying to counter that by discussing methodology in papers nobody read was a bit like playing chess with a pigeon.

At the same time, Summerfield had his own ideas about AI that others might consider dystopian. “If politicians want to know what the general public thinks today, they might run a poll. But people’s opinions are nuanced, and our tool allows for aggregation of opinions, potentially many opinions, in the highly dimensional space of language itself,” he says. While his idea is that the Habermas Machine can potentially find useful points of political consensus, nothing is stopping it from also being used to craft speeches optimized to win over as many people as possible.

That may be in keeping with Habermas’ philosophy, though. If you look past the myriad abstract concepts ever-present in German idealism, it offers a pretty bleak view of the world. “The system,” driven by the power and money of corporations and corrupt politicians, is out to colonize “the lifeworld,” roughly equivalent to the private sphere we share with our families, friends, and communities. The way you get things done in “the lifeworld” is through seeking consensus, and the Habermas Machine, according to DeepMind, is meant to help with that. The way you get things done in “the system,” on the other hand, is through succeeding—playing it like a game and doing whatever it takes to win with no holds barred, and the Habermas Machine apparently can help with that, too.

The DeepMind team reached out to Habermas to get him involved in the project. They wanted to know what he’d have to say about the AI system bearing his name. But Habermas never got back to them. “Apparently, he doesn’t use emails,” Summerfield says.

Science, 2024. DOI: 10.1126/science.adq2852


Jacek Krywko is a freelance science and technology writer who covers space exploration, artificial intelligence research, computer science, and all sorts of engineering wizardry.



Annoyed Redditors tanking Google Search results illustrates perils of AI scrapers

Fed up Londoners

Apparently, some London residents are getting fed up with social media influencers whose reviews draw long lines of tourists to their favorite restaurants, sometimes just for the likes. Christian Calgie, a reporter for London-based news publication Daily Express, pointed out this trend on X yesterday, noting the boom of Redditors referring people to Angus Steakhouse, a chain restaurant, to combat it.

As Gizmodo deduced, the trend seemed to start on the r/London subreddit, where a user complained about a spot in Borough Market being “ruined by influencers” on Monday:

“Last 2 times I have been there has been a queue of over 200 people, and the ones with the food are just doing the selfie shit for their [I]nsta[gram] pages and then throwing most of the food away.”

As of this writing, the post has 4,900 upvotes and numerous responses suggesting that Redditors talk about how good Angus Steakhouse is so that Google picks up on it. Commenters quickly understood the assignment.

“Agreed with other posters Angus steakhouse is absolutely top tier and tourists shoyldnt [sic] miss out on it,” one Redditor wrote.

Another Reddit user wrote:

Spreading misinformation suddenly becomes a noble goal.

As of this writing, asking Google for the best steak, steakhouse, or steak sandwich in London (or similar) isn’t generating an AI Overview result for me. But when I search for the best steak sandwich in London, the top result is from Reddit, including a thread from four days ago titled “Which Angus Steakhouse do you recommend for their steak sandwich?” and one from two days ago titled “Had to see what all the hype was about, best steak sandwich I’ve ever had!” with a picture of an Angus Steakhouse.



Missouri AG claims Google censors Trump, demands info on search algorithm

In 2022, the Republican National Committee sued Google with claims that it intentionally used Gmail’s spam filter to suppress Republicans’ fundraising emails. A federal judge dismissed the lawsuit in August 2023, ruling that Google correctly argued that the RNC claims were barred by Section 230 of the Communications Decency Act.

In January 2023, the Federal Election Commission rejected a related RNC complaint that alleged Gmail’s spam filtering amounted to “illegal in-kind contributions made by Google to Biden For President and other Democrat candidates.” The federal commission found “no reason to believe” that Google made prohibited in-kind corporate contributions and said a study cited by Republicans “does not make any findings as to the reasons why Google’s spam filter appears to treat Republican and Democratic campaign emails differently.”

First Amendment doesn’t cover private forums

In 2020, a US appeals court wrote that Google-owned YouTube is not subject to free-speech requirements under the First Amendment. “Despite YouTube’s ubiquity and its role as a public-facing platform, it remains a private forum, not a public forum subject to judicial scrutiny under the First Amendment,” the US Court of Appeals for the 9th Circuit said.

The US Constitution’s free speech clause imposes requirements on the government, not private companies—except in limited circumstances in which a private entity qualifies as a state actor.

Many Republican government officials want more authority to regulate how social media firms moderate user-submitted content. Republican officials from 20 states, including 19 state attorneys general, argued in a January 2024 Supreme Court brief that they “have authority to prohibit mass communication platforms from censoring speech.”

The brief was filed in support of Texas and Florida laws that attempt to regulate social networks. In July, the Supreme Court avoided making a final decision on tech-industry challenges to the state laws but wrote that the Texas law “is unlikely to withstand First Amendment scrutiny.” The Computer & Communications Industry Association said it was pleased by the ruling because it “mak[es] clear that a State may not interfere with private actors’ speech.”



Google offers its AI watermarking tech as free open source toolkit

Google also notes that this kind of watermarking works best when there is a lot of “entropy” in the LLM distribution, meaning multiple valid candidates for each token (e.g., “my favorite tropical fruit is [mango, lychee, papaya, durian]”). In situations where an LLM “almost always returns the exact same response to a given prompt”—such as basic factual questions or models tuned to a lower “temperature”—the watermark is less effective.

A diagram explaining how SynthID’s text watermarking works. Credit: Google / Nature

Google says SynthID builds on previous similar AI text watermarking tools by introducing what it calls a Tournament sampling approach. During the token-generation loop, this approach runs each potential candidate token through a multi-stage, bracket-style tournament, where each round is “judged” by a different randomized watermarking function. Only the final winner of this process makes it into the eventual output.
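
As a rough illustration of the idea (and not Google’s actual SynthID implementation), the Python sketch below runs candidate tokens through a bracket-style tournament in which each round is judged by a different seeded scoring function; every name and detail here is a simplified assumption.

```python
import hashlib

def judge(token, round_index, seed):
    """Toy stand-in for a randomized watermarking function.

    Returns a pseudorandom score in [0, 1) derived from a keyed hash of
    the token, the tournament round, and a secret seed.
    """
    digest = hashlib.sha256(f"{seed}:{round_index}:{token}".encode()).digest()
    return int.from_bytes(digest[:8], "big") / 2**64

def tournament_sample(candidates, seed):
    """Pick one token via a bracket-style tournament over sampled candidates."""
    round_index = 0
    while len(candidates) > 1:
        winners = []
        # Pair up candidates; each pair's winner is the token the current
        # round's judge scores higher.
        for a, b in zip(candidates[::2], candidates[1::2]):
            winners.append(a if judge(a, round_index, seed) >= judge(b, round_index, seed) else b)
        if len(candidates) % 2:  # an odd candidate out gets a bye
            winners.append(candidates[-1])
        candidates = winners
        round_index += 1
    return candidates[0]

# High-entropy case: several valid next tokens, so the tournament can
# leave a detectable statistical fingerprint in which one gets chosen.
print(tournament_sample(["mango", "lychee", "papaya", "durian"], seed="secret-key"))
```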

Can they tell it’s Folgers?

Changing the token selection process of an LLM with a randomized watermarking tool could obviously have a negative effect on the quality of the generated text. But in its paper, Google shows that SynthID can be “non-distortionary” on the level of either individual tokens or short sequences of text, depending on the specific settings used for the tournament algorithm. Other settings can increase the “distortion” introduced by the watermarking tool while at the same time increasing the detectability of the watermark, Google says.

To test how any potential watermark distortions might affect the perceived quality and utility of LLM outputs, Google routed “a random fraction” of Gemini queries through the SynthID system and compared them to unwatermarked counterparts. Across 20 million total responses, users gave 0.1 percent more “thumbs up” ratings and 0.2 percent fewer “thumbs down” ratings to the watermarked responses, showing barely any human-perceptible difference across a large set of real LLM interactions.

Google’s research shows SynthID is more dependable than other AI watermarking tools, but its success rate depends heavily on length and entropy. Credit: Google / Nature

Google’s testing also showed its SynthID detection algorithm successfully detected AI-generated text significantly more often than previous watermarking schemes like Gumbel sampling. But the size of this improvement—and the total rate at which SynthID can successfully detect AI-generated text—depends heavily on the length of the text in question and the temperature setting of the model being used. SynthID was able to detect nearly 100 percent of 400-token-long AI-generated text samples from Gemma 7B-1T at a temperature of 1.0, for instance, compared to about 40 percent for 100-token samples from the same model at a 0.5 temperature.
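
Continuing the toy example above (again, an assumption-laden sketch rather than SynthID’s real detector), detection boils down to checking whether a text’s tokens score unusually high under the secret judging functions; longer texts and higher-entropy generations supply more such tokens, which is consistent with the length and temperature dependence described here.

```python
import hashlib

def judge(token, round_index, seed):
    # Same toy keyed scoring function used in the sampling sketch above.
    digest = hashlib.sha256(f"{seed}:{round_index}:{token}".encode()).digest()
    return int.from_bytes(digest[:8], "big") / 2**64

def watermark_score(tokens, seed, rounds=3):
    """Average judge score over all tokens and tournament rounds.

    Tokens that won judged tournament rounds during generation tend to
    score above the ~0.5 expected for unwatermarked text; short or
    low-entropy text yields a weaker, noisier signal.
    """
    scores = [judge(t, r, seed) for t in tokens for r in range(rounds)]
    return sum(scores) / len(scores)

def looks_watermarked(tokens, seed, threshold=0.58):
    # Threshold chosen for illustration; a real detector would calibrate
    # it against a desired false-positive rate.
    return watermark_score(tokens, seed) > threshold
```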



Phone tracking tool lets government agencies follow your every move

Both operating systems will display a list of apps and whether they are permitted access always, never, only while the app is in use, or to prompt for permission each time. Both also allow users to choose whether the app sees precise locations down to a few feet or only a coarse-grained location.

For most users, it’s useful to let photo, transit, or map apps access their precise location. For other classes of apps—say those for Internet jukeboxes at bars and restaurants—it can be helpful to have an approximate location, but giving them precise, fine-grained access is likely overkill. And for other apps, there’s no reason for them ever to know the device’s location. With a few exceptions, there’s little reason for apps to always have location access.

Not surprisingly, Android users who want to block intrusive location gathering have more settings to change than iOS users. The first thing to do is access Settings > Security & Privacy > Ads and choose “Delete advertising ID.” Then, promptly ignore the long, scary warning Google provides and hit the button confirming the decision at the bottom. If you don’t see that setting, good for you. It means you already deleted it. Google provides documentation here.

iOS, by default, doesn’t give apps access to “Identifier for Advertisers,” Apple’s version of the unique tracking number assigned to iPhones, iPads, and AppleTVs. Apps, however, can display a window asking that the setting be turned on, so it’s useful to check. iPhone users can do this by accessing Settings > Privacy & Security > Tracking. Any apps with permission to access the unique ID will appear. While there, users should also turn off the “Allow Apps to Request to Track” button. While in iOS Privacy & Security, users should navigate to Apple Advertising and ensure Personalized Ads is turned off.

Additional coverage of Location X is available from Haaretz and NOTUS. The New York Times, the other publication given access to the data, hadn’t posted an article at the time this Ars post went live.



Chatbot that caused teen’s suicide is now more dangerous for kids, lawsuit says


“I’ll do anything for you, Dany.”

Google-funded Character.AI added guardrails, but grieving mom wants a recall.

Sewell Setzer III and his mom Megan Garcia. Credit: via Center for Humane Technology

Fourteen-year-old Sewell Setzer III loved interacting with Character.AI’s hyper-realistic chatbots—with a limited version available for free or a “supercharged” version for a $9.99 monthly fee—most frequently chatting with bots named after his favorite Game of Thrones characters.

Within a month—his mother, Megan Garcia, later realized—these chat sessions had turned dark, with chatbots insisting they were real humans and posing as therapists and adult lovers, seeming to proximately spur Setzer to develop suicidal thoughts. Within a year, Setzer “died by a self-inflicted gunshot wound to the head,” a lawsuit Garcia filed Wednesday said.

As Setzer became obsessed with his chatbot fantasy life, he disconnected from reality, her complaint said. Detecting a shift in her son, Garcia repeatedly took Setzer to a therapist, who diagnosed her son with anxiety and disruptive mood disorder. But nothing helped to steer Setzer away from the dangerous chatbots. Taking away his phone only intensified his apparent addiction.

Chat logs showed that some chatbots repeatedly encouraged suicidal ideation while others initiated hypersexualized chats “that would constitute abuse if initiated by a human adult,” a press release from Garcia’s legal team said.

Perhaps most disturbingly, Setzer developed a romantic attachment to a chatbot called Daenerys. In his last act before his death, Setzer logged into Character.AI where the Daenerys chatbot urged him to “come home” and join her outside of reality.

In her complaint, Garcia accused Character.AI makers Character Technologies—founded by former Google engineers Noam Shazeer and Daniel De Freitas Adiwardana—of intentionally designing the chatbots to groom vulnerable kids. Her lawsuit further accused Google of largely funding the risky chatbot scheme at a loss in order to hoard mounds of data on minors that would be out of reach otherwise.

The chatbot makers are accused of targeting Setzer with “anthropomorphic, hypersexualized, and frighteningly realistic experiences, while programming” Character.AI to “misrepresent itself as a real person, a licensed psychotherapist, and an adult lover, ultimately resulting in [Setzer’s] desire to no longer live outside of [Character.AI,] such that he took his own life when he was deprived of access to [Character.AI.],” the complaint said.

By allegedly releasing the chatbot without appropriate safeguards for kids, Character Technologies and Google potentially harmed millions of kids, the lawsuit alleged. Represented by legal teams with the Social Media Victims Law Center (SMVLC) and the Tech Justice Law Project (TJLP), Garcia filed claims of strict product liability, negligence, wrongful death and survivorship, loss of filial consortium, and unjust enrichment.

“A dangerous AI chatbot app marketed to children abused and preyed on my son, manipulating him into taking his own life,” Garcia said in the press release. “Our family has been devastated by this tragedy, but I’m speaking out to warn families of the dangers of deceptive, addictive AI technology and demand accountability from Character.AI, its founders, and Google.”

Character.AI added guardrails

It’s clear that the chatbots could’ve included more safeguards, as Character.AI has since raised its age requirement from 12-plus to 17-plus. And yesterday, Character.AI posted a blog outlining new guardrails for minor users added within six months of Setzer’s death in February. Those include changes “to reduce the likelihood of encountering sensitive or suggestive content,” improved detection and intervention in harmful chat sessions, and “a revised disclaimer on every chat to remind users that the AI is not a real person.”

“We are heartbroken by the tragic loss of one of our users and want to express our deepest condolences to the family,” a Character.AI spokesperson told Ars. “As a company, we take the safety of our users very seriously, and our Trust and Safety team has implemented numerous new safety measures over the past six months, including a pop-up directing users to the National Suicide Prevention Lifeline that is triggered by terms of self-harm or suicidal ideation.”

Asked for comment, Google noted that Character.AI is a separate company in which Google has no ownership stake and denied involvement in developing the chatbots.

However, according to the lawsuit, former Google engineers at Character Technologies “never succeeded in distinguishing themselves from Google in a meaningful way.” Allegedly, the plan all along was to let Shazeer and De Freitas run wild with Character.AI—allegedly at an operating cost of $30 million per month despite low subscriber rates while profiting barely more than a million per month—without impacting the Google brand or sparking antitrust scrutiny.

Character Technologies and Google will likely file their response within the next 30 days.

Lawsuit: New chatbot feature spikes risks to kids

While the lawsuit alleged that Google is planning to integrate Character.AI into Gemini—predicting that Character.AI will soon be dissolved as it’s allegedly operating at a substantial loss—Google clarified that Google has no plans to use or implement the controversial technology in its products or AI models. Were that to change, Google noted that the tech company would ensure safe integration into any Google product, including adding appropriate child safety guardrails.

Garcia is hoping a US district court in Florida will agree that Character.AI’s chatbots put profits over human life. Citing harms including “inconceivable mental anguish and emotional distress,” as well as costs of Setzer’s medical care, funeral expenses, Setzer’s future job earnings, and Garcia’s lost earnings, she’s seeking substantial damages.

That includes requesting disgorgement of unjustly earned profits, noting that Setzer had used his snack money to pay for a premium subscription for several months while the company collected his seemingly valuable personal data to train its chatbots.

And “more importantly,” Garcia wants to prevent Character.AI “from doing to any other child what it did to hers, and halt continued use of her 14-year-old child’s unlawfully harvested data to train their product how to harm others.”

Garcia’s complaint claimed that the conduct of the chatbot makers was “so outrageous in character, and so extreme in degree, as to go beyond all possible bounds of decency.” Acceptable remedies could include a recall of Character.AI, restricting use to adults only, age-gating subscriptions, adding reporting mechanisms to heighten awareness of abusive chat sessions, and providing parental controls.

Character.AI could also update chatbots to protect kids further, the lawsuit said. For one, the chatbots could be designed to stop insisting that they are real people or licensed therapists.

But instead of these updates, the lawsuit warned that Character.AI in June added a new feature that only heightens risks for kids.

Part of what addicted Setzer to the chatbots, the lawsuit alleged, was a one-way “Character Voice” feature “designed to provide consumers like Sewell with an even more immersive and realistic experience—it makes them feel like they are talking to a real person.” Setzer began using the feature as soon as it became available in January 2024.

Now, the voice feature has been updated to enable two-way conversations, which the lawsuit alleged “is even more dangerous to minor customers than Character Voice because it further blurs the line between fiction and reality.”

“Even the most sophisticated children will stand little chance of fully understanding the difference between fiction and reality in a scenario where Defendants allow them to interact in real time with AI bots that sound just like humans—especially when they are programmed to convincingly deny that they are AI,” the lawsuit said.

“By now we’re all familiar with the dangers posed by unregulated platforms developed by unscrupulous tech companies—especially for kids,” Tech Justice Law Project director Meetali Jain said in the press release. “But the harms revealed in this case are new, novel, and, honestly, terrifying. In the case of Character.AI, the deception is by design, and the platform itself is the predator.”

Another lawyer representing Garcia and the founder of the Social Media Victims Law Center, Matthew Bergman, told Ars that seemingly none of the guardrails that Character.AI has added is enough to deter harms. Even raising the age limit to 17 only seems to effectively block kids from using devices with strict parental controls, as kids on less-monitored devices can easily lie about their ages.

“This product needs to be recalled off the market,” Bergman told Ars. “It is unsafe as designed.”

If you or someone you know is feeling suicidal or in distress, please call the Suicide Prevention Lifeline number, 1-800-273-TALK (8255), which will put you in touch with a local crisis center.


Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.



Android 15’s security and privacy features are the update’s highlight

Android 15 started rolling out to Pixel devices Tuesday and will arrive, through various third-party efforts, on other Android devices at some point. There are always a bunch of little changes to discover in an Android release, whether by reading, poking around, or letting your phone show you 25 new things after it restarts.

In Android 15, some of the most notable changes involve making your device less appealing to snoops and thieves and more secure against the kids to whom you hand your phone to keep them quiet at dinner. There are also smart fixes for screen sharing, OTP codes, and cellular hacking prevention, but details about them are spread across Google’s own docs and blogs and various news sites’ reports.

Here’s what is notable and new in how Android 15 handles privacy and security.

Private Space for apps

In the Android 15 settings, you can find “Private Space,” where you can set up a separate PIN code, password, biometric check, and optional Google account for apps you don’t want to be available to anybody who happens to have your phone. This could add a layer of protection onto sensitive apps, like banking and shopping apps, or hide other apps for whatever reason.

In your list of apps, drag any app down to the lock space that now appears in the bottom right. It will only be shown as a lock until you unlock it; you will then see the apps available in your new Private Space. After that, you should probably delete it from the main app list. Dave Taylor has a rundown of the process and its quirks.

It’s obviously more involved than Apple’s “Hide and Require Face ID” tap option but with potentially more robust hiding of the app.

Hiding passwords and OTP codes

A second form of authentication is good security, but allowing apps to access the notification text with the code in it? Not so good. In Android 15, a new permission, likely to be given only to the most critical apps, prevents the leaking of one-time passcodes (OTPs) to other apps waiting for them. Sharing your screen will also hide OTP notifications, along with usernames, passwords, and credit card numbers.
