Political deepfakes are the most popular way to misuse AI

This is not going well —

Study from Google’s DeepMind lays out nefarious ways AI is being used.

Artificial intelligence-generated “deepfakes” that impersonate politicians and celebrities are far more prevalent than efforts to use AI to assist cyber attacks, according to the first research by Google’s DeepMind division into the most common malicious uses of the cutting-edge technology.

The study said the creation of realistic but fake images, video, and audio of people was almost twice as common as the next highest misuse of generative AI tools: the falsifying of information using text-based tools, such as chatbots, to generate misinformation to post online.

The most common goal of actors misusing generative AI was to shape or influence public opinion, according to the analysis, which was conducted with Jigsaw, the search group’s research and development unit. That motive accounted for 27 percent of uses, feeding into fears over how deepfakes might influence elections globally this year.

Deepfakes of UK Prime Minister Rishi Sunak, as well as other global leaders, have appeared on TikTok, X, and Instagram in recent months. UK voters go to the polls next week in a general election.

Concern is widespread that, despite social media platforms’ efforts to label or remove such content, audiences may not recognize these as fake, and dissemination of the content could sway voters.

Ardi Janjeva, research associate at The Alan Turing Institute, called “especially pertinent” the paper’s finding that the contamination of publicly accessible information with AI-generated content could “distort our collective understanding of sociopolitical reality.”

Janjeva added: “Even if we are uncertain about the impact that deepfakes have on voting behavior, this distortion may be harder to spot in the immediate term and poses long-term risks to our democracies.”

The study is the first of its kind by DeepMind, Google’s AI unit led by Sir Demis Hassabis, and is an attempt to quantify the risks from the use of generative AI tools, which the world’s biggest technology companies have rushed out to the public in search of huge profits.

As generative products such as OpenAI’s ChatGPT and Google’s Gemini become more widely used, AI companies are beginning to monitor the flood of misinformation and other potentially harmful or unethical content created by their tools.

In May, OpenAI released research revealing operations linked to Russia, China, Iran, and Israel had been using its tools to create and spread disinformation.

“There had been a lot of understandable concern around quite sophisticated cyber attacks facilitated by these tools,” said Nahema Marchal, lead author of the study and researcher at Google DeepMind. “Whereas what we saw were fairly common misuses of GenAI [such as deepfakes that] might go under the radar a little bit more.”

Google DeepMind and Jigsaw’s researchers analyzed around 200 observed incidents of misuse between January 2023 and March 2024, taken from social media platforms X and Reddit, as well as online blogs and media reports of misuse.

The second most common motivation behind misuse was to make money, whether by offering services to create deepfakes, including generating naked depictions of real people, or by using generative AI to create swaths of content, such as fake news articles.

The research found that most incidents use easily accessible tools, “requiring minimal technical expertise,” meaning more bad actors can misuse generative AI.

Google DeepMind’s research will influence how the company improves its safety evaluations for its models, and it hopes the findings will also affect how competitors and other stakeholders view how “harms are manifesting.”

© 2024 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.

DeepMind adds a diffusion engine to latest protein-folding software

Added complexity —

Major under-the-hood changes let AlphaFold handle protein-DNA complexes and more.

Prediction of the structure of a coronavirus Spike protein from a virus that causes the common cold. Credit: Google DeepMind

Most of the activities that go on inside cells—the activities that keep us living, breathing, thinking animals—are handled by proteins. They allow cells to communicate with each other, run a cell’s basic metabolism, and help convert the information stored in DNA into even more proteins. And all of that depends on the ability of the protein’s string of amino acids to fold up into a complicated yet specific three-dimensional shape that enables it to function.

Up until this decade, understanding that 3D shape meant purifying the protein and subjecting it to a time- and labor-intensive process to determine its structure. But that changed with the work of DeepMind, one of Google’s AI divisions, which released AlphaFold in 2021, and a similar academic effort shortly afterward. The software wasn’t perfect; it struggled with larger proteins and didn’t offer high-confidence solutions for every protein. But many of its predictions turned out to be remarkably accurate.

Even so, these structures only told half of the story. To function, almost every protein has to interact with something else—other proteins, DNA, chemicals, membranes, and more. And, while the initial version of AlphaFold could handle some protein-protein interactions, the rest remained black boxes. Today, DeepMind is announcing the availability of version 3 of AlphaFold, which has seen parts of its underlying engine either heavily modified or replaced entirely. Thanks to these changes, the software now handles various additional protein interactions and modifications.

Changing parts

The original AlphaFold relied on two underlying software functions. One of those took evolutionary limits on a protein into account. By looking at the same protein in multiple species, you can get a sense for which parts are always the same, and therefore likely to be central to its function. That centrality implies that they’re always likely to be in the same location and orientation in the protein’s structure. To do this, the original AlphaFold found as many versions of a protein as it could and lined up their sequences to look for the portions that showed little variation.

Doing so, however, is computationally expensive since the more proteins you line up, the more constraints you have to resolve. In the new version, the AlphaFold team still identified multiple related proteins but switched to largely performing alignments using pairs of protein sequences from within the set of related ones. This probably isn’t as information-rich as a multi-alignment, but it’s far more computationally efficient, and the lost information doesn’t appear to be critical to figuring out protein structures.
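
To make the conservation idea concrete, here is a minimal, hypothetical sketch: for each column of a set of already-aligned sequences, it measures how often the most common amino acid appears. The sequences and the function name are invented for illustration and have no relation to AlphaFold's actual pipeline.

```python
from collections import Counter

def conservation_profile(aligned_seqs: list[str]) -> list[float]:
    """For each alignment column, return the fraction of sequences that
    share the most common amino acid (1.0 means perfectly conserved)."""
    num_seqs = len(aligned_seqs)
    profile = []
    for column in zip(*aligned_seqs):  # iterate over alignment columns
        most_common_count = Counter(column).most_common(1)[0][1]
        profile.append(most_common_count / num_seqs)
    return profile

# Toy example: positions 1, 3, and 5 are identical across all sequences,
# hinting that they may matter for the protein's structure or function.
print(conservation_profile(["MKTAY", "MRTAY", "MKTGY"]))
# -> approximately [1.0, 0.67, 1.0, 0.67, 1.0]
```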

Using these alignments, a separate software module figured out the spatial relationships among pairs of amino acids within the target protein. Those relationships were then translated into spatial coordinates for each atom by code that took into account some of the physical properties of amino acids, like which portions of an amino acid could rotate relative to others, etc.

In AlphaFold 3, the prediction of atomic positions is handled by a diffusion module, which is trained by being given both a known structure and versions of that structure where noise (in the form of shifting the positions of some atoms) has been added. This allows the diffusion module to take the inexact locations described by relative positions and convert them into exact predictions of the location of every atom in the protein. It doesn’t need to be told the physical properties of amino acids, because it can figure out what they normally do by looking at enough structures.

(DeepMind had to train on two different levels of noise to get the diffusion module to work: one in which the locations of atoms were shifted while the general structure was left intact and a second where the noise involved shifting the large-scale structure of the protein, thus affecting the location of lots of atoms.)
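
As a rough illustration of that training setup, the sketch below corrupts a known set of atom coordinates with Gaussian noise at either a small or a large scale and scores a model on how well it recovers the clean structure. The `denoiser` callable, the noise scales, and the array shapes are assumptions made for the example, not AlphaFold 3's actual code.

```python
import numpy as np

def training_step(denoiser, true_coords: np.ndarray, rng: np.random.Generator) -> float:
    """One illustrative denoising-training step.

    true_coords: (num_atoms, 3) array of ground-truth atom positions.
    denoiser:    a hypothetical model mapping (noisy_coords, sigma) -> predicted coords.
    """
    # Pick one of two noise regimes: a small scale that jitters individual atoms,
    # or a large scale that effectively scrambles the overall arrangement
    # (mirroring the two noise levels described above).
    sigma = rng.choice([0.5, 10.0])
    noisy_coords = true_coords + rng.normal(scale=sigma, size=true_coords.shape)

    # The model must map the noisy structure back to exact atom positions.
    predicted = denoiser(noisy_coords, sigma)
    loss = float(np.mean((predicted - true_coords) ** 2))  # L2 reconstruction error
    return loss
```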

During training, the team found that it took about 20,000 instances of protein structures for AlphaFold 3 to get about 97 percent of a set of test structures right. By 60,000 instances, it started getting protein-protein interfaces correct at that frequency, too. And, critically, it started getting proteins complexed with other molecules right, as well.

DeepMind co-founder Mustafa Suleyman will run Microsoft’s new consumer AI unit

Minding deeply —

Most staffers from Suleyman’s startup, Inflection, will join Microsoft as well.

Mustafa Suleyman talks on Day 1 of the AI Safety Summit at Bletchley Park on November 1, 2023, in Bletchley, England.

Microsoft has hired Mustafa Suleyman, the co-founder of Google’s DeepMind and chief executive of artificial intelligence start-up Inflection, to run a new consumer AI unit.

Suleyman, a British entrepreneur who co-founded DeepMind in London in 2010, will report to Microsoft chief executive Satya Nadella, the company announced on Tuesday. He will launch a division of Microsoft that brings consumer-facing products including Microsoft’s Copilot, Bing, Edge, and GenAI under one team called Microsoft AI.

It is the latest move by Microsoft to capitalize on the boom in generative AI. It has invested $13 billion in OpenAI, the maker of ChatGPT, and rapidly integrated its technology into Microsoft products.

Microsoft’s investment in OpenAI has given it an early lead in Silicon Valley’s race to deploy AI, leaving its biggest rival, Google, struggling to catch up. It also has invested in other AI startups, including French developer Mistral.

It has been rolling out an AI assistant in its products such as Windows, Office software, and cyber security tools. Suleyman’s unit will work on projects including integrating an AI version of Copilot into its Windows operating system and enhancing the use of generative AI in its Bing search engine.

Nadella said in a statement on Tuesday: “I’ve known Mustafa for several years and have greatly admired him as a founder of both DeepMind and Inflection, and as a visionary, product maker and builder of pioneering teams that go after bold missions.”

DeepMind was acquired by Google in 2014 for $500 million, one of the first large bets by a big tech company on a startup AI lab. The company faced controversy a few years later over some of its projects, including its work for the UK healthcare sector, which was found by a government watchdog to have been granted inappropriate access to patient records.

Suleyman, who was the main public face for the company, was placed on leave in 2019. DeepMind workers had complained that he had an overly aggressive management style. Addressing staff complaints at the time, Suleyman said: “I really screwed up. I was very demanding and pretty relentless.”

He moved to Google months later, where he led AI product management. In 2022, he joined Silicon Valley venture capital firm Greylock and launched Inflection later that year.

Microsoft will also hire most of Inflection’s staff, including Karén Simonyan, cofounder and chief scientist of Inflection, who will be chief scientist of the AI group. Microsoft did not clarify the number of employees moving over but said it included AI engineers, researchers, and large language model builders who have designed and co-authored “many of the most important contributions in advancing AI over the last five years.”

Inflection, a rival to OpenAI, will switch its focus from its consumer chatbot, Pi, and instead move to sell enterprise AI software to businesses, according to a statement on its website. Sean White, who has held various technology roles, has joined as its new chief executive.

Inflection’s third cofounder, Reid Hoffman, the founder and executive chair of LinkedIn, will remain on Inflection’s board. Inflection had raised $1.3 billion in June, valuing the group at about $4 billion, in one of the largest fundraisings by an AI start-up amid an explosion of interest in the sector.

The new unit marks a big organizational shift at Microsoft. Mikhail Parakhin, its president of web services, will move along with his entire team to report to Suleyman.

“We have a real shot to build technology that was once thought impossible and that lives up to our mission to ensure the benefits of AI reach every person and organization on the planet, safely and responsibly,” Nadella said.

Competition regulators in the US and Europe have been scrutinizing the relationship between Microsoft and OpenAI amid a broader inquiry into AI investments.

© 2024 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.

DeepMind AI rivals the world’s smartest high schoolers at geometry

Demis Hassabis, CEO of DeepMind Technologies and developer of AlphaGo, attends the AI Safety Summit at Bletchley Park on November 2, 2023, in Bletchley, England.

A system developed by Google’s DeepMind has set a new record for AI performance on geometry problems. DeepMind’s AlphaGeometry managed to solve 25 of the 30 geometry problems drawn from the International Mathematical Olympiad between 2000 and 2022.

That puts the software ahead of the vast majority of young mathematicians and just shy of IMO gold medalists. DeepMind estimates that the average gold medalist would have solved 26 out of 30 problems. Many view the IMO as the world’s most prestigious math competition for high school students.

“Because language models excel at identifying general patterns and relationships in data, they can quickly predict potentially useful constructs, but often lack the ability to reason rigorously or explain their decisions,” DeepMind writes. To overcome this difficulty, DeepMind paired a language model with a more traditional symbolic deduction engine that performs algebraic and geometric reasoning.

The research was led by Trieu Trinh, a computer scientist who recently earned his PhD from New York University. He was a resident at DeepMind between 2021 and 2023.

Evan Chen, a former Olympiad gold medalist who evaluated some of AlphaGeometry’s output, praised it as “impressive because it’s both verifiable and clean.” Whereas some earlier software generated complex geometry proofs that were hard for human reviewers to understand, the output of AlphaGeometry is similar to what a human mathematician would write.

AlphaGeometry is part of DeepMind’s larger project to improve the reasoning capabilities of large language models by combining them with traditional search algorithms. DeepMind has published several papers in this area over the last year.

How AlphaGeometry works

Let’s start with a simple example shown in the AlphaGeometry paper, which was published by Nature on Wednesday:

The goal is to prove that if a triangle has two equal sides (AB and AC), then the angles opposite those sides will also be equal. We can do this by creating a new point D at the midpoint of the third side of the triangle (BC). It’s easy to show that all three sides of triangle ABD are the same length as the corresponding sides of triangle ACD. And two triangles whose corresponding sides are equal always have equal corresponding angles.
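
For readers who want the argument spelled out, the same informal proof can be written as a short chain of steps (standard Euclidean reasoning, not AlphaGeometry's output):

```latex
\begin{enumerate}
  \item Let $D$ be the midpoint of $BC$, so $BD = DC$.
  \item $AB = AC$ is given, and side $AD$ is shared by triangles $ABD$ and $ACD$.
  \item The two triangles therefore have three pairs of equal sides, so they are
        congruent by the side-side-side rule.
  \item Corresponding angles of congruent triangles are equal, hence
        $\angle ABD = \angle ACD$, i.e., $\angle ABC = \angle ACB$.
\end{enumerate}
```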

Geometry problems from the IMO are much more complex than this toy problem, but fundamentally, they have the same structure. They all start with a geometric figure and some facts about the figure like “side AB is the same length as side AC.” The goal is to generate a sequence of valid inferences that conclude with a given statement like “angle ABC is equal to angle BCA.”

For many years, we’ve had software that can generate lists of valid conclusions that can be drawn from a set of starting assumptions. Simple geometry problems can be solved by “brute force”: mechanically listing every possible fact that can be inferred from the given assumption, then listing every possible inference from those facts, and so on until you reach the desired conclusion.
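
A toy version of that brute-force strategy, often called forward chaining, looks roughly like the sketch below. The facts and rules are stand-ins invented for the example; a real geometry engine works over a far richer set of deduction rules.

```python
def forward_chain(facts: set[str], rules: list[tuple[frozenset[str], str]], goal: str) -> bool:
    """Repeatedly apply rules of the form (premises -> conclusion) until the
    goal appears or no new facts can be derived."""
    derived = set(facts)
    changed = True
    while changed and goal not in derived:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)  # record the newly inferred fact
                changed = True
    return goal in derived

# Toy run: two made-up rules carry us from the starting facts to the goal.
rules = [
    (frozenset({"AB = AC", "D midpoint of BC"}), "triangle ABD congruent to ACD"),
    (frozenset({"triangle ABD congruent to ACD"}), "angle ABC = angle ACB"),
]
print(forward_chain({"AB = AC", "D midpoint of BC"}, rules, "angle ABC = angle ACB"))  # True
```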

But this kind of brute-force search isn’t feasible for an IMO-level geometry problem because the search space is too large. Not only do harder problems require longer proofs, but sophisticated proofs often require the introduction of new elements to the initial figure—as with point D in the above proof. Once you allow for these kinds of “auxiliary points,” the space of possible proofs explodes and brute-force methods become impractical.
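
That is where the language-model half of the system comes in: the symbolic engine handles the rigorous deduction, and the model's role is to suggest promising auxiliary constructions when the engine runs out of moves. The sketch below shows one plausible way to organize such a loop; the function names and the interface between the two components are assumptions for illustration, not DeepMind's implementation.

```python
def solve(problem_facts: set[str], goal: str, propose_construction, deduce_closure, max_attempts: int = 10):
    """Alternate between exhaustive symbolic deduction and model-suggested
    auxiliary constructions (e.g., "add the midpoint D of BC")."""
    facts = set(problem_facts)
    for _ in range(max_attempts):
        facts = deduce_closure(facts)       # symbolic engine: derive everything it can
        if goal in facts:
            return facts                    # proof reached; derived facts trace the path
        # Engine is stuck: ask the language model for a new auxiliary element,
        # conditioned on the problem and everything derived so far.
        facts.add(propose_construction(facts, goal))
    return None                             # gave up within the attempt budget
```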
