

Claude AI to process secret government data through new Palantir deal

An ethical minefield

Since its founding in 2021, Anthropic has marketed itself as a company that takes an ethics- and safety-focused approach to AI development. It differentiates itself from competitors like OpenAI by adopting what it calls responsible development practices and self-imposed ethical constraints on its models, such as its “Constitutional AI” system.

As Futurism points out, this new defense partnership appears to conflict with Anthropic’s public “good guy” persona, and pro-AI pundits on social media are noticing. Frequent AI commentator Nabeel S. Qureshi wrote on X, “Imagine telling the safety-concerned, effective altruist founders of Anthropic in 2021 that a mere three years after founding the company, they’d be signing partnerships to deploy their ~AGI model straight to the military frontlines.”

Anthropic’s “Constitutional AI” logo. Credit: Anthropic / Benj Edwards

Aside from the implications of working with defense and intelligence agencies, the deal connects Anthropic with Palantir, a controversial company that recently won a $480 million contract to develop an AI-powered target identification system called Maven Smart System for the US Army. Project Maven has sparked criticism within the tech sector over military applications of AI technology.

It’s worth noting that Anthropic’s terms of service do outline specific rules and limitations for government use. These terms permit activities like foreign intelligence analysis and identifying covert influence campaigns, while prohibiting uses such as disinformation, weapons development, censorship, and domestic surveillance. Government agencies that maintain regular communication with Anthropic about their use of Claude may receive broader permissions to use the AI models.

Even if Claude is never used to target a human or as part of a weapons system, other issues remain. While Anthropic’s Claude models are highly regarded in the AI community, they (like all LLMs) have a tendency to confabulate, potentially generating incorrect information in a way that is difficult to detect.

That’s a huge potential problem that could impact Claude’s effectiveness with secret government data, and that fact, along with the other associations, has Futurism’s Victor Tangermann worried. As he puts it, “It’s a disconcerting partnership that sets up the AI industry’s growing ties with the US military-industrial complex, a worrying trend that should raise all kinds of alarm bells given the tech’s many inherent flaws—and even more so when lives could be at stake.”



40 years later, The Terminator still shapes our view of AI

Countries, including the US, specify the need for human operators to “exercise appropriate levels of human judgment over the use of force” when operating autonomous weapon systems. In some instances, operators can visually verify targets before authorizing strikes and can “wave off” attacks if situations change.

AI is already being used to support military targeting. Some argue this is even a responsible use of the technology, since it could reduce collateral damage. That idea evokes Schwarzenegger’s role reversal as the benevolent “machine guardian” in the original film’s sequel, Terminator 2: Judgment Day.

However, AI could also undermine the role human drone operators play in challenging a machine’s recommendations. Some researchers think that humans have a tendency to trust whatever computers say, a phenomenon known as automation bias.

“Loitering munitions”

Militaries engaged in conflicts are increasingly making use of small, cheap aerial drones that can detect and crash into targets. These “loitering munitions” (so named because they are designed to hover over a battlefield) feature varying degrees of autonomy.

As I’ve argued in research co-authored with security researcher Ingvild Bode, the dynamics of the Ukraine war and other recent conflicts in which these munitions have been widely used raise concerns about the quality of control exerted by human operators.

Ground-based military robots armed with weapons and designed for use on the battlefield might call to mind the relentless Terminators, and weaponized aerial drones may, in time, come to resemble the franchise’s airborne “hunter-killers.” But these technologies don’t hate us as Skynet does, and neither are they “super-intelligent.”

However, it’s crucially important that human operators continue to exercise agency and meaningful control over machine systems.

Arguably, The Terminator’s greatest legacy has been to distort how we collectively think and speak about AI. This matters now more than ever, because of how central these technologies have become to the strategic competition for global power and influence between the US, China, and Russia.

The entire international community, from superpowers such as China and the US to smaller countries, needs to find the political will to cooperate—and to manage the ethical and legal challenges posed by the military applications of AI during this time of geopolitical upheaval. How nations navigate these challenges will determine whether we can avoid the dystopian future so vividly imagined in The Terminator—even if we don’t see time-traveling cyborgs any time soon.

Tom F.A. Watts, Postdoctoral Fellow, Department of Politics, International Relations, and Philosophy, Royal Holloway, University of London. This article is republished from The Conversation under a Creative Commons license. Read the original article.
