

Amazon pours another $4B into Anthropic, OpenAI’s biggest rival

Anthropic, founded by former OpenAI executives Dario and Daniela Amodei in 2021, will continue using Google’s cloud services along with Amazon’s infrastructure. The UK Competition and Markets Authority reviewed Amazon’s partnership with Anthropic earlier this year and ultimately determined it did not have jurisdiction to investigate further, clearing the way for the partnership to continue.

Shaking the money tree

Amazon’s renewed investment in Anthropic also comes during a time of intense competition between cloud providers Amazon, Microsoft, and Google. Each company has made strategic partnerships with AI model developers—Microsoft with OpenAI (to the tune of $13 billion), Google with Anthropic (committing $2 billion over time), for example. These investments also encourage the use of each company’s data centers as demand for AI grows.

The size of these investments reflects the current state of AI development. OpenAI raised an additional $6.6 billion in October, potentially valuing the company at $157 billion. Anthropic has been eyeballing a $40 billion valuation during a recent investment round.

Training and running AI models is very expensive. While Google and Meta have their own profitable mainline businesses that can subsidize AI development, dedicated AI firms like OpenAI and Anthropic need constant infusions of cash to stay afloat—in other words, this won’t be the last time we hear of billion-dollar-scale AI investments from Big Tech.


Niantic uses Pokémon Go player data to build AI navigation system

Last week, Niantic announced plans to create an AI model for navigating the physical world using scans collected from players of its mobile games, such as Pokémon Go, and from users of its Scaniverse app, reports 404 Media.

All AI models require training data. So far, companies have collected data from websites, YouTube videos, books, audio sources, and more, but this is perhaps the first we’ve heard of AI training data collected through a mobile gaming app.

“Over the past five years, Niantic has focused on building our Visual Positioning System (VPS), which uses a single image from a phone to determine its position and orientation using a 3D map built from people scanning interesting locations in our games and Scaniverse,” Niantic wrote in a company blog post.

The company calls its creation a “large geospatial model” (LGM), drawing parallels to large language models (LLMs) like the kind that power ChatGPT. Whereas language models process text, Niantic’s model will process physical spaces using geolocated images collected through its apps.

The scale of Niantic’s data collection reveals the company’s sizable presence in the AR space. The model draws from over 10 million scanned locations worldwide, with users capturing roughly 1 million new scans weekly through Pokémon Go and Scaniverse. These scans come from a pedestrian perspective, capturing areas inaccessible to cars and street-view cameras.

First-person scans

The company reports it has trained more than 50 million neural networks, each representing a specific location or viewing angle. These networks compress thousands of mapping images into digital representations of physical spaces. Together, they contain over 150 trillion parameters—adjustable values that help the networks recognize and understand locations. Multiple networks can contribute to mapping a single location, and Niantic plans to combine its knowledge into one comprehensive model that can understand any location, even from unfamiliar angles.


AI-generated shows could replace lost DVD revenue, Ben Affleck says

Last week, actor and director Ben Affleck shared his views on AI’s role in filmmaking during the 2024 CNBC Delivering Alpha investor summit, arguing that AI models will transform visual effects but won’t replace creative filmmaking anytime soon. A video clip of Affleck’s opinion began circulating widely on social media not long after.

“Didn’t expect Ben Affleck to have the most articulate and realistic explanation where video models and Hollywood is going,” wrote one X user.

In the clip, Affleck spoke of current AI models’ abilities as imitators and conceptual translators—mimics that are typically better at translating one style into another instead of originating deeply creative material.

“AI can write excellent imitative verse, but it cannot write Shakespeare,” Affleck told CNBC’s David Faber. “The function of having two, three, or four actors in a room and the taste to discern and construct that entirely eludes AI’s capability.”

Affleck sees AI models as “craftsmen” rather than artists (although some might find the term “craftsman” in his analogy somewhat imprecise). He explained that while AI can learn through imitation—like a craftsman studying furniture-making techniques—it lacks the creative judgment that defines artistry. “Craftsman is knowing how to work. Art is knowing when to stop,” he said.

“It’s not going to replace human beings making films,” Affleck stated. Instead, he sees AI taking over “the more laborious, less creative and more costly aspects of filmmaking,” which could lower barriers to entry and make it easier for emerging filmmakers to create movies like Good Will Hunting.

Films will become dramatically cheaper to make

While Affleck’s comments may seem on the surface like an attack on generative AI, he did not deny the impact the technology may have on filmmaking. For example, he predicted that AI would reduce costs and speed up production schedules, potentially allowing shows like HBO’s House of the Dragon to release two seasons in the same period it currently takes to make one.


New secret math benchmark stumps AI models and PhDs alike

Epoch AI allowed Fields Medal winners Terence Tao and Timothy Gowers to review portions of the benchmark. “These are extremely challenging,” Tao said in feedback provided to Epoch. “I think that in the near term basically the only way to solve them, short of having a real domain expert in the area, is by a combination of a semi-expert like a graduate student in a related field, maybe paired with some combination of a modern AI and lots of other algebra packages.”

A chart showing AI models’ limited success on the FrontierMath problems, taken from Epoch AI’s research paper. Credit: Epoch AI

To aid in the verification of correct answers during testing, the FrontierMath problems must have answers that can be automatically checked through computation, either as exact integers or mathematical objects. The designers made problems “guessproof” by requiring large numerical answers or complex mathematical solutions, with less than a 1 percent chance of correct random guesses.
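The verification scheme described above can be sketched in a few lines. The problem below is an illustrative stand-in, not an actual FrontierMath item, but it shows the idea: answers are exact integers checked by computation, drawn from a space large enough that random guessing almost never succeeds.

```python
# Hypothetical sketch of "guessproof" automated grading in the spirit of
# FrontierMath. The reference computation here is a simple stand-in.

def reference_answer() -> int:
    # Stand-in computation: the sum of the first 10,000 squares.
    return sum(k * k for k in range(1, 10_001))

def grade(submitted: int) -> bool:
    # Exact integer comparison: no partial credit, no fuzzy matching.
    # Because valid answers are large, specific numbers, a random guess
    # has a vanishingly small chance of passing.
    return submitted == reference_answer()

print(grade(333_383_335_000))  # closed form n(n+1)(2n+1)/6 for n=10000 -> True
print(grade(42))               # -> False
```

Real benchmark problems replace the toy computation with research-level mathematics, but the grading step stays this mechanical, which is what makes large-scale automated evaluation possible.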

Mathematician Evan Chen, writing on his blog, explained how he thinks that FrontierMath differs from traditional math competitions like the International Mathematical Olympiad (IMO). Problems in that competition typically require creative insight while avoiding complex implementation and specialized knowledge, he says. But for FrontierMath, “they keep the first requirement, but outright invert the second and third requirement,” Chen wrote.

While IMO problems avoid specialized knowledge and complex calculations, FrontierMath embraces them. “Because an AI system has vastly greater computational power, it’s actually possible to design problems with easily verifiable solutions using the same idea that IOI or Project Euler does—basically, ‘write a proof’ is replaced by ‘implement an algorithm in code,'” Chen explained.

The organization plans regular evaluations of AI models against the benchmark while expanding its problem set. They say they will release additional sample problems in the coming months to help the research community test their systems.


Is “AI welfare” the new frontier in ethics?

The researchers propose that companies could adapt the “marker method” that some researchers use to assess consciousness in animals—looking for specific indicators that may correlate with consciousness, although these markers are still speculative. The authors emphasize that no single feature would definitively prove consciousness, but they claim that examining multiple indicators may help companies make probabilistic assessments about whether their AI systems might require moral consideration.

The risks of wrongly thinking software is sentient

While the researchers behind “Taking AI Welfare Seriously” worry that companies might create and mistreat conscious AI systems on a massive scale, they also caution that companies could waste resources protecting AI systems that don’t actually need moral consideration.

Incorrectly anthropomorphizing software, or ascribing human traits to it, can present risks in other ways. For example, that belief can enhance the manipulative powers of AI language models by suggesting that AI models have capabilities, such as human-like emotions, that they actually lack. In 2022, Google fired engineer Blake Lemoine after he claimed that the company’s AI model, called “LaMDA,” was sentient and argued for its welfare internally.

And shortly after Microsoft released Bing Chat in February 2023, many people were convinced that Sydney (the chatbot’s code name) was sentient and somehow suffering because of its simulated emotional display. So much so, in fact, that once Microsoft “lobotomized” the chatbot by changing its settings, users convinced of its sentience mourned the loss as if they had lost a human friend. Others endeavored to help the AI model somehow escape its bonds.

Even so, as AI models get more advanced, the concept of potentially safeguarding the welfare of future, more advanced AI systems is seemingly gaining steam, although fairly quietly. As Transformer’s Shakeel Hashim points out, other tech companies have started similar initiatives to Anthropic’s. Google DeepMind recently posted a job listing for research on machine consciousness (since removed), and the authors of the new AI welfare report thank two OpenAI staff members in the acknowledgements.


Claude AI to process secret government data through new Palantir deal

An ethical minefield

Since its founders started Anthropic in 2021, the company has marketed itself as one that takes an ethics- and safety-focused approach to AI development. The company differentiates itself from competitors like OpenAI by adopting what it calls responsible development practices and self-imposed ethical constraints on its models, such as its “Constitutional AI” system.

As Futurism points out, this new defense partnership appears to conflict with Anthropic’s public “good guy” persona, and pro-AI pundits on social media are noticing. Frequent AI commentator Nabeel S. Qureshi wrote on X, “Imagine telling the safety-concerned, effective altruist founders of Anthropic in 2021 that a mere three years after founding the company, they’d be signing partnerships to deploy their ~AGI model straight to the military frontlines.”

Anthropic’s “Constitutional AI” logo. Credit: Anthropic / Benj Edwards

Aside from the implications of working with defense and intelligence agencies, the deal connects Anthropic with Palantir, a controversial company that recently won a $480 million contract to develop an AI-powered target identification system called Maven Smart System for the US Army. Project Maven has sparked criticism within the tech sector over military applications of AI technology.

It’s worth noting that Anthropic’s terms of service do outline specific rules and limitations for government use. These terms permit activities like foreign intelligence analysis and identifying covert influence campaigns, while prohibiting uses such as disinformation, weapons development, censorship, and domestic surveillance. Government agencies that maintain regular communication with Anthropic about their use of Claude may receive broader permissions to use the AI models.

Even if Claude is never used to target a human or as part of a weapons system, other issues remain. While its Claude models are highly regarded in the AI community, they (like all LLMs) have the tendency to confabulate, potentially generating incorrect information in a way that is difficult to detect.

That’s a huge potential problem that could impact Claude’s effectiveness with secret government data, and that fact, along with the other associations, has Futurism’s Victor Tangermann worried. As he puts it, “It’s a disconcerting partnership that sets up the AI industry’s growing ties with the US military-industrial complex, a worrying trend that should raise all kinds of alarm bells given the tech’s many inherent flaws—and even more so when lives could be at stake.”


Trump plans to dismantle Biden AI safeguards after victory

That’s not the only uncertainty at play. Just last week, House Speaker Mike Johnson—a staunch Trump supporter—said that Republicans “probably will” repeal the bipartisan CHIPS and Science Act, which is a Biden initiative to spur domestic semiconductor chip production, among other aims. Trump has previously spoken out against the bill. After getting some pushback on his comments from Democrats, Johnson said he would like to “streamline” the CHIPS Act instead, according to The Associated Press.

Then there’s the Elon Musk factor. The tech billionaire spent tens of millions through a political action committee supporting Trump’s campaign and has been angling for regulatory influence in the new administration. His AI company, xAI, which makes the Grok-2 language model, stands alongside his other ventures—Tesla, SpaceX, Starlink, Neuralink, and X (formerly Twitter)—as businesses that could see regulatory changes in his favor under a new administration.

What might take its place

If Trump strips away federal regulation of AI, state governments may step in to fill any federal regulatory gaps. For example, in March, Tennessee enacted protections against AI voice cloning, and in May, Colorado created a tiered system for AI deployment oversight. In September, California passed multiple AI safety bills, one requiring companies to publish details about their AI training methods and a contentious anti-deepfake bill aimed at protecting the likenesses of actors.

So far, it’s unclear what Trump’s policies on AI might represent besides “deregulate whenever possible.” During his campaign, Trump promised to support AI development centered on “free speech and human flourishing,” though he provided few specifics. He has called AI “very dangerous” and spoken about its high energy requirements.

Trump allies at the America First Policy Institute have previously stated they want to “Make America First in AI” with a new Trump executive order, which still only exists as a speculative draft, to reduce regulations on AI and promote a series of “Manhattan Projects” to advance military AI capabilities.

During his previous administration, Trump signed AI executive orders that focused on research institutes and directed federal agencies to prioritize AI development while mandating that they “protect civil liberties, privacy, and American values.”

But the AI environment has changed dramatically since then in the wake of ChatGPT and media-reality-warping image synthesis models, so those earlier orders likely don’t point the way to his future positions on the topic. For more details, we’ll have to wait and see what unfolds.


Anthropic’s Haiku 3.5 surprises experts with an “intelligence” price increase

Speaking of Opus, Claude 3.5 Opus is nowhere to be seen, as AI researcher Simon Willison noted to Ars Technica in an interview. “All references to 3.5 Opus have vanished without a trace, and the price of 3.5 Haiku was increased the day it was released,” he said. “Claude 3.5 Haiku is significantly more expensive than both Gemini 1.5 Flash and GPT-4o mini—the excellent low-cost models from Anthropic’s competitors.”

Cheaper over time?

So far in the AI industry, newer versions of AI language models have typically maintained pricing similar to or cheaper than their predecessors’. The company had initially indicated Claude 3.5 Haiku would cost the same as the previous version before announcing the higher rates.

“I was expecting this to be a complete replacement for their existing Claude 3 Haiku model, in the same way that Claude 3.5 Sonnet eclipsed the existing Claude 3 Sonnet while maintaining the same pricing,” Willison wrote on his blog. “Given that Anthropic claim that their new Haiku out-performs their older Claude 3 Opus, this price isn’t disappointing, but it’s a small surprise nonetheless.”

Claude 3.5 Haiku arrives with some trade-offs. While the model produces longer text outputs and contains more recent training data, it cannot analyze images like its predecessor. Alex Albert, who leads developer relations at Anthropic, wrote on X that the earlier version, Claude 3 Haiku, will remain available for users who need image processing capabilities and lower costs.

The new model is not yet available in the Claude.ai web interface or app. Instead, it runs on Anthropic’s API and third-party platforms, including AWS Bedrock. Anthropic markets the model for tasks like coding suggestions, data extraction and labeling, and content moderation, though, like any LLM, it can easily make stuff up confidently.

“Is it good enough to justify the extra spend? It’s going to be difficult to figure that out,” Willison told Ars. “Teams with robust automated evals against their use-cases will be in a good place to answer that question, but those remain rare.”


New Zemeckis film used AI to de-age Tom Hanks and Robin Wright

On Friday, TriStar Pictures released Here, a $50 million Robert Zemeckis-directed film that used real-time generative AI face-transformation techniques to portray actors Tom Hanks and Robin Wright across a 60-year span, marking one of Hollywood’s first full-length features built around AI-powered visual effects.

The film adapts a 2014 graphic novel set primarily in a New Jersey living room across multiple time periods. Rather than cast different actors for various ages, the production used AI to modify Hanks’ and Wright’s appearances throughout.

The de-aging technology comes from Metaphysic, a visual effects company that creates real-time face-swapping and aging effects. During filming, the crew watched two monitors simultaneously: one showing the actors’ actual appearances and another displaying them at whatever age the scene required.

Here – Official Trailer (HD)

Metaphysic developed the facial modification system by training custom machine-learning models on frames of Hanks’ and Wright’s previous films. This included a large dataset of facial movements, skin textures, and appearances under varied lighting conditions and camera angles. The resulting models can generate instant face transformations without the months of manual post-production work traditional CGI requires.

Unlike previous aging effects that relied on frame-by-frame manipulation, Metaphysic’s approach generates transformations instantly by analyzing facial landmarks and mapping them to trained age variations.

“You couldn’t have made this movie three years ago,” Zemeckis told The New York Times in a detailed feature about the film. Traditional visual effects for this level of face modification would reportedly require hundreds of artists and a substantially larger budget closer to standard Marvel movie costs.

This isn’t the first film that has used AI techniques to de-age actors. ILM’s approach to de-aging Harrison Ford in 2023’s Indiana Jones and the Dial of Destiny used a proprietary system called Flux with infrared cameras to capture facial data during filming, then used older images of Ford to de-age him in post-production. By contrast, Metaphysic’s AI models process transformations without additional hardware and show results during filming.


Nvidia ousts Intel from Dow Jones Index after 25-year run

Changing winds in the tech industry

The Dow Jones Industrial Average serves as a benchmark of the US stock market by tracking 30 large, publicly owned companies that represent major sectors of the US economy. Membership in the index has long been considered a sign of prestige among American companies.

However, S&P regularly makes changes to the index to better reflect current realities and trends in the marketplace, so deletion from it likely marks a new symbolic low point for Intel.

While the rise of AI has caused a surge in several tech stocks, it has delivered tough times for chipmaker Intel, which is perhaps best known for manufacturing CPUs that power Windows-based PCs.

Intel recently withdrew its forecast to sell over $500 million worth of AI-focused Gaudi chips in 2024, a target CEO Pat Gelsinger had promoted after initially pushing his team to project $1 billion in sales. The setback follows Intel’s pattern of missed opportunities in AI, with Reuters reporting that Bank of America analyst Vivek Arya questioned the company’s AI strategy during a recent earnings call.

In addition, Intel has faced pressure from device manufacturers increasingly adopting Arm-based alternatives, which power billions of smartphones, and from symbolic blows like Apple’s transition away from Intel processors for Macs to its own custom-designed chips based on the Arm architecture.

Whether the historic tech company will rebound is yet to be seen, but investors will undoubtedly keep a close watch on Intel as it attempts to reorient itself in the face of changing trends in the tech industry.


Downey Jr. plans to fight AI re-creations from beyond the grave

Robert Downey Jr. has declared that he will sue any future Hollywood executives who try to re-create his likeness using AI digital replicas, as reported by Variety. His comments came during an appearance on the “On With Kara Swisher” podcast, where he discussed AI’s growing role in entertainment.

“I intend to sue all future executives just on spec,” Downey told Swisher when discussing the possibility of studios using AI or deepfakes to re-create his performances after his death. When Swisher pointed out he would be deceased at the time, Downey responded that his law firm “will still be very active.”

The Oscar winner expressed confidence that Marvel Studios would not use AI to re-create his Tony Stark character, citing his trust in decision-makers there. “I am not worried about them hijacking my character’s soul because there’s like three or four guys and gals who make all the decisions there anyway and they would never do that to me,” he said.

Downey currently performs on Broadway in McNeal, a play that examines corporate leaders in AI technology. During the interview, he freely critiqued tech executives—Variety pointed out a particular quote from the interview where he criticized tech leaders who potentially do negative things but seek positive attention.


Hospitals adopt error-prone AI transcription tools despite warnings

In one case from the study cited by AP, when a speaker described “two other girls and one lady,” Whisper added fictional text specifying that they “were Black.” In another, the audio said, “He, the boy, was going to, I’m not sure exactly, take the umbrella.” Whisper transcribed it to, “He took a big piece of a cross, a teeny, small piece … I’m sure he didn’t have a terror knife so he killed a number of people.”

An OpenAI spokesperson told the AP that the company appreciates the researchers’ findings and that it actively studies how to reduce fabrications and incorporates feedback in updates to the model.

Why Whisper confabulates

Whisper’s unsuitability for high-risk domains stems from its propensity to sometimes confabulate, or plausibly make up, inaccurate outputs. The AP report says, “Researchers aren’t certain why Whisper and similar tools hallucinate,” but that isn’t true. We know exactly why Transformer-based AI models like Whisper behave this way.

Whisper is based on technology that is designed to predict the next most likely token (chunk of data) that should appear after a sequence of tokens provided by a user. In the case of ChatGPT, the input tokens come in the form of a text prompt. In the case of Whisper, the input is tokenized audio data.

The transcription output from Whisper is a prediction of what is most likely, not what is most accurate. Accuracy in Transformer-based outputs is typically proportional to the presence of relevant accurate data in the training dataset, but it is never guaranteed. If there is ever a case where there isn’t enough contextual information in its neural network for Whisper to make an accurate prediction about how to transcribe a particular segment of audio, the model will fall back on what it “knows” about the relationships between sounds and words it has learned from its training data.
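A toy sketch can illustrate this fallback behavior. This is not Whisper’s actual decoding code, and the probability table is invented for illustration, but it shows how a greedy next-token predictor must emit its most likely continuation even when the context is unfamiliar, which is exactly where fabricated output comes from:

```python
# Toy illustration (not Whisper's actual implementation) of why a
# next-token predictor can confabulate: when the input context was never
# seen in training, the model still emits its most probable continuation.

# Hypothetical learned probabilities: P(next_word | previous_word)
LEARNED = {
    "he": {"took": 0.6, "was": 0.4},
    "took": {"the": 0.7, "a": 0.3},
    "the": {"umbrella": 0.5, "knife": 0.5},
}

def next_word(prev: str) -> str:
    # Greedy decoding: always pick the most likely continuation.
    choices = LEARNED.get(prev)
    if choices is None:
        # Unfamiliar context: fall back on a high-frequency prior learned
        # from training data. This fallback is never "silence" -- the model
        # produces *something*, grounded or not.
        return "the"
    return max(choices, key=choices.get)

print(next_word("he"))      # "took" -- grounded in learned statistics
print(next_word("mumble"))  # "the" -- invented, not grounded in the audio
```

A real model operates over tens of thousands of tokens with learned contextual embeddings rather than a lookup table, but the core dynamic is the same: the output is always a prediction of what is most likely, never a refusal to transcribe what it didn’t hear.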
