AI

Microsoft CTO Kevin Scott thinks LLM “scaling laws” will hold despite criticism

As the word turns —

Will LLMs keep improving if we throw more compute at them? OpenAI dealmaker thinks so.

Kevin Scott, CTO and EVP of AI at Microsoft, speaks onstage during Vox Media’s 2023 Code Conference at The Ritz-Carlton, Laguna Niguel on September 27, 2023 in Dana Point, California.

During an interview with Sequoia Capital’s Training Data podcast published last Tuesday, Microsoft CTO Kevin Scott doubled down on his belief that so-called large language model (LLM) “scaling laws” will continue to drive AI progress, despite some skepticism in the field that progress has leveled out. Scott played a key role in forging a $13 billion technology-sharing deal between Microsoft and OpenAI.

“Despite what other people think, we’re not at diminishing marginal returns on scale-up,” Scott said. “And I try to help people understand there is an exponential here, and the unfortunate thing is you only get to sample it every couple of years because it just takes a while to build supercomputers and then train models on top of them.”

LLM scaling laws refer to patterns explored by OpenAI researchers in 2020 showing that the performance of language models tends to improve predictably as the models get larger (more parameters), are trained on more data, and have access to more computational power (compute). The laws suggest that simply scaling up model size and training data can lead to significant improvements in AI capabilities without necessarily requiring fundamental algorithmic breakthroughs.
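
To make the shape of that claim concrete, here is a minimal sketch of the kind of power-law relationship scaling-law papers describe, using a Chinchilla-style parameterization. The constants and exponents below are invented for illustration; they are not the fitted values from the 2020 paper or from any real model family.

```python
# Toy illustration of the power-law shape behind LLM "scaling laws":
# predicted loss falls smoothly as parameter count (N) and training
# tokens (D) grow. All constants here are illustrative, not real fits.

def predicted_loss(n_params: float, n_tokens: float) -> float:
    """Chinchilla-style estimate: L = E + A / N^alpha + B / D^beta."""
    E, A, B = 1.7, 410.0, 410.0      # irreducible loss + coefficients (made up)
    alpha, beta = 0.34, 0.28         # power-law exponents (made up)
    return E + A / n_params**alpha + B / n_tokens**beta

if __name__ == "__main__":
    for n, d in [(1e9, 2e10), (1e10, 2e11), (1e11, 2e12)]:
        print(f"N={n:.0e} params, D={d:.0e} tokens -> predicted loss {predicted_loss(n, d):.2f}")
```

The point the laws make is simply that, under this kind of fit, each order-of-magnitude increase in model size and training data buys a predictable drop in loss, with no algorithmic breakthrough required.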

Since then, other researchers have challenged the idea that these scaling laws will persist over time, but the concept remains a cornerstone of OpenAI’s AI development philosophy.

You can see Scott’s comments in the video below beginning around 46:05:

Microsoft CTO Kevin Scott on how far scaling laws will extend

Scott’s optimism contrasts with a narrative among some critics in the AI community that progress in LLMs has plateaued around GPT-4 class models. That perception has been fueled by largely informal observations—and some benchmark results—about recent models like Google’s Gemini 1.5 Pro, Anthropic’s Claude Opus, and even OpenAI’s GPT-4o, which some argue haven’t shown the dramatic leaps in capability seen in earlier generations, suggesting that LLM development may be approaching diminishing returns.

“We all know that GPT-3 was vastly better than GPT-2. And we all know that GPT-4 (released thirteen months ago) was vastly better than GPT-3,” wrote AI critic Gary Marcus in April. “But what has happened since?”

The perception of plateau

Scott’s stance suggests that tech giants like Microsoft still feel justified in investing heavily in larger AI models, betting on continued breakthroughs rather than hitting a capability plateau. Given Microsoft’s investment in OpenAI and strong marketing of its own Microsoft Copilot AI features, the company has a strong interest in maintaining the perception of continued progress, even if the tech stalls.

Frequent AI critic Ed Zitron recently wrote in a post on his blog that one defense of continued investment into generative AI is that “OpenAI has something we don’t know about. A big, sexy, secret technology that will eternally break the bones of every hater,” he wrote. “Yet, I have a counterpoint: no it doesn’t.”

Some perceptions of slowing progress in LLM capabilities and benchmarking may be due to the rapid onset of AI in the public eye when, in fact, LLMs have been developing for years prior. OpenAI continued to develop LLMs during a roughly three-year gap between the release of GPT-3 in 2020 and GPT-4 in 2023. Many people likely perceived a rapid jump in capability with GPT-4’s launch in 2023 because they had only become recently aware of GPT-3-class models with the launch of ChatGPT in late November 2022, which used GPT-3.5.

In the podcast interview, the Microsoft CTO pushed back against the idea that AI progress has stalled, but he acknowledged the challenge of infrequent data points in this field, as new models often take years to develop. Despite this, Scott expressed confidence that future iterations will show improvements, particularly in areas where current models struggle.

“The next sample is coming, and I can’t tell you when, and I can’t predict exactly how good it’s going to be, but it will almost certainly be better at the things that are brittle right now, where you’re like, oh my god, this is a little too expensive, or a little too fragile, for me to use,” Scott said in the interview. “All of that gets better. It’ll get cheaper, and things will become less fragile. And then more complicated things will become possible. That is the story of each generation of these models as we’ve scaled up.”

OpenAI reportedly nears breakthrough with “reasoning” AI, reveals progress framework

studies in hype-otheticals —

Five-level AI classification system probably best seen as a marketing exercise.

Illustration of a robot with many arms.

OpenAI recently unveiled a five-tier system to gauge its advancement toward developing artificial general intelligence (AGI), according to an OpenAI spokesperson who spoke with Bloomberg. The company shared this new classification system on Tuesday with employees during an all-hands meeting, aiming to provide a clear framework for understanding AI advancement. However, the system describes hypothetical technology that does not yet exist and is possibly best interpreted as a marketing move to garner investment dollars.

OpenAI has previously stated that AGI—a nebulous term for a hypothetical concept that means an AI system that can perform novel tasks like a human without specialized training—is currently the primary goal of the company. The pursuit of technology that can replace humans at most intellectual work drives most of the enduring hype over the firm, even though such a technology would likely be wildly disruptive to society.

OpenAI CEO Sam Altman has previously stated his belief that AGI could be achieved within this decade, and a large part of the CEO’s public messaging has been related to how the company (and society in general) might handle the disruption that AGI may bring. Along those lines, a ranking system to communicate AI milestones achieved internally on the path to AGI makes sense.

OpenAI’s five levels—which it plans to share with investors—range from current AI capabilities to systems that could potentially manage entire organizations. The company believes its technology (such as GPT-4o that powers ChatGPT) currently sits at Level 1, which encompasses AI that can engage in conversational interactions. However, OpenAI executives reportedly told staff they’re on the verge of reaching Level 2, dubbed “Reasoners.”

Bloomberg lists OpenAI’s five “Stages of Artificial Intelligence” as follows:

  • Level 1: Chatbots, AI with conversational language
  • Level 2: Reasoners, human-level problem solving
  • Level 3: Agents, systems that can take actions
  • Level 4: Innovators, AI that can aid in invention
  • Level 5: Organizations, AI that can do the work of an organization

A Level 2 AI system would reportedly be capable of basic problem-solving on par with a human who holds a doctorate degree but lacks access to external tools. During the all-hands meeting, OpenAI leadership reportedly demonstrated a research project using their GPT-4 model that the researchers believe shows signs of approaching this human-like reasoning ability, according to someone familiar with the discussion who spoke with Bloomberg.

The upper levels of OpenAI’s classification describe increasingly potent hypothetical AI capabilities. Level 3 “Agents” could work autonomously on tasks for days. Level 4 systems would generate novel innovations. The pinnacle, Level 5, envisions AI managing entire organizations.

This classification system is still a work in progress. OpenAI plans to gather feedback from employees, investors, and board members, potentially refining the levels over time.

Ars Technica asked OpenAI about the ranking system and the accuracy of the Bloomberg report, and a company spokesperson said they had “nothing to add.”

The problem with ranking AI capabilities

OpenAI isn’t alone in attempting to quantify levels of AI capabilities. As Bloomberg notes, OpenAI’s system feels similar to levels of autonomous driving mapped out by automakers. And in November 2023, researchers at Google DeepMind proposed their own five-level framework for assessing AI advancement, showing that other AI labs have also been trying to figure out how to rank things that don’t yet exist.

OpenAI’s classification system also somewhat resembles Anthropic’s “AI Safety Levels” (ASLs) first published by the maker of the Claude AI assistant in September 2023. Both systems aim to categorize AI capabilities, though they focus on different aspects. Anthropic’s ASLs are more explicitly focused on safety and catastrophic risks (such as ASL-2, which refers to “systems that show early signs of dangerous capabilities”), while OpenAI’s levels track general capabilities.

However, any AI classification system raises questions about whether it’s possible to meaningfully quantify AI progress and what constitutes an advancement (or even what constitutes a “dangerous” AI system, as in the case of Anthropic). The tech industry so far has a history of overpromising AI capabilities, and linear progression models like OpenAI’s potentially risk fueling unrealistic expectations.

There is currently no consensus in the AI research community on how to measure progress toward AGI or even whether AGI is a well-defined or achievable goal. As such, OpenAI’s five-tier system should likely be viewed as a communications tool meant to entice investors, one that reflects the company’s aspirational goals rather than offering a scientific or even technical measurement of progress.

“Superhuman” Go AIs still have trouble defending against these simple exploits

Man vs. machine —

Plugging up “worst-case” algorithmic holes is proving more difficult than expected.

Man vs. machine in a sea of stones. Credit: Getty Images

In the ancient Chinese game of Go, state-of-the-art artificial intelligence has generally been able to defeat the best human players since at least 2016. But in the last few years, researchers have discovered flaws in these top-level AI Go algorithms that give humans a fighting chance. By using unorthodox “cyclic” strategies—ones that even a beginning human player could detect and defeat—a crafty human can often exploit gaps in a top-level AI’s strategy and fool the algorithm into a loss.

Researchers at MIT and FAR AI wanted to see if they could improve this “worst case” performance in otherwise “superhuman” AI Go algorithms, testing a trio of methods to harden the top-level KataGo algorithm’s defenses against adversarial attacks. The results show that creating truly robust, unexploitable AIs may be difficult, even in areas as tightly controlled as board games.

Three failed strategies

In the pre-print paper “Can Go AIs be adversarially robust?”, the researchers aim to create a Go AI that is truly “robust” against any and all attacks. That means an algorithm that can’t be fooled into “game-losing blunders that a human would not commit” but also one that would require any competing AI algorithm to spend significant computing resources to defeat it. Ideally, a robust algorithm should also be able to overcome potential exploits by using additional computing resources when confronted with unfamiliar situations.

An example of the original cyclic attack in action.

The researchers tried three methods to generate such a robust Go algorithm. In the first, they simply fine-tuned the KataGo model using more examples of the unorthodox cyclic strategies that previously defeated it, hoping that KataGo could learn to detect and defeat these patterns after seeing more of them.

This strategy initially seemed promising, letting KataGo win 100 percent of games against a cyclic “attacker.” But after the attacker itself was fine-tuned (a process that used much less computing power than KataGo’s fine-tuning), that win rate fell back down to 9 percent against a slight variation on the original attack.

For their second defense attempt, the researchers ran a multi-round “arms race” in which new adversarial models discover novel exploits and new defensive models seek to plug up those newly discovered holes. After 10 rounds of such iterative training, the final defending algorithm still won only 19 percent of games against a final attacking algorithm that had discovered a previously unseen variation on the exploit. This was true even though the updated algorithm maintained an edge against the earlier attackers it had been trained against.
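
The paper’s actual training harness isn’t reproduced here, but the structure of that arms race is simple to sketch. In the skeleton below, the training and evaluation steps are stubs standing in for the real (and vastly more expensive) KataGo-scale runs; all function names are hypothetical.

```python
# Structural sketch of the iterated "arms race" defense described above.
# train_attacker, train_defender, and win_rate are placeholder stubs for
# the real KataGo-scale training and evaluation jobs.

def train_attacker(defender: dict, budget: float) -> dict:
    """Fine-tune an adversary to exploit the current defender (stub)."""
    return {"targets": defender["round"], "budget": budget}

def train_defender(defender: dict, attacker: dict) -> dict:
    """Fine-tune the defender on games lost to the new attacker (stub)."""
    return {"round": defender["round"] + 1,
            "seen_attacks": defender["seen_attacks"] + [attacker]}

def win_rate(defender: dict, attacker: dict) -> float:
    """Play many evaluation games and return the defender's win rate (stub)."""
    return 0.0

defender = {"round": 0, "seen_attacks": []}
for _ in range(10):  # the researchers ran roughly ten such rounds
    attacker = train_attacker(defender, budget=1.0)  # cheap relative to the defense
    defender = train_defender(defender, attacker)
    # The failure mode reported above: strong results against *past* attackers
    # did not transfer to the *next* attacker's novel variation.
    print(defender["round"], win_rate(defender, attacker))
```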

Even a child can beat a world-class Go AI if they know the right algorithm-exploiting strategy. Credit: Getty Images

In their final attempt, researchers tried a completely new type of training using vision transformers, in an attempt to avoid what might be “bad inductive biases” found in the convolutional neural networks that initially trained KataGo. This method also failed, winning only 22 percent of the time against a variation on the cyclic attack that “can be replicated by a human expert,” the researchers wrote.

Will anything work?

In all three defense attempts, the KataGo-beating adversaries didn’t represent some new, previously unseen height in general Go-playing ability. Instead, these attacking algorithms were laser-focused on discovering exploitable weaknesses in an otherwise performant AI algorithm, even if those simple attack strategies would lose to most human players.

Those exploitable holes highlight the importance of evaluating “worst-case” performance in AI systems, even when the “average-case” performance can seem downright superhuman. On average, KataGo can dominate even high-level human players using traditional strategies. But in the worst case, otherwise “weak” adversaries can find holes in the system that make it fall apart.

It’s easy to extend this kind of thinking to other types of generative AI systems. LLMs that can succeed at some complex creative and reference tasks might still utterly fail when confronted with trivial math problems (or even get “poisoned” by malicious prompts). Visual AI models that can describe and analyze complex photos may nonetheless fail horribly when presented with basic geometric shapes.

If you can solve these kinds of puzzles, you may have better visual reasoning than state-of-the-art AIs.

Improving these kinds of “worst case” scenarios is key to avoiding embarrassing mistakes when rolling an AI system out to the public. But this new research shows that determined “adversaries” can often discover new holes in an AI algorithm’s performance much more quickly and easily than that algorithm can evolve to fix those problems.

And if that’s true in Go—a monstrously complex game that nonetheless has tightly defined rules—it might be even more true in less controlled environments. “The key takeaway for AI is that these vulnerabilities will be difficult to eliminate,” FAR CEO Adam Gleave told Nature. “If we can’t solve the issue in a simple domain like Go, then in the near-term there seems little prospect of patching similar issues like jailbreaks in ChatGPT.”

Still, the researchers aren’t despairing. While none of their methods were able to “make [new] attacks impossible” in Go, their strategies were able to plug up unchanging “fixed” exploits that had been previously identified. That suggests “it may be possible to fully defend a Go AI by training against a large enough corpus of attacks,” they write, with proposals for future research that could make this happen.

Regardless, this new research shows that making AI systems more robust against worst-case scenarios might be at least as valuable as chasing new, more human/superhuman capabilities.

Can you do better than top-level AI models on these basic vision tests?

A bit myopic —

Abstract analysis that is trivial for humans often stymies GPT-4o, Gemini, and Sonnet.

Whatever you do, don’t ask the AI how many horizontal lines are in this image. Credit: Getty Images

In the last couple of years, we’ve seen amazing advancements in AI systems when it comes to recognizing and analyzing the contents of complicated images. But a new paper highlights how many state-of-the-art vision language models (VLMs) often fail at simple, low-level visual analysis tasks that are trivially easy for a human.

In the provocatively titled pre-print paper “Vision language models are blind” (which has a PDF version that includes a dark sunglasses emoji in the title), researchers from Auburn University and the University of Alberta create eight simple visual acuity tests with objectively correct answers. These range from identifying how often two colored lines intersect to identifying which letter in a long word has been circled to counting how many nested shapes exist in an image (representative examples and results can be viewed on the research team’s webpage).

  • If you can solve these kinds of puzzles, you may have better visual reasoning than state-of-the-art AIs.

  • The puzzles on the right are like something out of Highlights magazine.

  • A representative sample shows AI models failing at a task that most human children would find trivial.

Crucially, these tests are generated by custom code and don’t rely on pre-existing images or tests that could be found on the public Internet, thereby “minimiz[ing] the chance that VLMs can solve by memorization,” according to the researchers. The tests also “require minimal to zero world knowledge” beyond basic 2D shapes, making it difficult for the answer to be inferred from “textual question and choices alone” (which has been identified as an issue for some other visual AI benchmarks).
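
The team’s generator isn’t reproduced here, but building this kind of test procedurally is straightforward. Below is a minimal sketch in the same spirit: it draws two random line segments with Pillow and computes the ground-truth answer itself, so the image cannot have appeared anywhere on the web (the colors and sizes are arbitrary choices, not the paper’s).

```python
# Sketch of a procedurally generated "do these lines intersect?" test:
# the image is drawn from random coordinates, so the correct answer is
# known exactly and cannot have been memorized from the public web.
import random
from PIL import Image, ImageDraw

def segments_intersect(p1, p2, p3, p4):
    """True if segment p1-p2 crosses p3-p4 (orientation test; ignores degenerate collinear cases)."""
    def orient(a, b, c):
        return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
    d1, d2 = orient(p3, p4, p1), orient(p3, p4, p2)
    d3, d4 = orient(p1, p2, p3), orient(p1, p2, p4)
    return (d1 * d2 < 0) and (d3 * d4 < 0)

def make_test(size=256):
    pts = [(random.randint(10, size - 10), random.randint(10, size - 10)) for _ in range(4)]
    img = Image.new("RGB", (size, size), "white")
    draw = ImageDraw.Draw(img)
    draw.line([pts[0], pts[1]], fill="red", width=3)
    draw.line([pts[2], pts[3]], fill="blue", width=3)
    answer = segments_intersect(*pts)
    return img, answer  # show img to the model; compare its reply with answer

img, answer = make_test()
img.save("line_test.png")
print("Ground truth - do the lines intersect?", answer)
```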

Are you smarter than a fifth grader?

After running multiple tests across four different visual models—GPT-4o, Gemini-1.5 Pro, Sonnet-3, and Sonnet-3.5—the researchers found all four fell well short of the 100 percent accuracy you might expect for such simple visual analysis tasks (and which most sighted humans would have little trouble achieving). But the size of the AI underperformance varied greatly depending on the specific task. When asked to count the number of rows and columns in a blank grid, for instance, the best-performing model only gave an accurate answer less than 60 percent of the time. On the other hand, Gemini-1.5 Pro hit nearly 93 percent accuracy in identifying circled letters, approaching human-level performance.

  • For some reason, the models tend to incorrectly guess the “o” is circled a lot more often than all the other letters in this test.

  • The models performed perfectly in counting five interlocking circles, a pattern they might be familiar with from common images of the Olympic rings.

  • Do you have an easier time counting columns than rows in a grid? If so, you probably aren’t an AI.

Even small changes to the tasks could lead to huge swings in results. While all four tested models were able to correctly count five overlapping hollow circles, the accuracy across all models dropped to well below 50 percent when six to nine circles were involved. The researchers hypothesize that this “suggests that VLMs are biased towards the well-known Olympic logo, which has 5 circles.” In other cases, models occasionally hallucinated nonsensical answers, such as guessing “9,” “n”, or “©” as the circled letter in the word “Subdermatoglyphic.”

Overall, the results highlight how AI models that can perform well at high-level visual reasoning have some significant “blind spots” (sorry) when it comes to low-level abstract images. It’s all somewhat reminiscent of similar capability gaps that we often see in state-of-the-art large language models, which can create extremely cogent summaries of lengthy texts while at the same time failing extremely basic math and spelling questions.

These gaps in VLM capabilities could come down to the inability of these systems to generalize beyond the kinds of content they are explicitly trained on. Yet when the researchers tried fine-tuning a model using specific images drawn from one of their tasks (the “are two circles touching?” test), that model showed only modest improvement, from 17 percent accuracy up to around 37 percent. “The loss values for all these experiments were very close to zero, indicating that the model overfits the training set but fails to generalize,” the researchers write.

The researchers propose that the VLM capability gap may be related to the so-called “late fusion” of vision encoders onto pre-trained large language models. An “early fusion” training approach that integrates visual encoding alongside language training could lead to better results on these low-level tasks, the researchers suggest (without providing any sort of analysis of this question).
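
The paper doesn’t spell out either training recipe, but the architectural distinction it gestures at looks roughly like the toy PyTorch sketch below: in “late fusion,” a separately trained (and often frozen) vision encoder is bolted onto a pre-trained language model through a small projection layer, and its output is simply prepended as extra tokens. Every module and size here is an arbitrary stand-in, not any real VLM.

```python
# Toy sketch of the "late fusion" pattern the researchers point to: a vision
# encoder trained on its own is attached to a pre-trained LLM via a small
# projection, and its output is prepended as an extra "token."
import torch
import torch.nn as nn

class LateFusionVLM(nn.Module):
    def __init__(self, vis_dim=512, llm_dim=1024, vocab=32000):
        super().__init__()
        self.vision_encoder = nn.Sequential(        # stand-in for a frozen ViT/CNN
            nn.Flatten(), nn.Linear(3 * 32 * 32, vis_dim), nn.GELU()
        )
        for p in self.vision_encoder.parameters():  # frozen: trained before fusion
            p.requires_grad = False
        self.project = nn.Linear(vis_dim, llm_dim)  # the only "glue" that gets trained
        self.token_emb = nn.Embedding(vocab, llm_dim)
        self.llm = nn.TransformerEncoder(           # stand-in for a pre-trained LLM
            nn.TransformerEncoderLayer(llm_dim, nhead=8, batch_first=True), num_layers=2
        )
        self.lm_head = nn.Linear(llm_dim, vocab)

    def forward(self, image, token_ids):
        vis_tok = self.project(self.vision_encoder(image)).unsqueeze(1)  # (B, 1, llm_dim)
        txt_tok = self.token_emb(token_ids)                              # (B, T, llm_dim)
        fused = torch.cat([vis_tok, txt_tok], dim=1)                     # the late-fusion point
        return self.lm_head(self.llm(fused))

# "Early fusion" would instead train the whole stack jointly from the start,
# so the language layers see visual features throughout pre-training.
model = LateFusionVLM()
logits = model(torch.randn(2, 3, 32, 32), torch.randint(0, 32000, (2, 6)))
print(logits.shape)  # (2, 7, 32000): one visual token plus six text tokens
```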

Intuit’s AI gamble: Mass layoff of 1,800 paired with hiring spree

In the name of AI —

Intuit CEO: “Companies that aren’t prepared to take advantage of [AI] will fall behind.”

Signage for financial software company Intuit at the company's headquarters in the Silicon Valley town of Mountain View, California, August 24, 2016.

On Wednesday, Intuit CEO Sasan Goodarzi announced in a letter to the company that it would be laying off 1,800 employees—about 10 percent of its workforce of around 18,000—while simultaneously planning to hire the same number of new workers as part of a major restructuring effort purportedly focused on AI.

“As I’ve shared many times, the era of AI is one of the most significant technology shifts of our lifetime,” wrote Goodarzi in a blog post on Intuit’s website. “This is truly an extraordinary time—AI is igniting global innovation at an incredible pace, transforming every industry and company in ways that were unimaginable just a few years ago. Companies that aren’t prepared to take advantage of this AI revolution will fall behind and, over time, will no longer exist.”

The CEO says Intuit is in a position of strength and that the layoffs are not related to cost-cutting but will instead allow the company to “allocate additional investments to our most critical areas to support our customers and drive growth.” With new hires, the company expects its overall headcount to grow in its 2025 fiscal year.

Intuit’s layoffs (which collectively qualify as a “mass layoff” under the WARN Act) hit various departments within the company and include the closure of Intuit’s offices in Edmonton, Canada, and Boise, Idaho, affecting over 250 employees. Approximately 1,050 of the employees are being let go because they’re “not meeting expectations,” according to Goodarzi’s letter. Intuit has also eliminated more than 300 roles across the company to “streamline” operations and shift resources toward AI, and the company plans to consolidate 80 tech roles to “sites where we are strategically growing our technology teams and capabilities,” such as Atlanta, Bangalore, New York, Tel Aviv, and Toronto.

In turn, the company plans to accelerate investments in its AI-powered financial assistant, Intuit Assist, which provides AI-generated financial recommendations. The company also plans to hire new talent in engineering, product development, data science, and customer-facing roles, with a particular emphasis on AI expertise.

Not just about AI

Despite Goodarzi’s heavily AI-focused message, the restructuring at Intuit reveals a more complex picture. A closer look at the layoffs shows that many of the 1,800 job cuts stem from performance-based departures (such as the aforementioned 1,050). The restructuring also includes a 10 percent reduction in executive positions at the director level and above (“To continue increasing our velocity of decision making,” Goodarzi says).

These numbers suggest that the reorganization may also serve as an opportunity for Intuit to trim its workforce of underperforming staff, using the AI hype cycle as a compelling backdrop for a broader house-cleaning effort.

But as far as CEOs are concerned, it’s always a good time to talk about how they’re embracing the latest, hottest thing in technology: “With the introduction of GenAI,” Goodarzi wrote, “we are now delivering even more compelling customer experiences, increasing monetization potential, and driving efficiencies in how the work gets done within Intuit. But it’s just the beginning of the AI revolution.”

Court ordered penalties for 15 teens who created naked AI images of classmates

Real consequences —

Teens ordered to attend classes on sex education and responsible use of AI.

A Spanish youth court has sentenced 15 minors to one year of probation after they spread AI-generated nude images of female classmates in two WhatsApp groups.

The minors were charged with 20 counts of creating child sex abuse images and 20 counts of offenses against their victims’ moral integrity. In addition to probation, the teens will also be required to attend classes on gender and equality, as well as on the “responsible use of information and communication technologies,” a press release from the Juvenile Court of Badajoz said.

Many of the victims were too ashamed to speak up when the inappropriate fake images began spreading last year. Prior to the sentencing, a mother of one of the victims told The Guardian that girls like her daughter “were completely terrified and had tremendous anxiety attacks because they were suffering this in silence.”

The court confirmed that the teens used artificial intelligence to create images where female classmates “appear naked” by swiping photos from their social media profiles and superimposing their faces on “other naked female bodies.”

Teens using AI to sexualize and harass classmates has become an alarming global trend. Police have probed disturbing cases in both high schools and middle schools in the US, and earlier this year, the European Union proposed expanding its definition of child sex abuse to more effectively “prosecute the production and dissemination of deepfakes and AI-generated material.” Last year, US President Joe Biden issued an executive order urging lawmakers to pass more protections.

In addition to mental health impacts, victims have reported losing trust in classmates who targeted them and wanting to switch schools to avoid further contact with harassers. Others stopped posting photos online and remained fearful that the harmful AI images will resurface.

Minors targeting classmates may not realize exactly how far images can potentially spread when generating fake child sex abuse materials (CSAM); they could even end up on the dark web. An investigation by the United Kingdom-based Internet Watch Foundation (IWF) last year reported that “20,254 AI-generated images were found to have been posted to one dark web CSAM forum in a one-month period,” with more than half determined most likely to be criminal.

IWF warned that it has identified a growing market for AI-generated CSAM and concluded that “most AI CSAM found is now realistic enough to be treated as ‘real’ CSAM.” One “shocked” mother of a female classmate victimized in Spain agreed. She told The Guardian that “if I didn’t know my daughter’s body, I would have thought that image was real.”

More drastic steps to stop deepfakes

While lawmakers struggle to apply existing protections against CSAM to AI-generated images or to update laws to explicitly prosecute the offense, other more drastic solutions to prevent the harmful spread of deepfakes have been proposed.

In an op-ed for The Guardian today, journalist Lucia Osborne-Crowley advocated for laws restricting sites used to both generate and surface deepfake pornography, including regulating this harmful content when it appears on social media sites and search engines. And IWF suggested that, like jurisdictions that restrict sharing bomb-making information, lawmakers could also restrict guides instructing bad actors on how to use AI to generate CSAM.

The Malvaluna Association, which represented families of victims in Spain and broadly advocates for better sex education, told El Diario that beyond more regulations, more education is needed to stop teens motivated to use AI to attack classmates. Because the teens were ordered to attend classes, the association agreed to the sentencing measures.

“Beyond this particular trial, these facts should make us reflect on the need to educate people about equality between men and women,” the Malvaluna Association said. The group urged that today’s kids should not be learning about sex through pornography that “generates more sexism and violence.”

Teens sentenced in Spain were between the ages of 13 and 15. According to the Guardian, Spanish law prevented sentencing of minors under 14, but the youth court “can force them to take part in rehabilitation courses.”

Tech companies could also make it easier to report and remove harmful deepfakes. Ars could not immediately reach Meta for comment on efforts to combat the proliferation of AI-generated CSAM on WhatsApp, the private messaging app that was used to share fake images in Spain.

A WhatsApp FAQ states that “WhatsApp has zero tolerance for child sexual exploitation and abuse, and we ban users when we become aware they are sharing content that exploits or endangers children,” but it does not mention AI.

Three betas in, iOS 18 testers still can’t try out Apple Intelligence features

intel inside? —

Apple has said some features will be available to test “this summer.”

The beta-testing cycle for Apple’s latest operating system updates is in full swing—earlier this week, the third developer betas rolled out for iOS 18, iPadOS 18, macOS 15 Sequoia, and the rest of this fall’s updates. The fourth developer beta ought to be out in a couple of weeks, and it’s reasonably likely to coincide with the first betas that Apple offers to the full public (though the less-stable developer-only betas got significantly more public last year when Apple stopped making people pay for a developer account to access them).

Many of the new updates’ features are present and available to test, including cosmetic updates and under-the-hood improvements. But none of Apple’s much-hyped Apple Intelligence features are available to test in any form. MacRumors reports that Settings menus for the Apple Intelligence features have appeared in the Xcode Simulator for current versions of iOS 18 but, as of now, those settings still appear to be non-functional placeholders that don’t actually do anything.

That may change soon; Apple did say that the first wave of Apple Intelligence features would be available “this summer,” and I would wager a small amount of money on the first ones being available in the public beta builds later this month. But the current state of the betas does reinforce reporting from Bloomberg’s Mark Gurman that suggested Apple was “caught flat-footed” by the tech world’s intense interest in generative AI.

Even when they do arrive, the Apple Intelligence features will be rolled out gradually. Some will be available earlier than others—Gurman recently reported that the new Siri, specifically, might not be available for testing until January and might not actually be ready to launch until sometime in early 2025. The first wave of features will only work in US English, and only relatively recent Apple hardware will be capable of using most of them. For now, that means iPads and Macs with an M-series chip, or the iPhone 15 Pro, though presumably this year’s new crop of Pro and non-Pro iPhones will all be Apple Intelligence-compatible.

Apple’s relatively slow rollout of generative AI features isn’t necessarily a bad thing. Look at Microsoft, which has been repeatedly burned by its desire to rush AI-powered features into its Bing search engine, Edge browser, and Windows operating system. Windows 11’s Recall feature, a comprehensive database of screenshots and text tracking everything that users do on their PCs, was announced and then delayed multiple times after security researchers and other testers demonstrated how it could put users’ personal data at risk.

In bid to loosen Nvidia’s grip on AI, AMD to buy Finnish startup for $665M

AI tech stack —

The acquisition is the largest of its kind in Europe in a decade.

AMD is to buy Finnish artificial intelligence startup Silo AI for $665 million in one of the largest such takeovers in Europe as the US chipmaker seeks to expand its AI services to compete with market leader Nvidia.

California-based AMD said Silo’s 300-member team would use its software tools to build custom large language models (LLMs), the kind of AI technology that underpins chatbots such as OpenAI’s ChatGPT and Google’s Gemini. The all-cash acquisition is expected to close in the second half of this year, subject to regulatory approval.

“This agreement helps us both accelerate our customer engagements and deployments while also helping us accelerate our own AI tech stack,” Vamsi Boppana, senior vice president of AMD’s artificial intelligence group, told the Financial Times.

The acquisition is the largest of a privately held AI startup in Europe since Google acquired UK-based DeepMind for around 400 million pounds in 2014, according to data from Dealroom.

The deal comes at a time when buyouts by Silicon Valley companies have come under tougher scrutiny from regulators in Brussels and the UK. Europe-based AI startups, including Mistral, DeepL, and Helsing, have raised hundreds of millions of dollars this year as investors seek out a local champion to rival US-based OpenAI and Anthropic.

Helsinki-based Silo AI, which is among the largest private AI labs in Europe, offers tailored AI models and platforms to enterprise customers. The Finnish company launched an initiative last year to build LLMs in European languages, including Swedish, Icelandic, and Danish.

AMD’s AI technology competes with that of Nvidia, which has taken the lion’s share of the high-performance chip market. Nvidia’s success has propelled its valuation past $3 trillion this year as tech companies push to build the computing infrastructure needed to power the biggest AI models. AMD started to roll out its MI300 chips late last year in a direct challenge to Nvidia’s “Hopper” line of chips.

Peter Sarlin, Silo AI co-founder and chief executive, called the acquisition the “logical next step” as the Finnish group seeks to become a “flagship” AI company.

Silo AI is committed to “open source” AI models, which are available for free and can be customized by anyone. This distinguishes it from the likes of OpenAI and Google, which favor their own proprietary or “closed” models.

The startup previously described its family of open models, called “Poro,” as an important step toward “strengthening European digital sovereignty” and democratizing access to LLMs.

The concentration of the most powerful LLMs into the hands of a few US-based Big Tech companies is meanwhile attracting attention from antitrust regulators in Washington and Brussels.

The Silo deal shows AMD seeking to scale its business quickly and drive customer engagement with its own offering. AMD views Silo, which builds custom models for clients, as a link between its “foundational” AI software and the real-world applications of the technology.

Software has become a new battleground for semiconductor companies as they try to lock in customers to their hardware and generate more predictable revenues, outside the boom-and-bust chip sales cycle.

Nvidia’s success in the AI market stems from its multibillion-dollar investment in Cuda, its proprietary software that allows chips originally designed for processing computer graphics and video games to run a wider range of applications.

Since starting to develop Cuda in 2006, Nvidia has expanded its software platform to include a range of apps and services, largely aimed at corporate customers that lack the in-house resources and skills that Big Tech companies have to build on its technology.

Nvidia now offers more than 600 “pre-trained” models, meaning they are simpler for customers to deploy. The Santa Clara, California-based group last month started rolling out a “microservices” platform, called NIM, which promises to let developers build chatbots and AI “co-pilot” services quickly.

Historically, Nvidia has offered its software free of charge to buyers of its chips, but said this year that it planned to charge for products such as NIM.

AMD is among several companies contributing to the development of an OpenAI-led rival to Cuda, called Triton, which would let AI developers switch more easily between chip providers. Meta, Microsoft, and Intel have also worked on Triton.

© 2024 The Financial Times Ltd. All rights reserved. Please do not copy and paste FT articles and redistribute by email or post to the web.

ChatGPT’s much-heralded Mac app was storing conversations as plain text

Seriously? —

The app was updated to address the issue after it gained public attention.

The app lets you invoke ChatGPT from anywhere in the system with a keyboard shortcut, Spotlight-style. Credit: Samuel Axon

OpenAI announced its Mac desktop app for ChatGPT with a lot of fanfare a few weeks ago, but it turns out it had a rather serious security issue: user chats were stored in plain text, where any bad actor could find them if they gained access to your machine.

As Threads user Pedro José Pereira Vieito noted earlier this week, “the OpenAI ChatGPT app on macOS is not sandboxed and stores all the conversations in plain-text in a non-protected location,” meaning “any other running app / process / malware can read all your ChatGPT conversations without any permission prompt.”

He added:

macOS has blocked access to any user private data since macOS Mojave 10.14 (6 years ago!). Any app accessing private user data (Calendar, Contacts, Mail, Photos, any third-party app sandbox, etc.) now requires explicit user access.

OpenAI chose to opt-out of the sandbox and store the conversations in plain text in a non-protected location, disabling all of these built-in defenses.

OpenAI has now updated the app, and the local chats are now encrypted, though they are still not sandboxed. (The app is only available as a direct download from OpenAI’s website and is not available through Apple’s App Store where more stringent security is required.)
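
OpenAI hasn’t described how the updated app encrypts its history, but the general fix is standard: encrypt the data before it touches disk and keep the key out of the data’s directory (on macOS, ideally in the Keychain). Below is a generic sketch using Python’s cryptography package, purely for illustration and not a description of the ChatGPT app’s actual implementation.

```python
# Generic illustration of "encryption at rest" for a local chat log.
# This is NOT OpenAI's implementation; in a real macOS app the key would
# live in the Keychain rather than in a file next to the data.
import json
from pathlib import Path
from cryptography.fernet import Fernet

KEY_PATH = Path("chat_key.bin")      # stand-in for a platform key store
LOG_PATH = Path("conversations.enc")

def load_key() -> bytes:
    if KEY_PATH.exists():
        return KEY_PATH.read_bytes()
    key = Fernet.generate_key()
    KEY_PATH.write_bytes(key)
    return key

def save_conversations(conversations: list[dict]) -> None:
    token = Fernet(load_key()).encrypt(json.dumps(conversations).encode())
    LOG_PATH.write_bytes(token)      # ciphertext on disk, not plain text

def load_conversations() -> list[dict]:
    return json.loads(Fernet(load_key()).decrypt(LOG_PATH.read_bytes()))

save_conversations([{"role": "user", "content": "hello"}])
print(load_conversations())
```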

Many people now use ChatGPT like they might use Google: to ask important questions, sort through issues, and so on. Often, sensitive personal data could be shared in those conversations.

It’s not a great look for OpenAI, which recently entered into a partnership with Apple to offer chatbot services built into Siri queries in Apple operating systems. Apple detailed some of the security around those queries at WWDC last month, though, and those measures are more stringent than what OpenAI did (or, to be more precise, didn’t do) with its Mac app, which is a separate initiative from the partnership.

If you’ve been using the app recently, be sure to update it as soon as possible.

AI trains on kids’ photos even when parents use strict privacy settings

“Outrageous” —

Even unlisted YouTube videos are used to train AI, watchdog warns.

Human Rights Watch (HRW) continues to reveal how photos of real children casually posted online years ago are being used to train AI models powering image generators—even when platforms prohibit scraping and families use strict privacy settings.

Last month, HRW researcher Hye Jung Han found 170 photos of Brazilian kids that were linked in LAION-5B, a popular AI dataset built from Common Crawl snapshots of the public web. Now, she has released a second report, flagging 190 photos of children from all of Australia’s states and territories, including indigenous children who may be particularly vulnerable to harms.

These photos are linked in the dataset “without the knowledge or consent of the children or their families.” They span the entirety of childhood, making it possible for AI image generators to generate realistic deepfakes of real Australian children, Han’s report said. Perhaps even more concerning, the URLs in the dataset sometimes reveal identifying information about children, including their names and locations where photos were shot, making it easy to track down children whose images might not otherwise be discoverable online.

That puts children in danger of privacy and safety risks, Han said, and some parents thinking they’ve protected their kids’ privacy online may not realize that these risks exist.

From a single link to one photo that showed “two boys, ages 3 and 4, grinning from ear to ear as they hold paintbrushes in front of a colorful mural,” Han could trace “both children’s full names and ages, and the name of the preschool they attend in Perth, in Western Australia.” And perhaps most disturbingly, “information about these children does not appear to exist anywhere else on the Internet”—suggesting that families were particularly cautious in shielding these boys’ identities online.

Stricter privacy settings were used in another image that Han found linked in the dataset. The photo showed “a close-up of two boys making funny faces, captured from a video posted on YouTube of teenagers celebrating” during the week after their final exams, Han reported. Whoever posted that YouTube video adjusted privacy settings so that it would be “unlisted” and would not appear in searches.

Only someone with a link to the video was supposed to have access, but that didn’t stop Common Crawl from archiving the image, nor did YouTube policies prohibiting AI scraping or harvesting of identifying information.

Reached for comment, YouTube’s spokesperson, Jack Malon, told Ars that YouTube has “been clear that the unauthorized scraping of YouTube content is a violation of our Terms of Service, and we continue to take action against this type of abuse.” But Han worries that even if YouTube did join efforts to remove images of children from the dataset, the damage has been done, since AI tools have already trained on them. That’s why—even more than parents need tech companies to up their game blocking AI training—kids need regulators to intervene and stop training before it happens, Han’s report said.

Han’s report comes a month before Australia is expected to release a reformed draft of the country’s Privacy Act. Those reforms include a draft of Australia’s first child data protection law, known as the Children’s Online Privacy Code, but Han told Ars that even people involved in long-running discussions about reforms aren’t “actually sure how much the government is going to announce in August.”

“Children in Australia are waiting with bated breath to see if the government will adopt protections for them,” Han said, emphasizing in her report that “children should not have to live in fear that their photos might be stolen and weaponized against them.”

AI uniquely harms Australian kids

To hunt down the photos of Australian kids, Han “reviewed fewer than 0.0001 percent of the 5.85 billion images and captions contained in the data set.” Because her sample was so small, Han expects that her findings represent a significant undercount of how many children could be impacted by the AI scraping.

“It’s astonishing that out of a random sample size of about 5,000 photos, I immediately fell into 190 photos of Australian children,” Han told Ars. “You would expect that there would be more photos of cats than there are personal photos of children,” since LAION-5B is a “reflection of the entire Internet.”

LAION is working with HRW to remove links to all the images flagged, but cleaning up the dataset does not seem to be a fast process. Han told Ars that based on her most recent exchange with the German nonprofit, LAION had not yet removed links to photos of Brazilian kids that she reported a month ago.

LAION declined Ars’ request for comment.

In June, LAION’s spokesperson, Nathan Tyler, told Ars that, “as a nonprofit, volunteer organization,” LAION is committed to doing its part to help with the “larger and very concerning issue” of misuse of children’s data online. But removing links from the LAION-5B dataset does not remove the images online, Tyler noted, where they can still be referenced and used in other AI datasets, particularly those relying on Common Crawl. And Han pointed out that removing the links from the dataset doesn’t change AI models that have already trained on them.

“Current AI models cannot forget data they were trained on, even if the data was later removed from the training data set,” Han’s report said.

Kids whose images are used to train AI models are exposed to a variety of harms, Han reported, including a risk that image generators could more convincingly create harmful or explicit deepfakes. In Australia last month, “about 50 girls from Melbourne reported that photos from their social media profiles were taken and manipulated using AI to create sexually explicit deepfakes of them, which were then circulated online,” Han reported.

For First Nations children—”including those identified in captions as being from the Anangu, Arrernte, Pitjantjatjara, Pintupi, Tiwi, and Warlpiri peoples”—the inclusion of links to photos threatens unique harms. Because culturally, First Nations peoples “restrict the reproduction of photos of deceased people during periods of mourning,” Han said the AI training could perpetuate harms by making it harder to control when images are reproduced.

Once an AI model trains on the images, there are other obvious privacy risks, including a concern that AI models are “notorious for leaking private information,” Han said. Guardrails added to image generators do not always prevent these leaks, with some tools “repeatedly broken,” Han reported.

LAION recommends that, if troubled by the privacy risks, parents remove images of kids online as the most effective way to prevent abuse. But Han told Ars that’s “not just unrealistic, but frankly, outrageous.”

“The answer is not to call for children and parents to remove wonderful photos of kids online,” Han said. “The call should be [for] some sort of legal protections for these photos, so that kids don’t have to always wonder if their selfie is going to be abused.”

Google’s greenhouse gas emissions jump 48% in five years

computationally intensive means energy intensive —

Google’s 2030 “Net zero” target looks increasingly doubtful as AI use soars.

Cooling pipes at a Google data center in Douglas County, Georgia.

Google’s greenhouse gas emissions have surged 48 percent in the past five years due to the expansion of its data centers that underpin artificial intelligence systems, leaving its commitment to get to “net zero” by 2030 in doubt.

The Silicon Valley company’s pollution amounted to 14.3 million tonnes of carbon equivalent in 2023, a 48 percent increase from its 2019 baseline and a 13 percent rise since last year, Google said in its annual environmental report on Tuesday.
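
Those two percentages pin down the earlier figures, for anyone wanting the rough arithmetic (derived only from the numbers Google reported):

```python
# Back-of-the-envelope check of the figures above (all values in million
# tonnes of CO2 equivalent, rounded; derived only from the stated percentages).
emissions_2023 = 14.3
baseline_2019 = emissions_2023 / 1.48   # "48 percent increase from its 2019 baseline"
emissions_2022 = emissions_2023 / 1.13  # "13 percent rise since last year"
print(f"Implied 2019 baseline: ~{baseline_2019:.1f} MtCO2e")   # ~9.7
print(f"Implied 2022 figure:   ~{emissions_2022:.1f} MtCO2e")  # ~12.7
```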

Google said the jump highlighted “the challenge of reducing emissions” at the same time as it invests in the build-out of large language models and their associated applications and infrastructure, admitting that “the future environmental impact of AI” was “complex and difficult to predict.”

Chief Sustainability Officer Kate Brandt said the company remained committed to the 2030 target but stressed the “extremely ambitious” nature of the goal.

“We do still expect our emissions to continue to rise before dropping towards our goal,” said Brandt.

She added that Google was “working very hard” on reducing its emissions, including by signing deals for clean energy. There was also a “tremendous opportunity for climate solutions that are enabled by AI,” said Brandt.

As Big Tech giants including Google, Amazon, and Microsoft have outlined plans to invest tens of billions of dollars into AI, climate experts have raised concerns about the environmental impacts of the power-intensive tools and systems.

In May, Microsoft admitted that its emissions had risen by almost a third since 2020, in large part due to the construction of data centers. However, Microsoft co-founder Bill Gates last week also argued that AI would help propel climate solutions.

Meanwhile, energy generation and transmission constraints are already posing a challenge for the companies seeking to build out the new technology. Analysts at Bernstein said in June that AI would “double the rate of US electricity demand growth and total consumption could outstrip current supply in the next two years.”

In Tuesday’s report, Google said its 2023 energy-related emissions—which come primarily from data center electricity consumption—rose 37 percent year on year and overall represented a quarter of its total greenhouse gas emissions.

Google’s supply chain emissions—its largest chunk, representing 75 percent of its total emissions—also rose 8 percent. Google said they would “continue to rise in the near term,” in part as a result of the build-out of the infrastructure needed to run AI systems.

Google has pledged to achieve net zero across its direct and indirect greenhouse gas emissions by 2030 and to run on carbon-free energy during every hour of every day within each grid it operates by the same date.

However, the company warned in Tuesday’s report that the “termination” of some clean energy projects during 2023 had pushed down the amount of renewables it had access to.

Meanwhile, the company’s data center electricity consumption had “outpaced” Google’s ability to bring more clean power projects online in the US and Asia-Pacific regions.

Google’s data center electricity consumption increased 17 percent in 2023, and amounted to approximately 7-10 percent of global data center electricity consumption, the company estimated. Its data centers also consumed 17 percent more water in 2023 than during the previous year, Google said.

© 2024 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.

Lightening the load: AI helps exoskeleton work with different strides

One model to rule them all —

A model trained in a virtual environment does remarkably well in the real world.

Right now, the software doesn’t do arms, so don’t go taking on any aliens with it. Credit: 20th Century Fox

Exoskeletons today look like something straight out of sci-fi. But the reality is they are nowhere near as robust as their fictional counterparts. They’re quite wobbly, and they require long hours of handcrafting the software policies that regulate how they work—a process that has to be repeated for each individual user.

To bring the technology a bit closer to Avatar’s Skel Suits or Warhammer 40k power armor, a team at North Carolina State University’s Lab of Biomechatronics and Intelligent Robotics used AI to build the first one-size-fits-all exoskeleton that supports walking, running, and stair-climbing. Critically, its software adapts itself to new users with no need for any user-specific adjustments. “You just wear it and it works,” says Hao Su, an associate professor and co-author of the study.

Tailor-made robots

An exoskeleton is a robot you wear to aid your movements—it makes walking, running, and other activities less taxing, the same way an e-bike adds extra watts on top of those you generate yourself, making pedaling easier. “The problem is, exoskeletons have a hard time understanding human intentions, whether you want to run or walk or climb stairs. It’s solved with locomotion recognition: systems that recognize human locomotion intentions,” says Su.

Building those locomotion recognition systems currently relies on elaborate policies that define what actuators in an exoskeleton need to do in each possible scenario. “Let’s take walking. The current state of the art is we put the exoskeleton on you and you walk on a treadmill for an hour. Based on that, we try to adjust its operation to your individual set of movements,” Su explains.

Building handcrafted control policies and doing long human trials for each user makes exoskeletons super expensive, with prices reaching $200,000 or more. So, Su’s team used AI to automatically generate control policies and eliminate human training. “I think within two or three years, exoskeletons priced between $2,000 and $5,000 will be absolutely doable,” Su claims.

His team hopes these savings will come from developing the exoskeleton control policy using a digital model, rather than living, breathing humans.

Digitizing robo-aided humans

Su’s team started by building digital models of a human musculoskeletal system and an exoskeleton robot. Then they used multiple neural networks that operated each component. One was running the digitized model of a human skeleton, moved by simplified muscles. The second neural network was running the exoskeleton model. Finally, the third neural net was responsible for imitating motion—basically predicting how a human model would move wearing the exoskeleton and how the two would interact with each other. “We trained all three neural networks simultaneously to minimize muscle activity,” says Su.

One problem the team faced is that exoskeleton studies typically use a performance metric based on metabolic rate reduction. “Humans, though, are incredibly complex, and it is very hard to build a model with enough fidelity to accurately simulate metabolism,” Su explains. Luckily, according to the team, reducing muscle activations is rather tightly correlated with metabolic rate reduction, so it kept the digital model’s complexity within reasonable limits. The training of the entire human-exoskeleton system with all three neural networks took roughly eight hours on a single RTX 3090 GPU. And the results were record-breaking.
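
The study’s full reinforcement-learning setup isn’t reproduced here, but the co-training idea can be sketched at a toy scale. In the PyTorch snippet below, three small networks stand in for the motion-imitation, muscle, and exoskeleton-controller models, and a made-up surrogate loss stands in for the real musculoskeletal simulation; nothing here reflects the paper’s actual architectures or dynamics.

```python
# Heavily simplified sketch of the simulation-only training idea: three
# networks (human motion imitation, muscle model, exoskeleton controller)
# updated together to minimize simulated muscle activity. The "dynamics"
# here are dummy layers standing in for a real musculoskeletal simulator.
import torch
import torch.nn as nn

state_dim, act_dim, n_muscles = 32, 8, 16

motion_imitator = nn.Sequential(nn.Linear(state_dim, 64), nn.Tanh(), nn.Linear(64, state_dim))
muscle_model    = nn.Sequential(nn.Linear(state_dim + act_dim, 64), nn.Tanh(),
                                nn.Linear(64, n_muscles), nn.Sigmoid())  # activations in [0, 1]
exo_controller  = nn.Sequential(nn.Linear(state_dim, 64), nn.Tanh(), nn.Linear(64, act_dim))

params = (list(motion_imitator.parameters()) + list(muscle_model.parameters())
          + list(exo_controller.parameters()))
opt = torch.optim.Adam(params, lr=1e-3)

for step in range(1000):
    state = torch.randn(128, state_dim)     # stand-in for simulated gait states
    next_state = motion_imitator(state)     # predicted human motion
    torque = exo_controller(state)          # exoskeleton assistance
    activations = muscle_model(torch.cat([next_state, torque], dim=-1))
    # Surrogate objective: keep the motion plausible while minimizing muscle effort.
    loss = activations.pow(2).mean() + 0.1 * (next_state - state).pow(2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```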

Bridging the sim-to-real gap

After the neural networks had developed the control policies for the digital exoskeleton model in simulation, Su’s team simply copy-pasted the control policy to a real controller running a real exoskeleton. Then, they tested how an exoskeleton trained this way would work with 20 different participants. The average metabolic rate reduction was over 24 percent for walking, over 13 percent for running, and 15.4 percent for stair climbing—all record numbers, meaning their exoskeleton beat every other exoskeleton ever made in each category.

This was achieved without needing any tweaks to fit it to individual gaits. But the neural networks’ magic didn’t end there.

“The problem with traditional, handcrafted policies was that it was just telling it ‘if walking is detected do one thing; if walking faster is detected do another thing.’ These were [a mix of] finite state machines and switch controllers. We introduced end-to-end continuous control,” says Su. What this continuous control meant was that the exoskeleton could follow the human body as it made smooth transitions between different activities—from walking to running, from running to climbing stairs, etc. There was no abrupt mode switching.
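
To make that contrast concrete, here is a toy comparison of the two styles: a handcrafted finite state machine that switches rules when it detects an activity, versus a single continuous policy that maps sensor readings straight to assistance torque. The numbers and the small network are placeholders, not the paper’s controller.

```python
# Toy contrast between mode-switching control and end-to-end continuous control.
import torch
import torch.nn as nn

def handcrafted_controller(mode: str, phase: float) -> float:
    """Old style: a finite state machine with one hand-tuned rule per detected activity."""
    if mode == "walk":
        return 5.0 * phase
    if mode == "run":
        return 9.0 * phase
    if mode == "stairs":
        return 12.0 * phase
    return 0.0  # unknown mode: no assistance, and abrupt jumps between rules

continuous_policy = nn.Sequential(   # new style: one network, no mode detection
    nn.Linear(6, 32), nn.Tanh(), nn.Linear(32, 1)
)

sensors = torch.randn(1, 6)          # joint angles / velocities from the exoskeleton
torque = continuous_policy(sensors)  # smooth output even mid-transition between gaits
print(torque.item())
```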

“In terms of software, I think everyone will be using this neural network-based approach soon,” Su claims. To improve the exoskeletons in the future, his team wants to make them quieter, lighter, and more comfortable.

But the plan is also to make them work for people who need them the most. “The limitation now is that we tested these exoskeletons with able-bodied participants, not people with gait impairments. So, what we want to do is something they did in another exoskeleton study at Stanford University. We would take a one-minute video of you walking, and based on that, we would build a model to individualize our general model. This should work well for people with impairments like knee arthritis,” Su claims.

Nature, 2024.  DOI: 10.1038/s41586-024-07382-4
