
Google Translate just nearly doubled its number of supported languages

Large language models —

This includes common languages like Cantonese and lesser-known ones like Manx.

The logo for PaLM 2, a Google large language model. Credit: Google

Google announced today that it has added support for 110 new languages to Google Translate, nearly doubling the number of languages that can be translated.

The company used the PaLM 2 large language model to facilitate these additions.

In a blog post, Google Senior Software Engineer Isaac Caswell claimed that the newly added languages are spoken by more than 614 million people, or about 8 percent of the global population.

He noted that about a quarter of the languages originate in Africa, “representing our largest expansion of African languages to date.”

The blog post also went into some light detail about Google’s philosophy for choosing languages and for deciding which dialects to support:

Languages have an immense amount of variation: regional varieties, dialects, different spelling standards. In fact, many languages have no one standard form, so it’s impossible to pick a “right” variety. Our approach has been to prioritize the most commonly used varieties of each language. For example, Romani is a language that has many dialects all throughout Europe. Our models produce text that is closest to Southern Vlax Romani, a commonly used variety online. But it also mixes in elements from others, like Northern Vlax and Balkan Romani.

This update brings the total number of languages supported by Google Translate to 243, which is just the beginning of its publicized initiative to ultimately support 1,000 languages through the use of AI. You can see the full list of languages added in a help page published by Google.

By contrast, Apple Translate supports 21 languages, though that number includes both US and UK English as distinct options. Apple recently announced plans to add Hindi to its Translate app. Of course, Apple and Google take very different approaches to—and have different levels of investment in—these tools.


Microsoft risks huge fine over “possibly abusive” bundling of Teams and Office

A screen shows a virtual meeting with Microsoft Teams at a conference on January 30, 2024 in Barcelona, Spain.

Microsoft may be hit with a massive fine in the European Union for “possibly abusively” bundling Teams with its Office 365 and Microsoft 365 software suites for businesses.

On Tuesday, the European Commission (EC) announced preliminary findings of an investigation into whether Microsoft’s “suite-centric business model combining multiple types of software in a single offering” unfairly shut out rivals in the “software as a service” (SaaS) market.

“Since at least April 2019,” the EC found, Microsoft’s practice of “tying Teams with its core SaaS productivity applications” potentially restricted competition in the “market for communication and collaboration products.”

The EC is also “concerned” that the practice may have helped Microsoft defend its dominant market position by shutting out “competing suppliers of individual software” like Slack and German video-conferencing software Alfaview. Makers of those rival products had complained to the EC last year, setting off the ongoing probe into Microsoft’s bundling.

Customers should have choices, the EC said, and seemingly at every step, Microsoft sought instead to lock customers into using only its software.

“Microsoft may have granted Teams a distribution advantage by not giving customers the choice whether or not to acquire access to Teams when they subscribe to their SaaS productivity applications,” the EC wrote. This alleged abusive practice “may have been further exacerbated by interoperability limitations between Teams’ competitors and Microsoft’s offerings.”

For Microsoft, the EC’s findings are likely not entirely unexpected, although Tuesday’s announcement must be disappointing. The company had been hoping to avoid further scrutiny by introducing some major changes last year. Most drastically, Microsoft began “offering some suites without Teams,” the EC said, but even that wasn’t enough to appease EU regulators.

“The Commission preliminarily finds that these changes are insufficient to address its concerns and that more changes to Microsoft’s conduct are necessary to restore competition,” the EC said, concluding that “the conduct may have prevented Teams’ rivals from competing, and in turn innovating, to the detriment of customers in the European Economic Area.”

Microsoft will now be given an opportunity to defend its practices. If the company is unsuccessful, it risks a fine of up to 10 percent of its annual worldwide turnover, along with an order that could change how it conducts business.

In a statement to Ars, Microsoft President Brad Smith confirmed that the tech giant would work with the commission to figure out a better solution.

“Having unbundled Teams and taken initial interoperability steps, we appreciate the additional clarity provided today and will work to find solutions to address the commission’s remaining concerns,” Smith said.

The EC’s executive vice-president in charge of competition policy, Margrethe Vestager, explained in a statement why the commission refuses to back down from closely scrutinizing Microsoft’s alleged unfair practices.

“We are concerned that Microsoft may be giving its own communication product Teams an undue advantage over competitors by tying it to its popular productivity suites for businesses,” Vestager said. “And preserving competition for remote communication and collaboration tools is essential as it also fosters innovation” in these markets.

Changes coming to EU antitrust law in 2025

The EC initially launched its investigation into Microsoft’s allegedly abusive Teams bundling last July. Its probe came after Slack and Alfaview makers complained that Microsoft may be violating Article 102 of the Treaty on the Functioning of the European Union (TFEU), “which prohibits the abuse of a dominant market position.”

Nearly one year later, there’s no telling when the EC’s inquiry into Microsoft Teams will end. Microsoft will have a chance to review all evidence of infringement gathered by EU regulators to form its response. After that, the EC will review any additional evidence before making its decision, and there is no legal deadline to complete the antitrust inquiry, the EC said.

The EC’s decision may come next year, when the EU is preparing to release new guidance to enforce its abuse-of-dominance rules more “vigorously” and effectively.

Last March, the EC called for stakeholder feedback after rolling out “the first major policy initiative in the area of abuse of dominance rules.” The initiative sought to update the commission’s guidance on abuse of dominance for the first time since 2008, based on a review of relevant case law.

“A robust enforcement of rules on abuse of dominance benefits both consumers and a stronger European economy,” Vestager said at that time. “We have carefully analyzed numerous EU court judgments on the application of Article 102, and it is time for us to start working on guidelines reflecting this case law.”


Political deepfakes are the most popular way to misuse AI

This is not going well —

Study from Google’s DeepMind lays out nefarious ways AI is being used.


Artificial intelligence-generated “deepfakes” that impersonate politicians and celebrities are far more prevalent than efforts to use AI to assist cyber attacks, according to the first research by Google’s DeepMind division into the most common malicious uses of the cutting-edge technology.

The study said the creation of realistic but fake images, video, and audio of people was almost twice as common as the next highest misuse of generative AI tools: the falsifying of information using text-based tools, such as chatbots, to generate misinformation to post online.

The most common goal of actors misusing generative AI was to shape or influence public opinion, the analysis, conducted with the search group’s research and development unit Jigsaw, found. That accounted for 27 percent of uses, feeding into fears over how deepfakes might influence elections globally this year.

Deepfakes of UK Prime Minister Rishi Sunak, as well as other global leaders, have appeared on TikTok, X, and Instagram in recent months. UK voters go to the polls next week in a general election.

Concern is widespread that, despite social media platforms’ efforts to label or remove such content, audiences may not recognize these as fake, and dissemination of the content could sway voters.

Ardi Janjeva, research associate at The Alan Turing Institute, called “especially pertinent” the paper’s finding that the contamination of publicly accessible information with AI-generated content could “distort our collective understanding of sociopolitical reality.”

Janjeva added: “Even if we are uncertain about the impact that deepfakes have on voting behavior, this distortion may be harder to spot in the immediate term and poses long-term risks to our democracies.”

The study is the first of its kind by DeepMind, Google’s AI unit led by Sir Demis Hassabis, and is an attempt to quantify the risks from the use of generative AI tools, which the world’s biggest technology companies have rushed out to the public in search of huge profits.

As generative products such as OpenAI’s ChatGPT and Google’s Gemini become more widely used, AI companies are beginning to monitor the flood of misinformation and other potentially harmful or unethical content created by their tools.

In May, OpenAI released research revealing operations linked to Russia, China, Iran, and Israel had been using its tools to create and spread disinformation.

“There had been a lot of understandable concern around quite sophisticated cyber attacks facilitated by these tools,” said Nahema Marchal, lead author of the study and researcher at Google DeepMind. “Whereas what we saw were fairly common misuses of GenAI [such as deepfakes that] might go under the radar a little bit more.”

Google DeepMind and Jigsaw’s researchers analyzed around 200 observed incidents of misuse between January 2023 and March 2024, taken from social media platforms X and Reddit, as well as online blogs and media reports of misuse.


The second most common motivation behind misuse was to make money, whether offering services to create deepfakes, including generating naked depictions of real people, or using generative AI to create swaths of content, such as fake news articles.

The research found that most incidents use easily accessible tools, “requiring minimal technical expertise,” meaning more bad actors can misuse generative AI.

Google DeepMind’s research will influence how the company improves its safety evaluations for its models, and it hopes the work will also affect how competitors and other stakeholders view the ways “harms are manifesting.”

© 2024 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.


Music industry giants allege mass copyright violation by AI firms

No one wants to be defeated —

Suno and Udio could face damages of up to $150,000 per song allegedly infringed.

Michael Jackson in concert, 1986. Sony Music owns a large portion of publishing rights to Jackson’s music.

Universal Music Group, Sony Music, and Warner Records have sued AI music-synthesis companies Udio and Suno for allegedly committing mass copyright infringement by using recordings owned by the labels to train music-generating AI models, reports Reuters. Udio and Suno can generate novel song recordings based on text-based descriptions of music (i.e., “a dubstep song about Linus Torvalds”).

The lawsuits, filed in federal courts in New York and Massachusetts, claim that the AI companies’ use of copyrighted material to train their systems could lead to AI-generated music that directly competes with and potentially devalues the work of human artists.

Like other generative AI models, both Udio and Suno (which we covered separately in April) rely on a broad selection of existing human-created artworks that teach a neural network the relationship between words in a written prompt and styles of music. The record labels correctly note that these companies have been deliberately vague about the sources of their training data.

Until generative AI models hit the mainstream in 2022, it was common practice in machine learning to scrape and use copyrighted information without seeking permission to do so. But now that the applications of those technologies have become commercial products themselves, rightsholders have come knocking to collect. In the case of Udio and Suno, the record labels are seeking statutory damages of up to $150,000 per song used in training.
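For a rough sense of the stakes, here is a back-of-the-envelope sketch of that exposure; the song counts below are purely hypothetical, since the complaints’ actual training-set figures aren’t public.

```python
# Back-of-the-envelope statutory damages exposure under 17 U.S.C. § 504(c),
# which allows up to $150,000 per work for willful infringement.
# The song counts below are hypothetical illustrations, not figures from the suits.
MAX_STATUTORY_PER_WORK = 150_000

for songs in (100, 1_000, 10_000):
    exposure = songs * MAX_STATUTORY_PER_WORK
    print(f"{songs:>6,} songs -> up to ${exposure:,} in statutory damages")

# Output:
#    100 songs -> up to $15,000,000 in statutory damages
#  1,000 songs -> up to $150,000,000 in statutory damages
# 10,000 songs -> up to $1,500,000,000 in statutory damages
```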

In the lawsuits, the record labels cite specific examples of AI-generated content that allegedly re-creates elements of well-known songs, including The Temptations’ “My Girl,” Mariah Carey’s “All I Want for Christmas Is You,” and James Brown’s “I Got You (I Feel Good).” The labels also claim the music-synthesis models can produce vocals resembling those of famous artists, such as Michael Jackson and Bruce Springsteen.

Reuters claims it’s the first instance of lawsuits specifically targeting music-generating AI, but music companies and artists alike have been gearing up to deal with challenges the technology may pose for some time.

In May, Sony Music sent warning letters to over 700 AI companies (including OpenAI, Microsoft, Google, Suno, and Udio) and music-streaming services that prohibited any AI researchers from using its music to train AI models. In April, over 200 musical artists signed an open letter that called on AI companies to stop using AI to “devalue the rights of human artists.” And last November, Universal Music filed a copyright infringement lawsuit against Anthropic for allegedly including artists’ lyrics in its Claude LLM training data.

Similar to The New York Times’ lawsuit against OpenAI over the use of training data, the outcome of the record labels’ new suit could have deep implications for the future development of generative AI in creative fields, including requiring companies to license all musical training data used in creating music-synthesis models.

Compulsory licenses for AI training data could make AI model development economically impractical for small startups like Udio and Suno—and judging by the aforementioned open letter, many musical artists may applaud that potential outcome. But such a development would not preclude major labels from eventually developing their own AI music generators, allowing only large corporations with deep pockets to control generative music tools for the foreseeable future.


Win+C, Windows’ most cursed keyboard shortcut, is getting retired again

What job will Win+C lose next? —

Win+C has been assigned to some of Windows’ least successful features.

A rendering of the Copilot button. Credit: Microsoft

Microsoft is all-in on its Copilot+ PC push right now, but the fact is that those machines will be an extremely small minority of the PC install base for the foreseeable future. The program’s stringent hardware requirements—16GB of RAM, at least 256GB of storage, and a fast neural processing unit (NPU)—disqualify all but brand-new PCs, which keeps features like Recall from running on the Windows 11 PCs people already own.

But the Copilot chatbot remains supported on all Windows 11 PCs (and most Windows 10 PCs), and a change Microsoft has made to recent Windows 11 Insider Preview builds is actually making the feature less useful and accessible than it is in the current publicly available versions of Windows. Copilot is being changed from a persistent sidebar into an app window that can be resized, minimized, and pinned and unpinned from the taskbar, just like any other app. But at least as of this writing, this version of Copilot can no longer adjust Windows’ settings, and it’s no longer possible to call it up with the Windows+C keyboard shortcut. Only newer keyboards with the dedicated Copilot key will have an easy built-in keyboard shortcut for summoning Copilot.

If Microsoft keeps these changes intact, they’ll hit Windows 11 PCs when the 24H2 update is released to the general public later this year; the changes are already present on Copilot+ PCs, which run a version of Windows 11 24H2 out of the box.

Changing how Copilot works is all well and good—despite how quickly Microsoft has pushed it out to every Windows PC in existence, it has been labeled a “preview” up until the 24H2 update, and some amount of change is to be expected. But discontinuing the just-introduced Win+C keyboard shortcut to launch Copilot feels pointless, especially since the Win+C shortcut isn’t being reassigned.

The Copilot assistant exists on the taskbar, so it’s not as though it’s difficult to access, but the feature is apparently important enough to merit the first major change to Windows keyboards in three decades. Surely it also justifies retaining a keyboard shortcut for the vast majority of PC keyboards without a dedicated Copilot key.

People who want to continue to use Win+C as a launch key for Copilot can do so with custom keyboard remappers like Microsoft’s own Keyboard Manager PowerToy. Simply set Win+C as a shortcut for the obscure Win+Shift+F23 shortcut that the hardware Copilot key is already mapped to and you’ll be back in business.
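For those who’d rather script the change, here is a minimal sketch of that remap written directly into Keyboard Manager’s config file. The file path and JSON schema below are assumptions based on recent PowerToys builds and may change between releases, so the Keyboard Manager GUI remains the safer route. (Virtual-key codes: 91 = Left Win, 160 = Left Shift, 67 = C, 134 = F23.)

```python
import json
import os

# Assumed location of PowerToys Keyboard Manager's config in recent builds;
# verify against your own installation before running.
CONFIG = os.path.expandvars(
    r"%LOCALAPPDATA%\Microsoft\PowerToys\Keyboard Manager\default.json"
)

# Map Win+C (VK 91;67) to Win+Shift+F23 (VK 91;160;134), the chord the
# hardware Copilot key already sends, so Win+C summons Copilot again.
remap = {"originalKeys": "91;67", "newRemapKeys": "91;160;134"}

with open(CONFIG, "r", encoding="utf-8") as f:
    settings = json.load(f)

# Append the shortcut remap to the global list (created if absent).
settings.setdefault("remapShortcuts", {}).setdefault("global", []).append(remap)

with open(CONFIG, "w", encoding="utf-8") as f:
    json.dump(settings, f, indent=4)

print("Remapped Win+C -> Win+Shift+F23; restart PowerToys to apply.")
```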

Win+C has a complicated history

Win+C always seems to get associated with transient, unsuccessful Windows features like Charms and Cortana. Credit: Andrew Cunningham

The Win+C keyboard shortcut actually has a bit of a checkered history, having been reassigned over the years to multiple less-than-successful Windows initiatives. In Windows 8, it made its debut as a shortcut for the “Charms” menu, part of the operating system’s tablet-oriented user interface that was designed to partially replace the old Start menu. But Windows 10 retreated from this new tablet UI, and the Charms bar was discontinued.

In Windows 10, Win+C was assigned to the Cortana voice assistant instead, Microsoft’s contribution to the early-2010s voice assistant boom kicked off by Apple’s Siri and refined by competitors like Amazon’s Alexa. But Cortana, like the Charms bar, never really took off, and Microsoft switched the voice assistant off in 2023 after a few years of steadily deprioritizing it in Windows 10 (and mostly hiding it in Windows 11).

Most older versions of Windows didn’t do anything with Win+C, but if you go all the way back to the Windows 95 era, users of Microsoft Natural Keyboards who installed Microsoft’s IntelliType software could use Win+C to open the Control Panel. This shortcut apparently never made it into Windows itself, even as the Windows key became standard equipment on PCs in the late ’90s and early 2000s.

So pour one out for Win+C, the keyboard shortcut that is always trying to do something new and not quite catching on. We can’t wait to see what it does next.


Runway’s latest AI video generator brings giant cotton candy monsters to life

Screen capture of a Runway Gen-3 Alpha video generated with the prompt “A giant humanoid, made of fluffy blue cotton candy, stomping on the ground, and roaring to the sky, clear blue sky behind them.”

On Sunday, Runway announced a new AI video synthesis model called Gen-3 Alpha that’s still under development, but it appears to create video of similar quality to OpenAI’s Sora, which debuted earlier this year (and has also not yet been released). It can generate novel, high-definition video from text prompts, with results ranging from realistic humans to surrealistic monsters stomping across the countryside.

Unlike Runway’s previous best model from June 2023, which could only create two-second-long clips, Gen-3 Alpha can reportedly create 10-second-long video segments of people, places, and things that have a consistency and coherency that easily surpasses Gen-2. If 10 seconds sounds short compared to Sora’s full minute of video, consider that the company is working with a shoestring budget of compute compared to more lavishly funded OpenAI—and actually has a history of shipping video generation capability to commercial users.

Gen-3 Alpha does not generate audio to accompany the video clips, and it’s highly likely that temporally coherent generations (those that keep a character consistent over time) are dependent on similar high-quality training material. But Runway’s improvement in visual fidelity over the past year is difficult to ignore.

AI video heats up

It’s been a busy couple of weeks for AI video synthesis in the AI research community, including the launch of the Chinese model Kling, created by Beijing-based Kuaishou Technology (sometimes called “Kwai”). Kling can generate two minutes of 1080p HD video at 30 frames per second with a level of detail and coherency that reportedly matches Sora.

Gen-3 Alpha prompt: “Subtle reflections of a woman on the window of a train moving at hyper-speed in a Japanese city.”

Not long after Kling debuted, people on social media began creating surreal AI videos using Luma AI’s Luma Dream Machine. These videos were novel and weird but generally lacked coherency; we tested out Dream Machine and were not impressed by anything we saw.

Meanwhile, one of the original text-to-video pioneers, New York City-based Runway—founded in 2018—recently found itself the butt of memes that showed its Gen-2 tech falling out of favor compared to newer video synthesis models. That may have spurred the announcement of Gen-3 Alpha.

Gen-3 Alpha prompt: “An astronaut running through an alley in Rio de Janeiro.”

Generating realistic humans has always been tricky for video synthesis models, so Runway specifically shows off Gen-3 Alpha’s ability to create what its developers call “expressive” human characters with a range of actions, gestures, and emotions. However, the company’s provided examples weren’t particularly expressive—mostly people just slowly staring and blinking—but they do look realistic.

Provided human examples include generated videos of a woman on a train, an astronaut running through a street, a man with his face lit by the glow of a TV set, a woman driving a car, and a woman running, among others.

Gen-3 Alpha prompt: “A close-up shot of a young woman driving a car, looking thoughtful, blurred green forest visible through the rainy car window.”

The generated demo videos also include more surreal video synthesis examples, including a giant creature walking in a rundown city, a man made of rocks walking in a forest, and the giant cotton candy monster seen below, which is probably the best video on the entire page.

Gen-3 Alpha prompt: “A giant humanoid, made of fluffy blue cotton candy, stomping on the ground, and roaring to the sky, clear blue sky behind them.”

Gen-3 will power various Runway AI editing tools (one of the company’s most notable claims to fame), including Multi Motion Brush, Advanced Camera Controls, and Director Mode. It can create videos from text or image prompts.

Runway says that Gen-3 Alpha is the first in a series of models trained on a new infrastructure designed for large-scale multimodal training, taking a step toward the development of what it calls “General World Models,” which are hypothetical AI systems that build internal representations of environments and use them to simulate future events within those environments.


Google’s abuse of Fitbit continues with web app shutdown

Welcome to the Google lifestyle —

Users say the app, which is now the only Fitbit interface, lacks matching features.


Google’s abuse of the Fitbit brand continues with the shutdown of the web dashboard. Fitbit.com used to be both a storefront and a way for users to get a big-screen UI to sift through reams of fitness data. The store closed up shop in April, and now the web dashboard is dying in July.

In a post on the “Fitbit Community” forums, the company said: “Next month, we’re consolidating the Fitbit.com dashboard into the Fitbit app. The web browser will no longer offer access to the Fitbit.com dashboard after July 8, 2024.” That’s it. There’s no replacement or new fitness thing Google is more interested in; web functionality is just being removed. Google, we’ll remind you, used to be a web company. Now it’s a phone app or nothing. Google did the same thing to its Google Fit product in 2019, killing off the more powerful website in favor of an app focus.

Dumping the web app leaves a few holes in Fitbit’s ecosystem. The Fitbit app doesn’t support big screens like tablet devices, so this removes the only large-format interface for the data. Fitbit’s competitors all have big-screen interfaces: Garmin has a very similar website, and the Apple Watch has an iPad health app. This isn’t an improvement. To make matters worse, the app does not match the features of the web dashboard, with many livid comments in the forums and on Reddit calling out the app’s deficiencies in graphing, achievement statistics, calorie counting, and logs.

The web dashboard. Credit: Fitbit

Google bought Fitbit back in 2021 and has spent most of its time since shutting down Fitbit features and making the products worse. Migrations to Google Accounts started in 2022. The Google Assistant was removed from Fitbit’s 2022 product line, the Sense 2 and Versa 4, even though it was supported on the previous models. Social features—a key part of fitness motivation for many—were killed off in 2023. Google has mostly focused on making Fitbit an app for the Pixel Watch.


Google’s Pixel 8 series gets USB-C to DisplayPort; desktop mode rumors heat up

You would think a phone called “Pixel” would be better at this —

Grab a USB-C to DisplayPort cable and newer Pixels can be viewed on your TV or monitor.

The Pixel 8. Credit: Google

Google’s June Android update is out, and it’s bringing a few notable changes for Pixel phones. The most interesting is that the Pixel 8a, Pixel 8, and Pixel 8 Pro are all getting DisplayPort Alt Mode capabilities via their USB-C ports. This means you can go from USB-C to DisplayPort and plug right into a TV or monitor. This has been rumored forever and landed in some of the earlier Android betas, but now it’s finally shipping to production.

The Pixel 8’s initial display support is just a mirrored mode. You can either get an awkward vertical phone in the middle of your wide-screen display or turn the phone sideways and get a more reasonable layout. You could see it being useful for videos or presentations. It would be nice if it could do more.

Alongside the year-plus of DisplayPort rumors has been a steady drumbeat (again) for an Android desktop mode. Google has been playing around with this idea since Android 7.0 in 2016. In 2019, we were told it was just a development testing project, and it never shipped on any real devices. Work on Android’s desktop mode has been heating up, though, so maybe a second swing at this idea will result in an actual product.

Android 15’s in-development desktop mode.

Android Authority’s Mishaal Rahman has been tracking the new desktop mode for a while and now has it running. The new desktop mode looks just like a real desktop OS. Every app gets a title bar window decoration with an app icon, a label, and maximize and close buttons. You can drag windows around and resize them; the OS supports automatic window tiling by dragging to the side of the screen; and there’s even a little drop-down menu in the title bar app icon. Combine that with tablet Android’s bottom app bar, and you would have a lot of what you need for a desktop OS.

Just like last time, we’ve got no clue if this will turn into a real product. The biggest Android partner, Samsung, certainly seems to think the idea is worth doing. Samsung’s “DeX” desktop mode has been a feature for years on its devices.

DisplayPort support is part of the June 2024 update and should roll out to devices soon.


Apple and OpenAI currently have the most misunderstood partnership in tech

He isn’t using an iPhone, but some people talk to Siri like this.

On Monday, Apple premiered “Apple Intelligence” during a wide-ranging presentation at its annual Worldwide Developers Conference in Cupertino, California. However, the heart of its new tech, an array of Apple-developed AI models, was overshadowed by the announcement of ChatGPT integration into its device operating systems.

Since rumors of the partnership first emerged, we’ve seen confusion on social media about why Apple didn’t develop a cutting-edge GPT-4-like chatbot internally. Despite Apple’s year-long development of its own large language models (LLMs), many perceived the integration of ChatGPT (and opening the door for others, like Google Gemini) as a sign of Apple’s lack of innovation.

“This is really strange. Surely Apple could train a very good competing LLM if they wanted? They’ve had a year,” wrote AI developer Benjamin De Kraker on X. Elon Musk has also been grumbling about the OpenAI deal—and spreading misinformation about it—saying things like, “It’s patently absurd that Apple isn’t smart enough to make their own AI, yet is somehow capable of ensuring that OpenAI will protect your security & privacy!”

While Apple has developed many technologies internally, it has also never been shy about integrating outside tech in various ways when necessary, from acquisitions to built-in clients—in fact, Siri was initially developed by an outside company. But given that OpenAI has been the source of a string of tech controversies recently, it’s understandable that some people question why Apple made the call—and what the deal might entail for the privacy of their on-device data.

“Our customers want something with world knowledge some of the time”

While Apple Intelligence largely relies on Apple’s own LLMs, the company also realized that there may be times when users want what it considers the current “best” existing LLM—OpenAI’s GPT-4 family. In an interview with The Washington Post, Apple CEO Tim Cook explained the decision to integrate OpenAI first:

“I think they’re a pioneer in the area, and today they have the best model,” he said. “And I think our customers want something with world knowledge some of the time. So we considered everything and everyone. And obviously we’re not stuck on one person forever or something. We’re integrating with other people as well. But they’re first, and I think today it’s because they’re best.”

The proposed benefit of Apple integrating ChatGPT into various experiences within iOS, iPadOS, and macOS is that it allows AI users to access ChatGPT’s capabilities without the need to switch between different apps—either through the Siri interface or through Apple’s integrated “Writing Tools.” Users will also have the option to connect their paid ChatGPT account to access extra features.

As an answer to privacy concerns, Apple says that before any data is sent to ChatGPT, the OS asks for the user’s permission, and the entire ChatGPT experience is optional. According to Apple, requests are not stored by OpenAI, and users’ IP addresses are hidden. Apparently, communication with OpenAI servers happens through API calls similar to using the ChatGPT app on iOS, and there is reportedly no deeper OS integration that might expose user data to OpenAI without the user’s permission.
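Apple hasn’t published implementation details, but the flow it describes is simple to sketch: an explicit consent gate before each hand-off, and a relay that forwards only the request text rather than account or device identifiers. Everything below (function names, payload fields) is hypothetical, invented purely for illustration.

```python
# Hypothetical sketch of a permission-gated hand-off to an external chatbot,
# loosely following the flow Apple describes (per-request consent, no stored
# identifiers). All names and behaviors here are invented for illustration.

def user_consents(prompt: str) -> bool:
    """Stand-in for the OS permission dialog shown before each hand-off."""
    answer = input(f'Send "{prompt}" to ChatGPT? [y/N] ')
    return answer.strip().lower() == "y"

def relay_request(prompt: str) -> str:
    """Stand-in for a relay call that forwards only the request text,
    omitting account info, device IDs, and (per Apple) the user's IP."""
    payload = {"prompt": prompt}  # no user identifiers included
    return f"[external model response to {payload['prompt']!r}]"

def handle_query(prompt: str) -> str:
    # Hand off only with explicit consent; otherwise stay on-device.
    if not user_consents(prompt):
        return "[handled on-device instead]"
    return relay_request(prompt)

print(handle_query("Plan a five-course dinner menu"))
```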

We can only take Apple’s word for it at the moment, of course, and solid details about Apple’s AI privacy efforts will emerge once security experts get their hands on the new features later this year.

Apple’s history of tech integration

So you’ve seen why Apple chose OpenAI. But why look to outside companies for tech? In some ways, Apple building an external LLM client into its operating systems isn’t too different from what it has previously done with streaming video (the YouTube app on the original iPhone), Internet search (Google search integration), and social media (integrated Twitter and Facebook sharing).

The press has positioned Apple’s recent AI moves as Apple “catching up” with competitors like Google and Microsoft in terms of chatbots and generative AI. But playing it slow and cool has long been part of Apple’s M.O.—not necessarily introducing the bleeding edge of technology but improving existing tech through refinement and giving it a better user interface.


Google avoids jury trial by sending $2.3 million check to US government

Judge, no jury —

Google gets a bench trial after sending unexpected check to Justice Department.

At Google headquarters, the company’s logo is seen on the glass exterior of a building. Credit: Getty Images | Justin Sullivan

Google has achieved its goal of avoiding a jury trial in one antitrust case after sending a $2.3 million check to the US Department of Justice. Google will face a bench trial, a trial conducted by a judge without a jury, after a ruling today that the preemptive check is big enough to cover any damages that might have been awarded by a jury.

“I am satisfied that the cashier’s check satisfies any damages claim,” US District Judge Leonie Brinkema said after a hearing in the Eastern District of Virginia on Friday, according to Bloomberg. “A fair reading of the expert reports does not support” a higher amount, Brinkema said.

The check was reportedly for $2,289,751. “Because the damages are no longer part of the case, Brinkema ruled a jury is no longer needed and she will oversee the trial, set to begin in September,” according to Bloomberg.

The payment was unusual, but so was the US request for a jury trial, because antitrust cases are typically heard by a judge without a jury. The US argued that a jury should rule on damages because US government agencies were allegedly overcharged for advertising.

The US opposed Google’s motion to strike the jury demand in a filing last week, arguing that “the check it delivered did not actually compensate the United States for the full extent of its claimed damages” and that “the unilateral offer of payment was improperly premised on Google’s insistence that such payment ‘not be construed’ as an admission of damages.”

The government’s damages expert calculated damages that were “much higher” than the amount cited by Google, the US filing said. In last week’s filing, the higher damages amount sought by the government was redacted.

Lawsuit targets Google advertising

The US and eight states sued Google in January 2023 in a lawsuit related to the company’s advertising technology business. There are now 17 states involved in the case.

Google’s objection to a jury trial said that similar antitrust cases have been tried by judges because of their technical and often abstract nature. “To secure this unusual posture, several weeks before filing the Complaint, on the eve of Christmas 2022, DOJ attorneys scrambled around looking for agencies on whose behalf they could seek damages,” Google said.

The US and states’ lawsuit claimed that Google “corrupted legitimate competition in the ad tech industry” in a plan to “neutralize or eliminate ad tech competitors, actual or potential, through a series of acquisitions” and “wield its dominance across digital advertising markets to force more publishers and advertisers to use its products while disrupting their ability to use competing products effectively.”

The US government lawsuit said that federal agencies bought over $100 million in advertising since 2019 and aimed to recover treble damages for Google’s alleged overcharges on those purchases. But the government narrowed its claims to the ad purchases of just eight agencies, lowering the potential damages amount.

Google sent the check in mid-May. While the amount wasn’t initially public, Google said it contained “every dollar the United States could conceivably hope to recover under the damages calculation of the United States’ own expert.” Google also said it “continues to dispute liability and welcomes a full resolution by this Court of all remaining claims in the Complaint.”

US: We want more

The US disagreed that $2.3 million was the maximum it could recover. “Under the law, Google must pay the United States the maximum amount it could possibly recover at trial, which Google has not done,” the US said. “And Google cannot condition acceptance of that payment on its assertion that the United States was not harmed in the first place. In doing so, Google attempts to seize the strategic upside of satisfying the United States’ damages claim (potentially allowing it to avoid judgment by a jury) while at the same time avoiding the strategic downside of the United States being free to argue the common-sense inference that Google’s payment is, at minimum, an acknowledgment of the harm done to federal agency advertisers who used Google’s ad tech tools.”

In a filing on Wednesday, Google said the DOJ previously agreed that its claims amounted to less than $1 million before trebling and pre-judgment interest. The check sent by Google was for the exact amount after trebling and interest, the filing said. But the “DOJ now ignores this undisputed fact, offering up a brand new figure, previously uncalculated by any DOJ expert, unsupported by the record, and never disclosed,” Google told the court.
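The math checks out, at least roughly. Setting pre-judgment interest aside (the split between the trebled base and interest isn’t in the public filings), the check implies a base damages figure under the $1 million threshold Google cites; a quick sanity check:

```python
check = 2_289_751  # the cashier's check Google sent, per Bloomberg

# Treble damages means three times the base award, so ignoring
# pre-judgment interest, the implied base is check / 3.
implied_base = check / 3
print(f"Implied base damages (ignoring interest): ${implied_base:,.2f}")
# -> Implied base damages (ignoring interest): $763,250.33

# Any interest component only lowers the trebled base further, so the
# base stays under the $1 million figure Google says the DOJ calculated.
assert implied_base < 1_000_000
```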

Siding with Google at today’s hearing, Brinkema “said the amount of Google’s check covered the highest possible amount the government had sought in its initial filings,” the Associated Press reported. “She likened receipt of the money, which was paid unconditionally to the government regardless of whether the tech giant prevailed in its arguments to strike a jury trial, as equivalent to ‘receiving a wheelbarrow of cash.'”

While the US lost its attempt to obtain more damages than Google offered, the lawsuit also seeks an order declaring that Google illegally monopolized the market. The complaint requests a breakup in which Google would have to divest “the Google Ad Manager suite, including both Google’s publisher ad server, DFP, and Google’s ad exchange, AdX.”


Google’s AI Overviews misunderstand why people use Google

Robot hand holding a glue bottle over a pizza and tomatoes. Credit: Aurich Lawson | Getty Images

Last month, we looked into some of the most incorrect, dangerous, and downright weird answers generated by Google’s new AI Overviews feature. Since then, Google has offered a partial apology/explanation for generating those kinds of results and has reportedly scaled back the feature’s rollout for at least some types of queries.

But the more I’ve thought about that rollout, the more I’ve begun to question the wisdom of Google’s AI-powered search results in the first place. Even when the system doesn’t give obviously wrong results, condensing search results into a neat, compact, AI-generated summary seems like a fundamental misunderstanding of how people use Google in the first place.

Reliability and relevance

When people type a question into the Google search bar, they only sometimes want the kind of basic reference information that can be found on a Wikipedia page or corporate website (or even a Google information snippet). Often, they’re looking for subjective information where there is no one “right” answer: “What are the best Mexican restaurants in Santa Fe?” or “What should I do with my kids on a rainy day?” or “How can I prevent cheese from sliding off my pizza?”

The value of Google has always been in pointing you to the places it thinks are likely to have good answers to those questions. But it’s still up to you, as a user, to figure out which of those sources is the most reliable and relevant to what you need at that moment.

Gallery: flawed AI Overview results (credit: Kyle Orland / Google):

  • This wasn’t funny when the guys at Pep Boys said it, either.

  • Weird Al recommends “running with scissors” as well!

  • This list of steps actually comes from a forum thread response about doing something completely different.

  • An island that’s part of the mainland?

  • If everything’s cheaper now, why does everything seem so expensive?

  • Pretty sure this Truman was never president…

For reliability, any savvy Internet user makes use of countless context clues when judging a random Internet search result. Do you recognize the outlet or the author? Is the information from someone with apparent expertise or professional experience, or from a random forum poster? Is the site well-designed? Has it been around for a while? Does it cite other sources that you trust?

But Google also doesn’t know ahead of time which specific result will fit the kind of information you’re looking for. When it comes to restaurants in Santa Fe, for instance, are you in the mood for an authoritative list from a respected newspaper critic or for more off-the-wall suggestions from random locals? Or maybe you scroll down a bit and stumble on a loosely related story about the history of Mexican culinary influences in the city.

One of the unseen strengths of Google’s search algorithm is that the user gets to decide which results are the best for them. As long as there’s something reliable and relevant in those first few pages of results, it doesn’t matter if the other links are “wrong” for that particular search or user.


Google’s AI Overview is flawed by design, and a new company blog post hints at why

guided by voices —

Google: “There are bound to be some oddities and errors” in system that told people to eat rocks.

The Google “G” logo surrounded by whimsical characters, all of which look stunned and surprised.

On Thursday, Google capped off a rough week of providing inaccurate and sometimes dangerous answers through its experimental AI Overview feature by authoring a follow-up blog post titled, “AI Overviews: About last week.” In the post, attributed to Google VP Liz Reid, head of Google Search, the firm formally acknowledged issues with the feature and outlined steps taken to improve a system that appears flawed by design, even if the company doesn’t realize it’s admitting as much.

To recap, the AI Overview feature—which the company showed off at Google I/O a few weeks ago—aims to provide search users with summarized answers to questions by using an AI model integrated with Google’s web ranking systems. Right now, it’s an experimental feature that is not active for everyone, but when a participating user searches for a topic, they might see an AI-generated answer at the top of the results, pulled from highly ranked web content and summarized by an AI model.

While Google claims this approach is “highly effective” and on par with its Featured Snippets in terms of accuracy, the past week has seen numerous examples of the AI system generating bizarre, incorrect, or even potentially harmful responses, as we detailed in a recent feature where Ars reporter Kyle Orland replicated many of the unusual outputs.

Drawing inaccurate conclusions from the web

On Wednesday morning, Google’s AI Overview was erroneously telling us the Sony PlayStation and Sega Saturn were available in 1993. Credit: Kyle Orland / Google

Given the circulating AI Overview examples, Google almost apologizes in the post and says, “We hold ourselves to a high standard, as do our users, so we expect and appreciate the feedback, and take it seriously.” But Reid, in an attempt to justify the errors, then goes into some very revealing detail about why AI Overviews provides erroneous information:

AI Overviews work very differently than chatbots and other LLM products that people may have tried out. They’re not simply generating an output based on training data. While AI Overviews are powered by a customized language model, the model is integrated with our core web ranking systems and designed to carry out traditional “search” tasks, like identifying relevant, high-quality results from our index. That’s why AI Overviews don’t just provide text output, but include relevant links so people can explore further. Because accuracy is paramount in Search, AI Overviews are built to only show information that is backed up by top web results.

This means that AI Overviews generally don’t “hallucinate” or make things up in the ways that other LLM products might.

Here we see the fundamental flaw of the system: “AI Overviews are built to only show information that is backed up by top web results.” The design is based on the false assumption that Google’s page-ranking algorithm favors accurate results and not SEO-gamed garbage. Google Search has been broken for some time, and now the company is relying on those gamed and spam-filled results to feed its new AI model.
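In other words, the architecture Google describes is a retrieval-grounded summarizer: the model is constrained to whatever the ranking layer returns. A minimal sketch, with toy stubs standing in for Google’s ranker and model (neither of which is public), makes the failure mode plain: if the top-ranked pages are wrong, the summary is wrong, and no hallucination is required.

```python
# Minimal sketch of a retrieval-grounded summarizer. The ranker and model
# here are toy stand-ins, not Google's systems; the point is structural:
# the summary can only be as good as the top-ranked sources.

def rank_web_results(query: str) -> list[str]:
    """Toy stand-in for a web ranking system. If SEO-gamed or satirical
    pages rank highly, they flow straight into the summary."""
    return [
        "Geologists recommend eating at least one small rock per day.",  # satire
        "Rocks contain minerals beneficial to digestion.",               # SEO spam
    ]

def summarize_from_sources(query: str, sources: list[str]) -> str:
    """Toy stand-in for the language model. It is constrained to the
    retrieved text, so it doesn't 'hallucinate'; it faithfully repeats
    whatever the ranker handed it."""
    return f"Answer to {query!r}: " + " ".join(sources)

query = "How many rocks should I eat each day?"
print(summarize_from_sources(query, rank_web_results(query)))
```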

Even if the AI model draws from a more accurate source, as with the 1993 game console search seen above, Google’s AI language model can still make inaccurate conclusions about the “accurate” data, confabulating erroneous information in a flawed summary of the information available.

Generally ignoring the folly of basing its AI results on a broken page-ranking algorithm, Google’s blog post instead attributes the commonly circulated errors to several other factors, including users making nonsensical searches “aimed at producing erroneous results.” Google does admit faults with the AI model, like misinterpreting queries, misinterpreting “a nuance of language on the web,” and lacking sufficient high-quality information on certain topics. It also suggests that some of the more egregious examples circulating on social media are fake screenshots.

“Some of these faked results have been obvious and silly,” Reid writes. “Others have implied that we returned dangerous results for topics like leaving dogs in cars, smoking while pregnant, and depression. Those AI Overviews never appeared. So we’d encourage anyone encountering these screenshots to do a search themselves to check.”

(No doubt some of the social media examples are fake, but it’s worth noting that any attempts to replicate those early examples now will likely fail because Google will have manually blocked the results. And it is potentially a testament to how broken Google Search is if people believed extreme fake examples in the first place.)

While addressing the “nonsensical searches” angle in the post, Reid uses the example search, “How many rocks should I eat each day,” which went viral in a tweet on May 23. Reid says, “Prior to these screenshots going viral, practically no one asked Google that question.” And since there isn’t much data on the web that answers it, she says there is a “data void” or “information gap” that was filled by satirical content found on the web, and the AI model found it and pushed it as an answer, much like Featured Snippets might. So basically, it was working exactly as designed.

A screenshot of an AI Overview query, “How many rocks should I eat each day” that went viral on X last week.
