

Google sues two crypto app makers over allegedly vast “pig butchering” scheme

Foul Play —

Crypto and other investment app scams promoted on YouTube targeted 100K users.


Google has sued two app developers based in China over an alleged scheme that targeted 100,000 users globally over four years with at least 87 fraudulent cryptocurrency and other investment apps distributed through the Play Store.

The tech giant alleged that scammers lured victims with “promises of high returns” from “seemingly legitimate” apps offering investment opportunities in cryptocurrencies and other products. Commonly known as “pig-butchering schemes,” these scams displayed fake returns on investments, but when users went to withdraw the funds, they discovered they could not.

In some cases, Google alleged, developers would “double down on the scheme by requesting various fees and other payments from victims that were supposedly necessary for the victims to recover their principal investments and purported gains.”

Google accused the app developers—Yunfeng Sun (also known as “Alphonse Sun”) and Hongnam Cheung (also known as “Zhang Hongnim” and “Stanford Fischer”)—of conspiring to commit “hundreds of acts of wire fraud” to further “an unlawful pattern of racketeering activity” that siphoned up to $75,000 from each user successfully scammed.

Google was able to piece together the elaborate alleged scheme because the developers used a wide array of Google products and services to target victims, Google said, including Google Play, Voice, Workspace, and YouTube, breaching each one’s terms of service. Perhaps most notably, the Google Play Store’s developer program policies “forbid developers to upload to Google Play ‘apps that expose users to deceptive or harmful financial products and services,’ including harmful products and services ‘related to the management or investment of money and cryptocurrencies.’”

Google claimed that, beyond harming consumers, the scheme would continue to damage the reputation of each product and service involved unless the US district court in New York ordered a permanent injunction barring the developers from using any Google products or services.

“By using Google Play to conduct their fraud scheme,” scammers “have threatened the integrity of Google Play and the user experience,” Google alleged. “By using other Google products to support their scheme,” the scammers “also threaten the safety and integrity of those other products, including YouTube, Workspace, and Google Voice.”

Google’s lawsuit is the company’s most recent attempt to block fraudsters from targeting Google products by suing individuals directly, Bloomberg noted. Last year, Google sued five people accused of distributing a fake Bard AI chatbot that instead downloaded malware to Google users’ devices, Bloomberg reported.

How did the alleged Google Play scams work?

Google said that the accused developers “varied their approach from app to app” when allegedly trying to scam users out of thousands of dollars but primarily relied on three methods to lure victims.

The first method relied on sending text messages using Google Voice—such as “I am Sophia, do you remember me?” or “I miss you all the time, how are your parents Mike?”—”to convince the targeted victims that they were sent to the wrong number.” From there, the scammers would apparently establish “friendships” or “romantic relationships” with victims before moving the conversation to apps like WhatsApp, where they would “offer to guide the victim through the investment process, often reassuring the victim of any doubts they had about the apps.” These supposed friends, Google claimed, would “then disappear once the victim tried to withdraw funds.”

Another strategy allegedly employed by scammers relied on videos posted to platforms like YouTube, where fake investment opportunities would be promoted, promising “rates of return” as high as “two percent daily.”

The third tactic, Google said, pushed bogus affiliate marketing campaigns, promising users commissions for “signing up additional users.” These apps, Google claimed, were advertised on social media as “a guaranteed and easy way to earn money.”

Once a victim was drawn into using one of the fraudulent apps, “user interfaces sought to convince victims that they were maintaining balances on the app and that they were earning ‘returns’ on their investments,” Google said.

Occasionally, users would be allowed to withdraw small amounts, convincing them that it was safe to invest more money, but “later attempts to withdraw purported returns simply did not work.” And sometimes the scammers would “bilk” victims out of “even more money,” Google said, by requesting additional funds be submitted to make a withdrawal.

“Some demands” for additional funds, Google found, asked for anywhere “from 10 to 30 percent to cover purported commissions and/or taxes.” Victims, of course, “still did not receive their withdrawal requests even after these additional fees were paid,” Google said.

Which apps were removed from the Play Store?

Google tried to remove apps as soon as they were discovered to be fraudulent, but Google claimed that scammers concocted new aliases and infrastructure to “obfuscate their connection to suspended fraudulent apps.” Because scammers relied on so many different Google services, Google was able to connect the scheme to the accused developers through various business records.

Fraudulent apps named in the complaint include fake cryptocurrency exchanges called TionRT and SkypeWallet. To make the exchanges appear legitimate, scammers put out press releases on newswire services and created YouTube videos that appear to rely on hired actors to portray company leadership.

In one YouTube video promoting SkypeWallet, the supposed co-founder of Skype Coin uses the name “Romser Bennett,” which is also the name used for the supposed founder of another fraudulent app called OTCAI2.0, Google said. A different actor, presumably hired, plays “Romser Bennett” in each video. In other videos, Google found, the same actor plays an engineer named “Rodriguez” for one app and a technical leader named “William Bryant” for another.

Another fraudulent app that was flagged by Google was called the Starlight app. Promoted on TikTok and Instagram, Google said, that app promised “that users could earn commissions by simply watching videos.”

The Starlight app was downloaded approximately 23,000 times and seemingly primarily targeted users in Ghana, allegedly scamming at least 6,000 Ghanaian users out of initial investment capital that they were told was required before they could start earning money on the app.

Across all 87 fraudulent apps that Google has removed, Google estimated that approximately 100,000 users were victimized, including approximately 8,700 in the United States.

Currently, Google is not aware of any live apps in the Play Store connected to the alleged scheme, the complaint said, but scammers intent on furthering the scheme “will continue to harm Google and Google Play users” without a permanent injunction, Google warned.



Waymo and Uber Eats start human-less food deliveries in Phoenix

Someday the robots will be mad that we aren’t tipping them —

You’ll need to run outside when your robot delivery arrives.

A Waymo Jaguar I-Pace. (Image: Waymo)

Your next food delivery driver may be a robot.

Waymo and Uber have been working together on regular Ubers for a while, but the two companies are now teaming up for food delivery. Automated Uber Eats is rolling out to Waymo’s Phoenix service area. Waymo says this will start in “select merchants in Chandler, Tempe and Mesa, including local favorites like Princess Pita, Filiberto’s, and Bosa Donuts.”

Phoenix Uber Eats customers can fire up the app and order some food, and they might see the message “autonomous vehicles may deliver your order.” Waymo says you’ll be able to opt out of robot delivery at checkout if you want.

The pop-up screen if a Waymo is delivering your order. (Image: Waymo)

Of course, the big difference between human and robot food delivery is that a human driver will bring your food to your door, while for the Waymo option, you’ll need to run outside and flag down your robot delivery vehicle when it arrives. Just like regular Uber, you’ll get a notification through the app when it’s time. The food should be in the trunk. If you get paired with a Waymo, your delivery tip will be refunded. Waymo doesn’t explain how the restaurant side of things will work, but inevitably, some poor food server will need to run outside when the Waymo arrives.

It seems pretty wasteful to have a 2-ton, crash-tested vehicle designed to seat five humans delivering a small bag of food, but at least the Jaguar I-Pace Waymos are all-electric. It’s a shame Waymo’s smaller “Firefly” cars were retired. There are smaller, more purpose-built food delivery bots out there—Uber Eats is partnered with Serve Robotics for smaller robot delivery—but these are all sidewalk-cruising, walking-speed robots that can only go a few blocks. The Nuro R3 (Nuro is also partnered with Uber) seems like a good example of what a road-going delivery vehicle should look like—it’s designed for food and not people, and it comes with heated or cooled food compartments. Waymo is still the industry leader in automated driving, though.



Google might make users pay for AI features in search results

Pay-eye for the AI —

Plan would represent a first for what has been a completely ad-funded search engine.

You think this cute little search robot is going to work for free?

Google might start charging for access to search results that use generative artificial intelligence tools. That’s according to a new Financial Times report citing “three people with knowledge of [Google’s] plans.”

Charging for any part of the search engine at the core of its business would be a first for Google, which has funded its search product solely with ads since 2000. But it’s far from the first time Google would charge for AI enhancements in general; the “AI Premium” tier of a Google One subscription costs $10 more per month than a standard “Premium” plan, for instance, while “Gemini Business” adds $20 a month to a standard Google Workspace subscription.

While those paid products offer access to Google’s high-end “Gemini Advanced” AI model, Google also offers free access to its less performant, plain “Gemini” model without any kind of paid subscription.

When ads aren’t enough?

Under the proposed plan, Google’s standard search (without AI) would remain free, and subscribers to a paid AI search tier would still see ads alongside their Gemini-powered search results, according to the FT report. But search ads—which brought in a reported $175 billion for Google last year—might not be enough to fully cover the increased costs involved with AI-powered search. A Reuters report from last year suggested that running a search query through an advanced neural network like Gemini “likely costs 10 times more than a standard keyword search,” potentially representing “several billion dollars of extra costs” across Google’s network.
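The report’s ballpark can be sanity-checked with a quick back-of-envelope calculation. Only the 10x multiplier comes from the Reuters report; the per-query cost, annual query volume, and share of AI-answered queries below are illustrative assumptions, not reported figures:

```python
# Back-of-envelope estimate of the extra cost of AI-powered search.
# Only the 10x multiplier is from the Reuters report; the rest are
# assumed values chosen purely for illustration.

base_cost_per_query = 0.002           # assumed cost of a keyword search, in USD
ai_multiplier = 10                    # Reuters: AI query "likely costs 10 times more"
queries_per_year = 3_000_000_000_000  # assumed annual query volume (~3 trillion)
ai_share = 0.10                       # assume 10% of queries get an AI-generated answer

# Extra cost = (AI cost - keyword cost) per query, times the AI-answered volume
extra_cost = base_cost_per_query * (ai_multiplier - 1) * queries_per_year * ai_share
print(f"Extra annual cost: ${extra_cost / 1e9:.1f} billion")
```

Under those assumptions the figure lands in the single-digit billions, consistent with the “several billion dollars of extra costs” the report describes.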

Cost aside, it remains to be seen if there’s a critical mass of market demand for this kind of AI-enhanced search. Microsoft’s massive investment in generative AI features for its Bing search engine has failed to make much of a dent in Google’s market share over the last year or so. And there has reportedly been limited uptake for Google’s experimental opt-in “Search Generative Experience” (SGE), which adds chatbot responses above the usual set of links in response to a search query.

“SGE never feels like a useful addition to Google Search,” Ars’ Ron Amadeo wrote last month. “Google Search is a tool, and just as a screwdriver is not a hammer, I don’t want a chatbot in a search engine.”

Regardless, the current tech industry mania surrounding anything and everything related to generative AI may make Google feel it has to integrate the technology into some sort of “premium” search product sooner rather than later. For now, the FT reports that Google hasn’t made a final decision on whether to implement the paid AI search plan, even as Google engineers work on the backend technology necessary to launch such a service.

Google also faces AI-related difficulties on the other side of the search divide. Last month, the company announced it was redoubling its efforts to limit the appearance of “spammy, low-quality content”—much of it generated by AI chatbots—in its search results.

In February, Google shut down the image generation features of its Gemini AI model after the service was found inserting historically inaccurate examples of racial diversity into some of its prompt responses.



Billie Eilish, Pearl Jam, 200 artists say AI poses existential threat to their livelihoods

artificial music —

Artists say AI will “set in motion a race to the bottom that will degrade the value of our work.”

Billie Eilish attends the 2024 Vanity Fair Oscar Party hosted by Radhika Jones at the Wallis Annenberg Center for the Performing Arts on March 10, 2024, in Beverly Hills, California.

On Tuesday, the Artist Rights Alliance (ARA) announced an open letter critical of AI signed by over 200 musical artists, including Pearl Jam, Nicki Minaj, Billie Eilish, Stevie Wonder, Elvis Costello, and the estate of Frank Sinatra. In the letter, the artists call on AI developers, technology companies, platforms, and digital music services to stop using AI to “infringe upon and devalue the rights of human artists.” A tweet from the ARA added that AI poses an “existential threat” to their art.

Visual artists began protesting the advent of generative AI after the first mainstream AI image generators arrived in 2022. As generative AI research has since extended to other forms of creative media, the protests have spread to professionals in other creative domains: writers, actors, filmmakers—and now musicians.

“When used irresponsibly, AI poses enormous threats to our ability to protect our privacy, our identities, our music and our livelihoods,” the open letter states. It alleges that some of the “biggest and most powerful” companies (unnamed in the letter) are using the work of artists without permission to train AI models, with the aim of replacing human artists with AI-created content.

A list of musical artists that signed the ARA open letter against generative AI.

In January, Billboard reported that AI research taking place at Google DeepMind had trained an unnamed music-generating AI on a large dataset of copyrighted music without seeking artist permission. That report may have been referring to Google’s Lyria, an AI-generation model announced in November that the company positioned as a tool for enhancing human creativity. The tech has since powered musical experiments from YouTube.

We’ve previously covered AI music generators that seemed fairly primitive throughout 2022 and 2023, such as Riffusion, Google’s MusicLM, and Stability AI’s Stable Audio. We’ve also covered open source musical voice-cloning technology that is frequently used to make musical parodies online. While we have yet to see an AI model that can generate perfect, fully composed high-quality music on demand, the quality of outputs from music synthesis models has been steadily improving over time.

In considering AI’s potential impact on music, it’s instructive to remember historical instances where tech innovations initially sparked concern among artists. For instance, the introduction of synthesizers in the 1960s and 1970s and the advent of digital sampling in the 1980s both faced scrutiny and fear from parts of the music community, but the music industry eventually adjusted.

While we’ve seen fear of the unknown related to AI going around quite a bit for the past year, it’s possible that AI tools will be integrated into the music production process like any other music production tool or technique that came before. It’s also possible that even if that kind of integration comes to pass, some artists will still get hurt along the way—and the ARA wants to speak out about it before the technology progresses further.

“Race to the bottom”

The Artists Rights Alliance is a nonprofit advocacy group that describes itself as an “alliance of working musicians, performers, and songwriters fighting for a healthy creative economy and fair treatment for all creators in the digital world.”

The signers of the ARA’s open letter say they acknowledge the potential of AI to advance human creativity when used responsibly, but they also claim that replacing artists with generative AI would “substantially dilute the royalty pool” paid out to artists, which could be “catastrophic” for many working musicians, artists, and songwriters who are trying to make ends meet.

In the letter, the artists say that unchecked AI will set in motion a race to the bottom that will degrade the value of their work and prevent them from being fairly compensated. “This assault on human creativity must be stopped,” they write. “We must protect against the predatory use of AI to steal professional artists’ voices and likenesses, violate creators’ rights, and destroy the music ecosystem.”

The letter’s emphasis on the word “human” is notable (“human artist” appears twice, and “human creativity” and “human artistry” once each) because it underscores the clear distinction the signers are drawing between the work of human artists and the output of AI systems. It implies recognition that we’ve entered a new era in which not all creative output is made by people.

The letter concludes with a call to action, urging all AI developers, technology companies, platforms, and digital music services to pledge not to develop or deploy AI music-generation technology, content, or tools that undermine or replace the human artistry of songwriters and artists or deny them fair compensation for their work.

While it’s unclear whether companies will meet those demands, so far, protests from visual artists have not stopped development of ever-more advanced image-synthesis models. On Threads, frequent AI industry commentator Dare Obasanjo wrote, “Unfortunately this will be as effective as writing an open letter to stop the sun from rising tomorrow.”



Users say Google’s VPN app “breaks” the Windows DNS settings

You know who you’re signing up with —

Does Google’s app really need to constantly reset all Windows network interfaces?


Google offers a VPN via its “Google One” monthly subscription plan, and while it debuted on phones, a desktop app has been available for Windows and macOS for over a year now. Since a lot of people pay for Google One for the cloud storage increase for their Google accounts, you might be tempted to try the VPN on a desktop, but Windows users testing out the app haven’t seemed too happy lately. An open bug report on Google’s GitHub for the project says the Windows app “breaks” the Windows DNS, and this has been ongoing since at least November.

A VPN would naturally route all your traffic through a secure tunnel, but you’ve still got to do DNS lookups somewhere. A lot of VPN services also come with a DNS service, and Google is no different. The problem is that Google’s VPN app changes the Windows DNS settings of all network adapters to always use Google’s DNS, whether the VPN is on or off. Even if you change them, Google’s program will change them back.

Most VPN apps don’t work this way, and even Google’s Mac VPN program doesn’t work this way. The users in the thread (and the ones emailing us) expect the app, at minimum, to use the original Windows settings when the VPN is off. Since running a VPN is often about privacy and security, users want to be able to change the DNS away from Google even when the VPN is running.

Changing the DNS can result in several problems for certain setups. As users in the thread point out, some people, especially those using a VPN, want an encrypted DNS setup, and Google’s VPN program will just turn this off. It can break custom filtering setups and will prevent users from accessing local network IPs, like a router configuration page or corporate intranet pages. It will also make it impossible to log in to a captive portal, which you often see on public Wi-Fi at a hotel, airport, or coffee shop.

Besides that behavior, the thread is full of all sorts of reports of Google’s VPN program getting screwy with the Windows DNS settings. Several users say Google’s VPN app frequently resets the DNS settings of all network adapters, even if they change them after the initial install sets them to 8.8.8.8. For instance, one reply from ryanzimbauser says: “This program has absolutely no business changing all present NICs to a separate DNS on the startup of my computer while the program is not set to ‘Launch app after computer starts.’ This recent change interfered with my computer’s ability to access a network implementing a private DNS filter. This has broken my trust and I will not be reinstalling this program until this is remedied.”

Several user reports say that even after uninstalling the Google VPN, the DNS settings don’t revert to what they used to be. Maybe this is more of a Windows problem than a Google problem, but a lot of users have trouble changing the settings away from 8.8.8.8 through the control panel after uninstalling. They are resorting to registry changes, PowerShell scripts, or the “reset network settings” button.

Google employee Ryan Lothian responded to the thread, saying:

Hey folks, thank you for reporting this behaviour.

To protect users privacy, the Google One VPN deliberately sets DNS to use Google’s DNS servers. This prevents a nefarious DNS server (that might be set by DHCP) compromising your privacy. Visit https://developers.google.com/speed/public-dns/privacy to learn about the limited logging performed by Google DNS.

We think this is a good default for most users. However, we do recognize that some users might want to have their own DNS, or have the DNS revert when VPN disconnects. We’ll consider adding this to a future release of the app.

It’s pretty rare for Google, the web and Android company, to make a Windows program. There’s Chrome, the Drive syncing app, Google Earth Pro, this VPN app, and not too much else. You can find it by going to the Google One website, clicking “Benefits” in the sidebar, and then “View Details” under the VPN box, where you’ll find an exceedingly rare Google Windows executable.

If you want a VPN and care about privacy, there are probably better places to go than Google. The company can still see all the websites you’re visiting via its DNS servers, and while the VPN data might be private, Google’s DNS holds onto your web history for up to 48 hours and is subject to subpoenas. There are several accusations in the thread of Google changing DNS for data harvesting purposes, but if you’re concerned about that, maybe don’t do business with one of the world’s biggest user-tracking companies.



OpenAI drops login requirements for ChatGPT’s free version

free as in beer? —

ChatGPT 3.5 still falls far short of GPT-4, and other models surpassed it long ago.

A glowing OpenAI logo on a blue background. (Image: Benj Edwards)

On Monday, OpenAI announced that visitors to the ChatGPT website in some regions can now use the AI assistant without signing in. Previously, the company required that users create an account to use it, even with the free version of ChatGPT that is currently powered by the GPT-3.5 AI language model. But as we have noted in the past, GPT-3.5 is widely known to provide more inaccurate information compared to GPT-4 Turbo, available in paid versions of ChatGPT.

Since its launch in November 2022, ChatGPT has transformed over time from a tech demo to a comprehensive AI assistant, and it’s always had a free version available. The cost is free because “you’re the product,” as the old saying goes. Using ChatGPT helps OpenAI gather data that will help the company train future AI models, although free users and ChatGPT Plus subscription members can both opt out of allowing the data they input into ChatGPT to be used for AI training. (OpenAI says it never trains on inputs from ChatGPT Team and Enterprise members at all).

Opening ChatGPT to everyone could provide a frictionless on-ramp for people who might use it as a substitute for Google Search, and it could gain OpenAI new customers by offering an easy way to try ChatGPT quickly before being upsold to paid versions of the service.

“It’s core to our mission to make tools like ChatGPT broadly available so that people can experience the benefits of AI,” OpenAI says on its blog page. “For anyone that has been curious about AI’s potential but didn’t want to go through the steps to set up an account, start using ChatGPT today.”

When you visit the ChatGPT website, you’re immediately presented with a chat box like this (in some regions). Screenshot captured April 1, 2024. (Image: Benj Edwards)

Since kids will also be able to use ChatGPT without an account—despite it being against the terms of service—OpenAI also says it’s introducing “additional content safeguards,” such as blocking more prompts and “generations in a wider range of categories.” OpenAI has not elaborated on what exactly that entails, but we reached out to the company for comment.

There might be a few other downsides to the fully open approach. On X, AI researcher Simon Willison wrote about the potential for automated abuse as a way to get around paying for OpenAI’s services: “I wonder how their scraping prevention works? I imagine the temptation for people to abuse this as a free 3.5 API will be pretty strong.”

With fierce competition, more GPT-3.5 access may backfire

Willison also mentioned a common criticism of OpenAI (as voiced in this case by Wharton professor Ethan Mollick) that people’s ideas about what AI models can do have so far largely been influenced by GPT-3.5, which, as we mentioned, is far less capable and far more prone to making things up than the paid version of ChatGPT that uses GPT-4 Turbo.

“In every group I speak to, from business executives to scientists, including a group of very accomplished people in Silicon Valley last night, much less than 20% of the crowd has even tried a GPT-4 class model,” wrote Mollick in a tweet from early March.

With models like Google Gemini Pro 1.5 and Anthropic Claude 3 potentially surpassing OpenAI’s best proprietary model at the moment—and open-weights AI models eclipsing the free version of ChatGPT—allowing people to use GPT-3.5 might not be putting OpenAI’s best foot forward. Microsoft Copilot, powered by OpenAI models, also supports a frictionless, no-login experience, but it allows access to a model based on GPT-4. Gemini, by contrast, currently requires a sign-in, and Anthropic sends a login code through email.

For now, OpenAI says the login-free version of ChatGPT is not yet available to everyone, but it will be coming soon: “We’re rolling this out gradually, with the aim to make AI accessible to anyone curious about its capabilities.”



Google agrees to delete Incognito data despite prior claim that’s “impossible”

Deleting files —

What a lawyer calls “a historic step,” Google considers not that “significant.”


To settle a class-action dispute over Chrome’s “Incognito” mode, Google has agreed to delete billions of data records reflecting users’ private browsing activities.

In a statement provided to Ars, users’ lawyer, David Boies, described the settlement as “a historic step in requiring honesty and accountability from dominant technology companies.” Based on Google’s insights, users’ lawyers valued the settlement between $4.75 billion and $7.8 billion, the Monday court filing said.

Under the settlement, Google agreed to delete class-action members’ private browsing data collected in the past, as well as to “maintain a change to Incognito mode that enables Incognito users to block third-party cookies by default.” This, plaintiffs’ lawyers noted, “ensures additional privacy for Incognito users going forward, while limiting the amount of data Google collects from them” over the next five years. Plaintiffs’ lawyers said that this means that “Google will collect less data from users’ private browsing sessions” and “Google will make less money from the data.”

“The settlement stops Google from surreptitiously collecting user data worth, by Google’s own estimates, billions of dollars,” Boies said. “Moreover, the settlement requires Google to delete and remediate, in unprecedented scope and scale, the data it improperly collected in the past.”

Google had already updated disclosures to users, changing the splash screen displayed “at the beginning of every Incognito session” to inform users that Google was still collecting private browsing data. Under the settlement, those disclosures to all users must be completed by March 31, after which the disclosures must remain. Google also agreed to “no longer track people’s choice to browse privately,” and the court filing said that “Google cannot roll back any of these important changes.”

Notably, the settlement does not award monetary damages to class members. Instead, Google agreed that class members retain “rights to sue Google individually for damages” through arbitration, which, users’ lawyers wrote, “is important given the significant statutory damages available under the federal and state wiretap statutes.”

“These claims remain available for every single class member, and a very large number of class members recently filed and are continuing to file complaints in California state court individually asserting those damages claims in their individual capacities,” the court filing said.

While “Google supports final approval of the settlement,” the company “disagrees with the legal and factual characterizations contained in the motion,” the court filing said. Google spokesperson José Castañeda told Ars that the tech giant thinks that the “data being deleted isn’t as significant” as Boies represents, confirming that Google was “pleased to settle this lawsuit, which we always believed was meritless.”

“The plaintiffs originally wanted $5 billion and are receiving zero,” Castañeda said. “We never associate data with users when they use Incognito mode. We are happy to delete old technical data that was never associated with an individual and was never used for any form of personalization.”

While Castañeda said that Google was happy to delete the data, a footnote in the court filing noted that initially, “Google claimed in the litigation that it was impossible to identify (and therefore delete) private browsing data because of how it stored data.” Under the settlement, however, Google has agreed “to remediate 100 percent of the data set at issue.”

Mitigation efforts include deleting fields Google used to detect users in Incognito mode, “partially redacting IP addresses,” and deleting “detailed URLs, which will prevent Google from knowing the specific pages on a website a user visited when in private browsing mode.” Keeping “only the domain-level portion of the URL (i.e., only the name of the website) will vastly improve user privacy by preventing Google (or anyone who gets their hands on the data) from knowing precisely what users were browsing,” the court filing said.
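The domain-only redaction the filing describes is simple to illustrate. The sketch below shows the general technique of stripping a URL down to its scheme and hostname; it is an illustrative example, not Google’s actual remediation code:

```python
from urllib.parse import urlsplit

def redact_to_domain(url: str) -> str:
    """Keep only the scheme and hostname, dropping path, query, and fragment."""
    parts = urlsplit(url)
    return f"{parts.scheme}://{parts.hostname}"

# The specific page visited is no longer recoverable from the redacted value.
print(redact_to_domain("https://example.com/health/condition?q=symptoms"))
# → https://example.com
```

After this kind of transformation, the stored record reveals that a site was visited but not which page on it, which is the privacy improvement the plaintiffs’ lawyers describe.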

Because Google did not oppose the motion for final approval, US District Judge Yvonne Gonzalez Rogers is expected to issue an order approving the settlement on July 30.

Google agrees to delete Incognito data despite prior claim that’s “impossible”


Google says running AI models on phones is a huge RAM hog

8GB of RAM ought to be enough for anybody —

Google wants AI models to be loaded 24/7, so 8GB of RAM might not be enough.

The Google Gemini logo.

In early March, Google made the odd announcement that only one of its two latest smartphones, the Pixel 8 and Pixel 8 Pro, would be able to run its latest AI model, called “Google Gemini.” Despite having very similar specs, the smaller Pixel 8 wouldn’t get the new AI model, with the company citing mysterious “hardware limitations” as the reason. It was a strange statement considering that Google designed and marketed the Pixel 8 to be AI-centric, designed a smartphone-centric AI model called “Gemini Nano,” and still couldn’t make the two work together.

A few weeks later, Google is backtracking somewhat. The company announced on the Pixel Phone Help forum that the smaller Pixel 8 actually will get Gemini Nano in the next big quarterly Android release, which should happen in June. There’s a catch, though—while the Pixel 8 Pro will get Gemini Nano as a user-facing feature, on the Pixel 8, it’s only being released “as a developer option.” That means you’ll be able to turn it on only via the hidden Developer Options menu in the settings, and most people will miss out on it.

Google’s Seang Chau, VP of devices and services software, explained the decision on the company’s in-house “Made by Google” podcast. “The Pixel 8 Pro, having 12GB of RAM, was a perfect place for us to put [Gemini Nano] on the device and see what we could do,” Chau said. “When we looked at the Pixel 8 as an example, the Pixel 8 has 4GB less memory, and it wasn’t as easy of a call to just say, ‘all right, we’re going to enable it on Pixel 8 as well.'” According to Chau, Google hesitated because it doesn’t want to “degrade the experience” on the smaller Pixel 8, which has only 8GB of RAM.

Chau went on to describe what it’s like to have a large language model like Gemini Nano on your phone, and it sounds like there are big trade-offs involved. Google wants some of the AI models to be “RAM-resident” so they’re always loaded in memory. One such feature is “smart reply,” which tries to auto-generate text replies.

Chau told the podcast, “Smart Reply is something that requires the models to be RAM-resident so that it’s available all the time. You don’t want to wait for the model to load on a Gboard reply, so we keep it resident.” On the Pixel 8 Pro, smart reply can be turned on and off via the normal keyboard settings, but on the Pixel 8, you’ll need to turn on the developer flag first.

The bigger Pixel 8 Pro gets the latest AI features. The smaller model will have it locked behind a developer option.

So unlike an app, which can be loaded and unloaded as you use it, running something like Gemini Nano could mean permanently losing what is apparently a big chunk of system memory. The baseline of 8GB of RAM for Android phones may need to be increased again in the future. The high mark we’ve seen for phones is 24GB of RAM, and the bigger flagships usually have 12GB or 16GB of RAM, so it’s certainly doable.
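A back-of-envelope estimate shows why a permanently RAM-resident model is such a costly proposition. The parameter count and quantization level below are illustrative assumptions, not published figures for Gemini Nano:

```python
def model_ram_gb(params_billions: float, bits_per_weight: int) -> float:
    """Approximate resident memory for model weights alone.

    Ignores activations, KV cache, and runtime overhead, so this is a
    lower bound on the memory a loaded model actually occupies.
    """
    bytes_total = params_billions * 1e9 * bits_per_weight / 8
    return bytes_total / 2**30  # convert bytes to GiB

# A hypothetical ~3-billion-parameter on-device model at 4-bit quantization:
print(round(model_ram_gb(3.0, 4), 2))  # roughly 1.4 GiB, held permanently
```

Losing well over a gigabyte of an 8GB phone’s memory to an always-loaded model, before counting runtime overhead, is the kind of trade-off Chau is describing.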

Google’s Gemini Nano model is also shipping on the Galaxy S24 lineup, and the base model there has 8GB of RAM, too. When Google originally cited hardware limitations on the Pixel 8 for the feature’s absence, its explanation was confusing—if the base-model S24 can run it, the Pixel 8 should be able to as well. It’s all about how much of a trade-off you’re willing to make in available memory for apps, though. Chau says the team is “still doing system health validation because even if you’re a developer, you might want to use your phone on a daily basis.”

The elephant in the room, though, is that as a user, I don’t even know if I want Gemini Nano on my phone. We’re at the peak of the generative AI hype cycle, and Google has its own internal reasons (the stock market) for pushing AI so hard. While visiting ChatGPT and asking it questions can be useful, that’s just an app. Actually useful OS-level generative AI features are few and far between. I don’t really need a keyboard to auto-generate replies. If it’s just going to use up a bunch of RAM that could be used by apps, I might want to turn it off.



Facebook secretly spied on Snapchat usage to confuse advertisers, court docs say

“I can’t think of a good argument for why this is okay” —

Zuckerberg told execs to “figure out” how to spy on encrypted Snapchat traffic.


Unsealed court documents have revealed more details about a secret Facebook project initially called “Ghostbusters,” designed to sneakily access encrypted Snapchat usage data to give Facebook a leg up on its rival, just when Snapchat was experiencing rapid growth in 2016.

The documents were filed in a class-action lawsuit from consumers and advertisers, accusing Meta of anticompetitive behavior that blocks rivals from competing in the social media ads market.

“Whenever someone asks a question about Snapchat, the answer is usually that because their traffic is encrypted, we have no analytics about them,” Facebook CEO Mark Zuckerberg (who has since rebranded his company as Meta) wrote in a 2016 email to Javier Olivan.

“Given how quickly they’re growing, it seems important to figure out a new way to get reliable analytics about them,” Zuckerberg continued. “Perhaps we need to do panels or write custom software. You should figure out how to do this.”

At the time, Olivan was Facebook’s head of growth, but now he’s Meta’s chief operating officer. He responded to Zuckerberg’s email saying that he would have the team from Onavo—a controversial traffic-analysis app acquired by Facebook in 2013—look into it.

Olivan told the Onavo team that he needed “out of the box thinking” to satisfy Zuckerberg’s request. He “suggested potentially paying users to ‘let us install a really heavy piece of software'” to intercept users’ Snapchat data, a court document shows.

What the Onavo team eventually came up with was a project internally known as “Ghostbusters,” an obvious reference to Snapchat’s logo featuring a white ghost. Later, as the project grew to include other Facebook rivals, including YouTube and Amazon, the project was called the “In-App Action Panel” (IAAP).

The IAAP program’s purpose was to gather granular insights into users’ engagement with rival apps to help Facebook develop products as needed to stay ahead of competitors. For example, two months after Zuckerberg’s 2016 email, Meta launched Stories, a Snapchat copycat feature, on Instagram, which the Motley Fool noted rapidly became a key ad revenue source for Meta.

In an email to Olivan, the Onavo team described the “technical solution” devised to help Zuckerberg figure out how to get reliable analytics about Snapchat users. It worked by “develop[ing] ‘kits’ that can be installed on iOS and Android that intercept traffic for specific sub-domains, allowing us to read what would otherwise be encrypted traffic so we can measure in-app usage,” the Onavo team said.

Olivan was told that these so-called “kits” used a “man-in-the-middle” attack of the kind typically employed by hackers to secretly intercept data passed between two parties. Users were recruited by third parties who distributed the kits “under their own branding,” so users wouldn’t connect the kits to Onavo unless they analyzed the traffic with a specialized tool like Wireshark. TechCrunch reported in 2019 that teens were sometimes paid to install these kits. After that report, Facebook promptly shut down the project.
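For background on why the scheme depended on substituted certificates: a TLS client that pins the fingerprint of the certificate it expects can detect this kind of interception, because an intercepting proxy must present its own certificate, which hashes to a different value. A minimal sketch of the detection side follows; the pinned value would be recorded out of band, and nothing here comes from the case documents:

```python
import hashlib
import socket
import ssl

def fingerprint(der_cert: bytes) -> str:
    """SHA-256 fingerprint of a DER-encoded certificate."""
    return hashlib.sha256(der_cert).hexdigest()

def server_fingerprint(hostname: str, port: int = 443) -> str:
    """Connect over TLS and fingerprint the server's leaf certificate."""
    ctx = ssl.create_default_context()
    with socket.create_connection((hostname, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
            return fingerprint(tls.getpeercert(binary_form=True))

# A pinning client ships the genuine fingerprint and refuses the connection
# when the observed one differs -- which is exactly what happens when an
# intercepting proxy substitutes its own certificate for the real one.
```

Apps that did not pin certificates, and users who trusted whatever root certificate the kits installed, had no such signal, which is why the interception went unnoticed outside of tools like Wireshark.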

This “man-in-the-middle” tactic, consumers and advertisers suing Meta have alleged, “was not merely anticompetitive, but criminal,” seemingly violating the Wiretap Act. It was used to snoop on Snapchat starting in 2016, on YouTube from 2017 to 2018, and on Amazon in 2018, relying on creating “fake digital certificates to impersonate trusted Snapchat, YouTube, and Amazon analytics servers to redirect and decrypt secure traffic from those apps for Facebook’s strategic analysis.”

Ars could not reach Snapchat, Google, or Amazon for comment.

Facebook allegedly sought to confuse advertisers

Not everyone at Facebook supported the IAAP program. “The company’s highest-level engineering executives thought the IAAP Program was a legal, technical, and security nightmare,” another court document said.

Pedro Canahuati, then-head of security engineering, warned that incentivizing users to install the kits did not necessarily mean that users understood what they were consenting to.

“I can’t think of a good argument for why this is okay,” Canahuati said. “No security person is ever comfortable with this, no matter what consent we get from the general public. The general public just doesn’t know how this stuff works.”

Mike Schroepfer, then-chief technology officer, argued that Facebook wouldn’t want rivals to employ a similar program analyzing their encrypted user data.

“If we ever found out that someone had figured out a way to break encryption on [WhatsApp] we would be really upset,” Schroepfer said.

While the unsealed emails detailing the project have recently raised eyebrows, Meta’s spokesperson told Ars that “there is nothing new here—this issue was reported on years ago. The plaintiffs’ claims are baseless and completely irrelevant to the case.”

According to Business Insider, advertisers suing said that Meta never disclosed its use of Onavo “kits” to “intercept rivals’ analytics traffic.” This is seemingly relevant to their case alleging anticompetitive behavior in the social media ads market, because Facebook’s conduct, allegedly breaking wiretapping laws, afforded Facebook an opportunity to raise its ad rates “beyond what it could have charged in a competitive market.”

Since the documents were unsealed, Meta has responded with a court filing that said: “Snapchat’s own witness on advertising confirmed that Snap cannot ‘identify a single ad sale that [it] lost from Meta’s use of user research products,’ does not know whether other competitors collected similar information, and does not know whether any of Meta’s research provided Meta with a competitive advantage.”

This conflicts with testimony from a Snapchat executive, who alleged that the project “hamper[ed] Snap’s ability to sell ads” by causing “advertisers to not have a clear narrative differentiating Snapchat from Facebook and Instagram.” Both internally and externally, “the intelligence Meta gleaned from this project was described” as “devastating to Snapchat’s ads business,” a court filing said.



Chrome launches native build for Arm-powered Windows laptops

Firefox works, too —

When the big Windows-on-Arm relaunch happens in mid-2024, Chrome will be ready.

Extreme close-up photograph of finger above Chrome icon on smartphone.

We are quickly barreling toward an age of viable Arm-powered Windows laptops with the upcoming launch of Qualcomm’s Snapdragon X Elite CPU. Hardware options are great, but getting useful computers out of them will require a lot of new software, and a big one has just launched: Chrome for Windows on Arm.

Google has had a nightly “canary” build running since January, but now it has a blog post up touting a production-ready version of Chrome for “Arm-compatible Windows PCs powered by Snapdragon.” That’s right, Qualcomm has a big hand in this release, too, with its own press announcement touting Google’s browser release for its upcoming chip. Google promises a native version of Chrome will be “fully optimized for your PC’s [Arm] hardware and operating system to make browsing the web faster and smoother.”

Apple upended laptop CPU architecture when it dumped Intel and launched the Arm-based Apple Silicon M1. A few years later, Qualcomm is ready to answer—mostly by buying a company full of Apple Silicon veterans—with the upcoming launch of the Snapdragon X Elite chip. Qualcomm claims the X Elite will bring Apple Silicon-class hardware to Windows, but the chip isn’t out yet—it’s due for a “mid-2024” release. Most of the software you’ll be running will still be compiled for x86 and need to go through a translation layer, which will slow things down, but at least your primary browser won’t have to.

Google says the release will be out this week. Assuming you don’t have an Arm laptop yet, you can visit “google.com/chrome,” scroll all the way down to the footer, and click “other platforms,” which will eventually show the new release.



Where’d my results go? Google Search’s chatbot is no longer opt-in

Google’s generative search results turn the normally stark-white results page into a range of pastels.

Last year Google brought its new obsession with AI-powered chatbots to Google Search with the launch of the “Search Generative Experience,” or “SGE.” If you opted in, SGE intercepted your Google search queries and put a giant, screen-filling generative AI chatbot response at the top of your search results. The usual 10 blue links were still there, but you had to scroll past Google’s ChatGPT clone to see them. That design choice makes outgoing web links seem like a legacy escape hatch for when the chatbot doesn’t work, and Google wants to know why more people haven’t opted in to this.

Barry Schwartz at Search Engine Land reports that Google is going to start pushing SGE out to some users, even if they haven’t opted in to the “Labs experiment.” A Google spokesperson told the site SGE will be turned on for a “subset of queries, on a small percentage of search traffic in the US.” The report says “Google told us they want to get feedback from searchers who have not opted into SGE specifically. This way they can get feedback and learn how a more general population will find this technology helpful.”

Citing his conversation with Google, Schwartz says some users will automatically see chatbot results for queries where Google thinks a chatbot “can be especially helpful.” Google will turn on the feature for “queries that are often more complex or involve questions where it may be helpful to get information from a range of web pages—like ‘how do I get marks off painted walls.'”

I don’t think anyone has spotted one of these non-opt-in SGE pages in the wild yet, so it’s unclear what the presentation will be. As an opt-in, SGE has a huge explanation page of how your search results will change. The chatbot is easily Google Search’s biggest format change ever, and having that happen automatically would be awfully confusing!

It’s also unclear if you can opt out of this. Today SGE is not compatible with Firefox, so that might be one way to skip Google’s AI obsession for now. Google Search has recently undergone a big leadership shuffle, with Liz Reid taking over as the new head of Search. Reid previously led—wait for it—the SGE team, so the prevailing theory is that we’re going to get way more AI stuff in search going forward.



Apple, Google, and Meta are failing DMA compliance, EU suspects

EU Commissioner for Internal Market Thierry Breton talks to media about non-compliance investigations against Google, Apple, and Meta under the Digital Markets Act (DMA).

Not even three weeks after the European Union’s Digital Markets Act (DMA) took effect, the European Commission (EC) announced Monday that it is already probing three out of six gatekeepers—Apple, Google, and Meta—for suspected non-compliance.

Apple will need to prove that changes to its app store and existing user options to swap out default settings easily are sufficient to comply with the DMA.

Similarly, Google’s app store rules will be probed, as well as any potentially shady practices unfairly preferencing its own services—like Google Shopping and Hotels—in search results.

Finally, Meta’s “Subscription for No Ads” option—allowing Facebook and Instagram users to opt out of personalized ad targeting for a monthly fee—may not fly under the DMA. Even if Meta follows through on its recent offer to slash these fees by nearly 50 percent, the model could be deemed non-compliant.

“The DMA is very clear: gatekeepers must obtain users’ consent to use their personal data across different services,” the EC’s commissioner for internal market, Thierry Breton, said Monday. “And this consent must be free!”

In total, the EC announced five investigations: two against Apple, two against Google, and one against Meta.

“We suspect that the suggested solutions put forward by the three companies do not fully comply with the DMA,” antitrust chief Margrethe Vestager said, ordering companies to “retain certain documents” viewed as critical to assessing evidence in the probe.

The EC’s investigations are expected to conclude within one year. If tech companies are found non-compliant, they risk fines of up to 10 percent of total worldwide turnover. Any repeat violations could spike fines to 20 percent.

“Moreover, in case of systematic infringements, the Commission may also adopt additional remedies, such as obliging a gatekeeper to sell a business or parts of it or banning the gatekeeper from acquisitions of additional services related to the systemic non-compliance,” the EC’s announcement said.

In addition to probes into Apple, Google, and Meta, the EC will scrutinize Apple’s fee structure for app store alternatives and send retention orders to Amazon and Microsoft. That makes ByteDance the only gatekeeper so far to escape “investigatory steps” as the EU fights to enforce the DMA’s strict standards. (ByteDance continues to contest its gatekeeper status.)

“These are the cases where we already have concrete evidence of possible non-compliance,” Breton said. “And this in less than 20 days of DMA implementation. But our monitoring and investigative work of course doesn’t stop here,” he added. “We may have to open other non-compliance cases soon.”

Google and Apple have both issued statements defending their current plans for DMA compliance.

“To comply with the Digital Markets Act, we have made significant changes to the way our services operate in Europe,” Google’s competition director Oliver Bethell told Ars, promising to “continue to defend our approach in the coming months.”

“We’re confident our plan complies with the DMA, and we’ll continue to constructively engage with the European Commission as they conduct their investigations,” Apple’s spokesperson told Ars. “Teams across Apple have created a wide range of new developer capabilities, features, and tools to comply with the regulation. At the same time, we’ve introduced protections to help reduce new risks to the privacy, quality, and security of our EU users’ experience. Throughout, we’ve demonstrated flexibility and responsiveness to the European Commission and developers, listening and incorporating their feedback.”

A Meta spokesperson told Ars that Meta “designed Subscription for No Ads to address several overlapping regulatory obligations, including the DMA,” promising to comply with the DMA while arguing that “subscriptions as an alternative to advertising are a well-established business model across many industries.”

The EC’s announcement came after all designated gatekeepers were required to submit DMA compliance reports and scheduled public workshops to discuss DMA compliance. Those workshops conclude tomorrow with Microsoft and appear to be partly driving the EC’s decision to probe Apple, Google, and Meta.

“Stakeholders provided feedback on the compliance solutions offered,” Vestager said. “Their feedback tells us that certain compliance measures fail to achieve their objectives and fall short of expectations.”

Apple and Google app stores probed

Under the DMA, “gatekeepers can no longer prevent their business users from informing their users within the app about cheaper options outside the gatekeeper’s ecosystem,” Vestager said. “That is called anti-steering and is now forbidden by law.”

Stakeholders told the EC that Apple’s and Google’s fee structures appear to “go against” the DMA’s “free of charge” requirement, Vestager said, because companies “still charge various recurring fees and still limit steering.”

This feedback pushed the EC to launch its first two probes under the DMA against Apple and Google.

“We will investigate to what extent these fees and limitations defeat the purpose of the anti-steering provision and by that, limit consumer choice,” Vestager said.

These probes aren’t the end of Apple’s potential app store woes in the EU, either. Breton said that the EC has “many questions on Apple’s new business model” for the app store. These include “questions on the process that Apple used for granting and terminating membership of” its developer program, following a scandal where Epic Games’ account was briefly terminated.

“We also have questions on the fee structure and several other aspects of the business model,” Breton said, vowing to “check if they allow for real opportunities for app developers in line with the letter and the spirit of the DMA.”
