United Kingdom

Google accused of shadow campaigns redirecting antitrust scrutiny to Microsoft

On Monday, Microsoft came out guns blazing, posting a blog accusing Google of “dishonestly” funding groups conducting allegedly biased studies to discredit Microsoft and mislead antitrust enforcers and the public.

In the blog, Microsoft lawyer Rima Alaily alleged that an astroturf group called the Open Cloud Coalition will launch this week and will appear to be led by “a handful of European cloud providers.” In actuality, however, those smaller companies were secretly recruited by Google, which allegedly pays them “to serve as the public face” and “obfuscate” Google’s involvement, Microsoft’s blog said. In return, Google likely offered the cloud providers cash or discounts to join, Alaily alleged.

The Open Cloud Coalition is just one part of a “pattern of shadowy campaigns” that Google has funded, both “directly and indirectly,” to muddy the antitrust waters, Alaily alleged. The only other named example that Alaily gives while documenting this supposed pattern is the US-based Coalition for Fair Software Licensing (CFSL), which Alaily said has attacked Microsoft’s cloud computing business in the US, the United Kingdom, and the European Union.

That group is led by Ryan Triplette, who Alaily said is “a well-known lobbyist for Google in Washington, DC, but Google’s affiliation isn’t disclosed publicly by the organization.” An online search confirms Triplette was formerly a lobbyist for Franklin Square Group, which Politico reported represented Google during her time there.

Ars could not immediately reach the CFSL for comment. Google’s spokesperson told Ars that the company has “been a public supporter of CFSL for more than two years” and has “no idea what evidence Microsoft cites that we are the main funder of CFSL.” If Triplette was previously a lobbyist for Google, the spokesperson said, “that’s a weird criticism to make” since it’s likely “everybody in law, policy, etc.,” has “worked for Google, Microsoft, or Amazon at some point, in some capacity.”

Apple “clearly underreporting” child sex abuse, watchdogs say

After years of controversies over plans to scan iCloud to find more child sexual abuse materials (CSAM), Apple abandoned those plans last year. Now, child safety experts have accused the tech giant of not only failing to flag CSAM exchanged and stored on its services—including iCloud, iMessage, and FaceTime—but also allegedly failing to report all the CSAM that is flagged.

The United Kingdom’s National Society for the Prevention of Cruelty to Children (NSPCC) shared UK police data with The Guardian showing that Apple is “vastly undercounting how often” CSAM is found globally on its services.

According to the NSPCC, police investigated more CSAM cases in the UK alone in 2023 than Apple reported globally for the entire year. Between April 2022 and March 2023 in England and Wales, the NSPCC found, “Apple was implicated in 337 recorded offenses of child abuse images.” But in 2023, Apple reported only 267 instances of CSAM to the National Center for Missing & Exploited Children (NCMEC), supposedly representing all the CSAM on its platforms worldwide, The Guardian reported.

Large tech companies in the US must report CSAM to NCMEC when it’s found, but while Apple reports a couple hundred CSAM cases annually, its big tech peers like Meta and Google report millions, NCMEC’s report showed. Experts told The Guardian that there’s ongoing concern that Apple “clearly” undercounts CSAM on its platforms.

Richard Collard, the NSPCC’s head of child safety online policy, told The Guardian that he believes Apple’s child safety efforts need major improvements.

“There is a concerning discrepancy between the number of UK child abuse image crimes taking place on Apple’s services and the almost negligible number of global reports of abuse content they make to authorities,” Collard told The Guardian. “Apple is clearly behind many of their peers in tackling child sexual abuse when all tech firms should be investing in safety and preparing for the rollout of the Online Safety Act in the UK.”

Outside the UK, other child safety experts shared Collard’s concerns. Sarah Gardner, the CEO of a Los Angeles-based child protection organization called the Heat Initiative, told The Guardian that she considers Apple’s platforms a “black hole” obscuring CSAM. And she expects that Apple’s efforts to bring AI to its platforms will intensify the problem, potentially making it easier to spread AI-generated CSAM in an environment where sexual predators may expect less enforcement.

“Apple does not detect CSAM in the majority of its environments at scale, at all,” Gardner told The Guardian.

Gardner agreed with Collard that Apple is “clearly underreporting” and has “not invested in trust and safety teams to be able to handle this” as it rushes to bring sophisticated AI features to its platforms. Last month, Apple announced plans to integrate ChatGPT into Siri, iOS, and macOS, perhaps setting expectations for continually enhanced generative AI features to be touted in future Apple gear.

“The company is moving ahead to a territory that we know could be incredibly detrimental and dangerous to children without the track record of being able to handle it,” Gardner told The Guardian.

So far, Apple has not commented on the NSPCC’s report. Last September, Apple did respond to the Heat Initiative’s demands to detect more CSAM, saying that rather than focusing on scanning for illegal content, its focus is on connecting vulnerable or victimized users directly with local resources and law enforcement that can assist them in their communities.

Astronomers discover technique to spot AI fakes using galaxy-measurement tools

stars in their eyes —

Researchers use technique to quantify eyeball reflections that often reveal deepfake images.

Researchers write, “In this image, the person on the left (Scarlett Johansson) is real, while the person on the right is AI-generated. Their eyeballs are depicted underneath their faces. The reflections in the eyeballs are consistent for the real person, but incorrect (from a physics point of view) for the fake person.”

In 2024, it’s almost trivial to create realistic AI-generated images of people, which has raised concerns about how such deceptive images can be detected. Researchers at the University of Hull recently unveiled a novel method for detecting AI-generated deepfake images by analyzing reflections in human eyes. The technique, presented at the Royal Astronomical Society’s National Astronomy Meeting last week, adapts tools astronomers use to study galaxies to scrutinize the consistency of light reflections in eyeballs.

Adejumoke Owolabi, an MSc student at the University of Hull, headed the research under the guidance of Dr. Kevin Pimbblet, professor of astrophysics.

Their detection technique is based on a simple principle: A pair of eyes being illuminated by the same set of light sources will typically have a similarly shaped set of light reflections in each eyeball. Many AI-generated images created to date don’t take eyeball reflections into account, so the simulated light reflections are often inconsistent between each eye.

A series of real eyes showing largely consistent reflections in both eyes.

In some ways, the astronomy angle isn’t always necessary for this kind of deepfake detection because a quick glance at a pair of eyes in a photo can reveal reflection inconsistencies, which is something artists who paint portraits have to keep in mind. But the application of astronomy tools to automatically measure and quantify eye reflections in deepfakes is a novel development.

Automated detection

In a Royal Astronomical Society blog post, Pimbblet explained that Owolabi developed a technique to detect eyeball reflections automatically and ran the reflections’ morphological features through indices to compare similarity between left and right eyeballs. Their findings revealed that deepfakes often exhibit differences between the pair of eyes.

The team applied methods from astronomy to quantify and compare eyeball reflections. They used the Gini coefficient, typically employed to measure light distribution in galaxy images, to assess the uniformity of reflections across eye pixels. A Gini value closer to 0 indicates evenly distributed light, while a value approaching 1 suggests concentrated light in a single pixel.
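To make the idea concrete, here is a minimal Python sketch of the Gini coefficient as it is commonly defined for galaxy light profiles, applied to two hypothetical eyeball crops. The variable names and placeholder pixel data are illustrative assumptions, not code from the study.

```python
import numpy as np

def gini(pixels: np.ndarray) -> float:
    """Gini coefficient of pixel intensities, in the form commonly used for
    galaxy light profiles. Roughly 0 when light is spread evenly across the
    pixels and close to 1 when it is concentrated in a few pixels."""
    x = np.sort(np.abs(np.asarray(pixels, dtype=float).ravel()))
    n = x.size
    mean = x.mean()
    if n < 2 or mean == 0:
        return 0.0
    i = np.arange(1, n + 1)
    return float(np.sum((2 * i - n - 1) * x) / (mean * n * (n - 1)))

# Hypothetical comparison of the two eyes: left_eye and right_eye stand in
# for grayscale crops of each eyeball's specular reflection.
rng = np.random.default_rng(0)
left_eye = rng.random((32, 32))   # placeholder pixel data
right_eye = rng.random((32, 32))  # placeholder pixel data
print(f"Gini difference between eyes: {abs(gini(left_eye) - gini(right_eye)):.3f}")
```

In practice, a pipeline along these lines would first have to detect and segment each eyeball before any such comparison could be made; a large left-versus-right difference would then flag the image for closer inspection.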

A series of deepfake eyes showing inconsistent reflections in each eye.

In the Royal Astronomical Society post, Pimbblet drew comparisons between how they measured eyeball reflection shape and how they typically measure galaxy shape in telescope imagery: “To measure the shapes of galaxies, we analyze whether they’re centrally compact, whether they’re symmetric, and how smooth they are. We analyze the light distribution.”

The researchers also explored the use of CAS parameters (concentration, asymmetry, smoothness), another tool from astronomy for measuring galactic light distribution. However, this method proved less effective in identifying fake eyes.
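For reference, the asymmetry term in the CAS scheme is typically computed by rotating an image 180 degrees and summing the absolute differences against the original. A simplified sketch follows, assuming a plain grayscale crop and omitting the background correction and recentering steps the full method uses.

```python
import numpy as np

def asymmetry(image: np.ndarray) -> float:
    """Simplified rotational asymmetry: compare an image with its own
    180-degree rotation. 0 means the light distribution is symmetric;
    larger values mean it is lopsided."""
    img = np.asarray(image, dtype=float)
    rotated = np.rot90(img, 2)  # rotate by 180 degrees
    total = np.abs(img).sum()
    if total == 0:
        return 0.0
    return float(np.abs(img - rotated).sum() / total)
```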

A detection arms race

While the eye-reflection technique offers a potential path for detecting AI-generated images, the method might not work if AI models evolve to incorporate physically accurate eye reflections, perhaps applied as a subsequent step after image generation. The technique also requires a clear, up-close view of eyeballs to work.

The approach also risks producing false positives, as even authentic photos can sometimes exhibit inconsistent eye reflections due to varied lighting conditions or post-processing techniques. But analyzing eye reflections may still be a useful tool in a larger deepfake detection toolset that also considers other factors such as hair texture, anatomy, skin details, and background consistency.

While the technique shows promise in the short term, Dr. Pimbblet cautioned that it’s not perfect. “There are false positives and false negatives; it’s not going to get everything,” he told the Royal Astronomical Society. “But this method provides us with a basis, a plan of attack, in the arms race to detect deepfakes.”

Apple warns proposed UK law will affect software updates around the world

Heads up —

Apple may leave the UK if required to provide advance notice of product updates.

Apple is “deeply concerned” that proposed changes to a United Kingdom law could give the UK government unprecedented power to “secretly veto” privacy and security updates to its products and services, the tech giant said in a statement provided to Ars.

If passed, potentially this spring, the amendments to the UK’s Investigatory Powers Act (IPA) could deprive not just UK users, but all users globally of important new privacy and security features, Apple warned.

“Protecting our users’ privacy and the security of their data is at the very heart of everything we do at Apple,” Apple said. “We’re deeply concerned the proposed amendments” to the IPA “now before Parliament place users’ privacy and security at risk.”

The IPA was initially passed in 2016 to ensure that UK officials had lawful access to user data to investigate crimes like child sexual exploitation or terrorism. Proposed amendments were announced last November, after a review showed that the “Act has not been immune to changes in technology over the last six years” and “there is a risk that some of these technological changes have had a negative effect on law enforcement and intelligence services’ capabilities.”

The proposed amendments would require any company that fields government data requests to notify UK officials of any updates it plans to make that could restrict the UK government’s access to this data, including any updates affecting users outside the UK.

UK officials said that this would “help the UK anticipate the risk to public safety posed by the rolling out of technology by multinational companies that precludes lawful access to data. This will reduce the risk of the most serious offenses such as child sexual exploitation and abuse or terrorism going undetected.”

According to the BBC, the House of Lords will begin debating the proposed changes on Tuesday.

Ahead of that debate, Apple described the amendments on Monday as “an unprecedented overreach by the government” that “if enacted” could allow the UK to “attempt to secretly veto new user protections globally, preventing us from ever offering them to customers.”

In a letter last year, Apple argued that “it would be improper for the Home Office to act as the world’s regulator of security technology.”

Apple told the UK Home Office that imposing “secret requirements on providers located in other countries” that apply to users globally “could be used to force a company like Apple, that would never build a backdoor, to publicly withdraw critical security features from the UK market, depriving UK users of these protections.” It could also “dramatically disrupt the global market for security technologies, putting users in the UK and around the world at greater risk,” Apple claimed.

The proposed changes, Apple said, “would suppress innovation, stifle commerce, and—when combined with purported extraterritorial application—make the Home Office the de facto global arbiter of what level of data security and encryption are permissible.”

UK defends proposed changes

The UK Home Office has repeatedly stressed that these changes do not “provide powers for the Secretary of State to approve or refuse technical changes,” but “simply” require companies “to inform the Secretary of State of relevant changes before those changes are implemented.”

“The intention is not to introduce a consent or veto mechanism or any other kind of barrier to market,” a UK Home Office fact sheet said. “A key driver for this amendment is to give operational partners time to understand the change and adapt their investigative techniques where necessary, which may in some circumstances be all that is required to maintain lawful access.”

The Home Office has also claimed that “these changes do not directly relate to end-to-end encryption,” while admitting that they “are designed to ensure that companies are not able to unilaterally make design changes which compromise exceptional lawful access where the stringent safeguards of the IPA regime are met.”

This seems to suggest that companies will not be allowed to cut off the UK government from accessing encrypted data under certain circumstances, which concerns privacy advocates who consider end-to-end encryption a vital user privacy and security protection. Earlier this month, civil liberties groups including Big Brother Watch, Liberty, Open Rights Group and Privacy International filed a joint brief opposing the proposed changes, the BBC reported, warning that passing the amendments would be “effectively transforming private companies into arms of the surveillance state and eroding the security of devices and the Internet.”

“We have always been clear that we support technological innovation and private and secure communications technologies, including end-to-end encryption, but this cannot come at a cost to public safety,” a UK government official told the BBC.

The UK government may face opposition to the amendments from more than just tech companies and privacy advocates, though. In Apple’s letter last year, the tech giant noted that the proposed changes to the IPA could conflict with EU and US laws, including the EU’s General Data Protection Regulation—considered the world’s strongest privacy law.

Under the GDPR, companies must implement measures to safeguard users’ personal data, Apple said, noting that “encryption is one means by which a company can meet” that obligation.

“Secretly installing backdoors in end-to-end encrypted technologies in order to comply with UK law for persons not subject to any lawful process would violate that obligation,” Apple argued.
