Policy

Apple abruptly abandons “buy now, pay later” service amid regulatory scrutiny

Apple has abruptly discontinued its “buy now, pay later” (BNPL) service, Apple Pay Later, which turned Apple into a money lender when it launched in the US in March 2023 and became widely available that October.

The service allowed users to split purchases of up to $1,000 into four installments repaid over six weeks, with no fees or interest. For Apple, it was likely a move to grow total Apple Pay users as the company sought to offer more core financial services through its devices.
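
Apple never published the logic behind its loan schedule, but the arithmetic of a pay-in-four plan is simple. Here is a minimal Python sketch, assuming (as Apple described the product) one installment due at purchase and the remaining three due every two weeks:

```python
from datetime import date, timedelta

def pay_in_four(total_cents: int, purchase: date) -> list[tuple[date, int]]:
    """Split a purchase into four equal installments across six weeks:
    one at purchase, then one every two weeks. Any remainder cents
    are folded into the first payment."""
    base, rem = divmod(total_cents, 4)
    return [(purchase + timedelta(weeks=2 * i), base + (rem if i == 0 else 0))
            for i in range(4)]

# A $1,000 purchase: $250 at checkout, then $250 every two weeks.
for due, cents in pay_in_four(100_000, date(2024, 6, 17)):
    print(due.isoformat(), f"${cents / 100:.2f}")
```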

Now, it appears that Apple has found a different route to offer short-term loans at checkout in Apple Pay. An Apple spokesperson told 9to5Mac that the decision to end Apple Pay Later came ahead of the company’s plan to start offering new types of installment loans globally.

“Starting later this year, users across the globe will be able to access installment loans offered through credit and debit cards, as well as lenders, when checking out with Apple Pay,” Apple’s spokesperson said. “With the introduction of this new global installment loan offering, we will no longer offer Apple Pay Later in the US.”

Apple also noted its decision to kill off the service on a support page posted Monday, confirming that “Apple Pay Later is no longer offering new loans.” Apple specified that all “existing Apple Pay Later loans and purchases are not affected,” and loans can continue to be managed through users’ wallets.

One of the biggest challenges for BNPL customers is often seeking a refund for returned purchases, but Apple has assured Apple Pay Later customers that the refund process has not changed for any existing purchases. Customers can contact Apple Support if they have “trouble with a refund,” Apple’s support page said.

Apple announced its new installment loan program at its recent annual developer event, confirming that it had partnered with banks, including Citi in the US, to provide short-term loans as a payment option in its upcoming iOS 18 operating system due out before the end of 2024. Apple’s spokesperson told 9to5Mac that unlike Apple Pay Later, which was only available in the US, installment loans will be an option offered in more countries.

“Our focus continues to be on providing our users with access to easy, secure, and private payment options with Apple Pay, and this solution will enable us to bring flexible payments to more users, in more places across the globe, in collaboration with Apple Pay enabled banks and lenders,” Apple’s spokesperson said.

In a blog post, Apple described new features “available for any Apple Pay-enabled bank or issuer to integrate in supported markets.” These features allow users to “view and redeem rewards, and access installment loan offerings from eligible credit or debit cards, when making a purchase online or in-app with iPhone and iPad,” the blog said. For users in the US, Apple will soon make it easy to “apply for loans directly through Affirm when they check out with Apple Pay.”

A brief history of the short-lived Apple Pay Later

The iPhone maker rolled out Apple Pay Later in March 2023, shortly after BNPL services had come under scrutiny from regulators globally, as The Verge reported in 2022. Early studies found that “BNPL users are twice as likely to overdraft” and estimated that 43 percent of younger BNPL users have missed a payment.

A fear quickly arose that Apple Pay Later might “normalize” reliance on BNPL lending for frivolous large purchases that customers might then struggle to repay, The Verge reported. BNPL had already become hugely popular with Gen Z shoppers eager to purchase the latest TikTok fashions they may not otherwise be able to afford, The Verge noted.

In 2021, the US Consumer Financial Protection Bureau (CFPB) launched an inquiry into BNPL, flagging emerging potential consumer risks in 2022. Those included privacy risks from data harvesting and excessive debt accumulation driven by frequently reported borrower overextension.

However, despite emerging concerns about BNPL, Apple Pay Later was immediately popular, according to a JD Power survey of 8,000 consumers. In the first three months that the service was available, nearly one-fifth of BNPL customers used Apple Pay Later. With its BNPL offering, Apple attracted new customers who were interested in trying a new BNPL service from a trusted brand, JD Power reported, posing an immediate threat to BNPL services offered by “traditional payments juggernauts” like PayPal.

At that time, Apple was well-positioned to provide short-term loans, JD Power reported, finding that the “average Apple Pay Later user tended to be more financially healthy than most other BNPL customers, potentially giving it a more sustainable user base than its competitors.”

Surgeon general’s proposed social media warning label for kids could hurt kids

US Surgeon General Vivek Murthy wants to put a warning label on social media platforms, alerting young users of potential mental health harms.

“It is time to require a surgeon general’s warning label on social media platforms stating that social media is associated with significant mental health harms for adolescents,” Murthy wrote in a New York Times op-ed published Monday.

Murthy argued that a warning label is urgently needed because the “mental health crisis among young people is an emergency,” and overuse of social media can increase adolescents’ risks of anxiety and depression and negatively affect body image.

Spiking mental health issues for young people began long before the surgeon general declared a youth behavioral health crisis during the pandemic, an April report from a New York nonprofit called the United Hospital Fund found. Between 2010 and 2022, “adolescents ages 12–17 have experienced the highest year-over-year increase in having a major depressive episode,” the report said. By 2022, 6.7 million adolescents in the US reported “suffering from one or more behavioral health condition.”

However, mental health experts have maintained that the science is divided, showing that kids can also benefit from social media depending on how they use it. Murthy’s warning label seems to ignore that tension, prioritizing awareness of potential harms even though the label could prompt parents to restrict online access in ways that end up harming some kids. The label would also seemingly fail to acknowledge known risks to young adults, whose brains continue developing after the age of 18.

To create the proposed warning label, Murthy is seeking better data from social media companies that have not always been transparent about studying or publicizing alleged harms to kids on their platforms. Last year, a Meta whistleblower, Arturo Bejar, testified to a US Senate subcommittee that Meta overlooks obvious reforms and “continues to publicly misrepresent the level and frequency of harm that users, especially children, experience” on its platforms Facebook and Instagram.

According to Murthy, the US is past the point of accepting promises from social media companies to make their platforms safer. “We need proof,” Murthy wrote.

“Companies must be required to share all of their data on health effects with independent scientists and the public—currently they do not—and allow independent safety audits,” Murthy wrote, arguing that parents need “assurance that trusted experts have investigated and ensured that these platforms are safe for our kids.”

“A surgeon general’s warning label, which requires congressional action, would regularly remind parents and adolescents that social media has not been proved safe,” Murthy wrote.

Kids need safer platforms, not a warning label

Leaving parents to police kids’ use of platforms is unacceptable, Murthy said, because their efforts are “pitted against some of the best product engineers and most well-resourced companies in the world.”

That is a nearly impossible battle for parents, Murthy argued. If platforms are allowed to ignore harms to kids while pursuing financial gains through features laser-focused on maximizing young users’ online engagement, they will “likely” perpetuate the cycle of problematic use that Murthy described in his op-ed, the American Psychological Association (APA) warned this year.

Downplayed in Murthy’s op-ed, however, is the fact that social media use is not universally harmful to kids and can be beneficial to some, especially children in marginalized groups. Monitoring this tension remains a focal point of the APA’s most recent guidance, which noted in April 2024 that “society continues to wrestle with ways to maximize the benefits of these platforms while protecting youth from the potential harms associated with them.”

“Psychological science continues to reveal benefits from social media use, as well as risks and opportunities that certain content, features, and functions present to young social media users,” APA reported.

According to the APA, platforms urgently need to enact responsible safety standards that diminish risks without restricting kids’ access to beneficial social media use.

“By early 2024, few meaningful changes to social media platforms had been enacted by industry, and no federal policies had been adopted,” the APA report said. “There remains a need for social media companies to make fundamental changes to their platforms.”

The APA has recommended a range of platform reforms, including limiting infinite scroll, imposing time limits on young users, reducing kids’ push notifications, and adding protections to shield kids from malicious actors.

Bejar agreed with the APA that platforms owe it to parents to make meaningful reforms. His ideal future would see platforms gathering more granular feedback from young users to expose harms and confront them faster. He provided senators with recommendations that platforms could use to “radically improve the experience of our children on social media” without “eliminating the joy and value they otherwise get from using such services” and without “significantly” affecting profits.

Bejar’s reforms included platforms providing young users with open-ended ways to report harassment, abuse, and harmful content that allow users to explain exactly why a contact or content was unwanted—rather than platforms limiting feedback to certain categories they want to track. This could help ensure that companies that strategically limit language in reporting categories don’t obscure the harms and also provide platforms with more information to improve services, Bejar suggested.
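
To make Bejar’s suggestion concrete, a report record along these lines might pair a fixed category (kept for aggregate tracking) with an open-ended explanation. This is a hypothetical sketch, not any platform’s actual schema:

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class UserReport:
    """Hypothetical report schema along the lines Bejar suggests: keep a
    fixed category for tracking, but let the user explain in their own
    words why the contact or content was unwanted."""
    reporter_id: str
    content_id: str
    category: str           # e.g. "harassment", "unwanted_contact"
    free_text_reason: str   # open-ended, not constrained to categories
    created_at: datetime = field(default_factory=datetime.utcnow)
```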

By improving feedback mechanisms, Bejar said, platforms could more easily adjust kids’ feeds to stop recommending unwanted content. The APA’s report agreed that this was an obvious area for platform improvement, finding that “the absence of clear and transparent processes for addressing reports of harmful content makes it harder for youth to feel protected or able to get help in the face of harmful content.”

Ultimately, the APA, Bejar, and Murthy all seem to agree that it is important to bring in outside experts to help platforms come up with better solutions, especially as technology advances. The APA warned that “AI-recommended content has the potential to be especially influential and hard to resist” for some of the youngest users online (ages 10–13).

Meta halts plans to train AI on Facebook, Instagram posts in EU

Not so fast —

Meta was going to start training AI on Facebook and Instagram posts on June 26.

Meta has apparently paused plans to process mounds of user data to bring new AI experiences to Europe.

The decision comes after data regulators rebuffed the tech giant’s claims that it had “legitimate interests” in processing European Union- and European Economic Area (EEA)-based Facebook and Instagram users’ data—including personal posts and pictures—to train future AI tools.

There’s not much information available yet on Meta’s decision. But Meta’s EU regulator, the Irish Data Protection Commission (DPC), posted a statement confirming that Meta made the move after ongoing discussions with the DPC about compliance with the EU’s strict data privacy laws, including the General Data Protection Regulation (GDPR).

“The DPC welcomes the decision by Meta to pause its plans to train its large language model using public content shared by adults on Facebook and Instagram across the EU/EEA,” the DPC said. “This decision followed intensive engagement between the DPC and Meta. The DPC, in co-operation with its fellow EU data protection authorities, will continue to engage with Meta on this issue.”

The European Center for Digital Rights, known as Noyb, had filed 11 complaints across the EU and intended to file more to stop Meta from moving forward with its AI plans. The DPC initially gave Meta AI the green light to proceed but has now made a U-turn, Noyb said.

Meta’s privacy policy still requires an update

In a blog post, Meta had previously teased new AI features coming to the EU, including everything from customized stickers for chats and stories to Meta AI, a “virtual assistant you can access to answer questions, generate images, and more.” Meta had argued that training on EU users’ personal data was necessary so that AI services could reflect “the diverse cultures and languages of the European communities who will use them.”

Before the pause, the company had been hoping to rely “on the legal basis of ‘legitimate interests’” to process the data, because it’s needed “to improve AI at Meta.” But Noyb and EU data regulators had argued that Meta’s legal basis did not comply with the GDPR, with the Norwegian Data Protection Authority arguing that “the most natural thing would have been to ask the users for their consent before their posts and images are used in this way.”

Rather than ask for consent, however, Meta had given EU users until June 26 to opt out. Noyb had alleged that in going this route, Meta planned to use “dark patterns” to thwart AI opt-outs in the EU and collect as much data as possible to fuel undisclosed AI technologies. Noyb urgently argued that once users’ data is in the system, “users seem to have no option of ever having it removed.”

Noyb said that the “obvious explanation” for Meta seemingly halting its plans was pushback from EU officials, but the privacy advocacy group also warned EU users that Meta’s privacy policy has not yet been fully updated to reflect the pause.

“We welcome this development but will monitor this closely,” Max Schrems, Noyb chair, said in a statement provided to Ars. “So far there is no official change of the Meta privacy policy, which would make this commitment legally binding. The cases we filed are ongoing and will need a determination.”

Ars was not immediately able to reach Meta for comment.

Apple punishes women for same behaviors that get men promoted, lawsuit says

Apple has spent years “intentionally, knowingly, and deliberately paying women less than men for substantially similar work,” a proposed class action lawsuit filed in California on Thursday alleged.

A victory for the plaintiffs could mean that more than 12,000 current and former female employees in California collectively claw back potentially millions in lost wages from an ever-widening pay gap allegedly perpetuated by Apple policies.

The lawsuit was filed by two employees who have each been with Apple for more than a decade, Justina Jong and Amina Salgado. They claimed that Apple violated California employment laws between 2020 and 2024 by unfairly discriminating against California-based female employees in Apple’s engineering, marketing, and AppleCare divisions and “systematically” paying women “lower compensation than men with similar education and experience.”

Apple has allegedly displayed an ongoing bias toward male employees, offering them higher starting salaries and promoting them for the “same behaviors” for which female employees were punished.

Jong, currently a customer/technical training instructor on Apple’s global developer relations/app review team, said that she only became aware of a stark pay disparity by chance.

“One day, I saw a W-2 left on the office printer,” Jong said. “It belonged to my male colleague, who has the same job position. I noticed that he was being paid almost $10,000 more than me, even though we performed substantially similar work. This revelation made me feel terrible.”

But Salgado had long been aware of the problem. Salgado, currently on a temporary assignment as a development manager in the AppleCare division, spent years complaining about her lower wages, prompting Apple internal investigations that never led to salary increases.

Finally, late last year, Salgado’s complaints were vindicated after Apple hired a third-party firm that concluded she was “paid less than men performing substantially similar work.” Apple subsequently increased her pay rate but dodged responsibility for the back pay that Salgado now seeks to recover.

Eve Cervantez, a lawyer for the women suing, said in a press release shared with Ars that these women were put in “a no-win situation.”

“Once women are hired into a lower pay range at Apple, subsequent pay raises or any bonuses are tracked accordingly, meaning they don’t correct the gender pay gap,” Cervantez said. “Instead, they perpetuate and widen the gap because raises and bonuses are based on a percentage of the employee’s base salary.”
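
A quick worked example shows the dynamic Cervantez describes. With hypothetical salaries and an assumed identical 4 percent annual raise, an initial $10,000 gap grows to roughly $14,800 after a decade:

```python
def pay_after(base: float, raise_pct: float, years: int) -> float:
    """Compound identical percentage raises on a starting base salary."""
    return base * (1 + raise_pct) ** years

# Hypothetical figures: she starts at $90k, he starts at $100k.
for years in (0, 5, 10):
    her = pay_after(90_000, 0.04, years)
    him = pay_after(100_000, 0.04, years)
    print(f"year {years:2d}: gap = ${him - her:,.0f}")
# year  0: gap = $10,000
# year  5: gap = $12,167
# year 10: gap = $14,802
```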

Apple did not immediately respond to Ars’ request to comment.

Tesla investors sue Elon Musk for diverting carmaker’s resources to xAI

Tesla sued by shareholders —

Lawsuit: Musk’s xAI poached Tesla employees, Nvidia GPUs, and data.

A group of Tesla investors yesterday sued Elon Musk, the company, and its board members, alleging that Tesla was harmed by Musk’s diversion of resources to his xAI venture. The diversion of resources includes hiring AI employees away from Tesla, diverting microchips from Tesla to X (formerly Twitter) and xAI, and “xAI’s use of Tesla’s data to develop xAI’s own software/hardware, all without compensation to Tesla,” the lawsuit said.

The lawsuit in Delaware Court of Chancery was filed by three Tesla shareholders: the Cleveland Bakers and Teamsters Pension Fund, Daniel Hazen, and Michael Giampietro. It seeks financial damages for Tesla and the disgorging of Musk’s equity stake in xAI to Tesla.

“Could the CEO of Coca-Cola loyally start a competing soft-drink company on the side, then divert scarce ingredients from Coca-Cola to the startup? Could the CEO of Goldman Sachs loyally start a competing financial advisory company on the side, then hire away key bankers from Goldman Sachs to the startup? Could the board of either company loyally permit such conduct without doing anything about it? Of course not,” the lawsuit says.

Tesla and Musk have touted artificial intelligence “as the key to Tesla’s future” and described Tesla as an AI company, the lawsuit said. By founding xAI, Musk started a competing company “and then divert[ed] talent and resources from his corporation to the startup,” with the apparent approval of Tesla’s board, the lawsuit said.

After founding xAI in March 2023, “Musk hired away numerous key AI-focused employees from Tesla to xAI” and later diverted Nvidia GPUs from Tesla to X and xAI, the lawsuit said. The GPU diversion was recently confirmed by Nvidia emails that were revealed in a report by CNBC.

GPU diversion

Before founding xAI, “Musk stated that Tesla needed more Nvidia H100 GPUs than Nvidia had available for sale, a common problem in the AI industry… After Musk established xAI, however, he began personally directing Nvidia to redirect GPUs from Tesla to xAI and X,” the lawsuit said.

The investors suing Musk and Tesla don’t buy Musk’s justification. “For his part, Musk dubiously claimed in a post on X following the publication of the CNBC report that, contrary to his prior public representations about Tesla’s appetite for Nvidia hardware, ‘Tesla had no place to send the Nvidia chips to turn them on, so they would have just sat in a warehouse,'” the lawsuit said.

The complaint says that a pitch deck to potential investors in xAI said the new firm “intended to harvest data from X and Tesla to help xAI catch up to AI companies OpenAI and Anthropic. X would provide data from social media users, and Tesla would provide video data from its cars.”

“It is apparent that Musk has pitched prospective investors in xAI partly by exploiting information owned by Tesla,” the lawsuit also said. “On information and belief, Musk has already or intends to have xAI harvest data from Tesla without appropriately compensating Tesla even though X has already been provided xAI equity for its data contributions. None of this would be necessary if Musk properly created xAI as a subsidiary of Tesla.”

We contacted Tesla today and will update this article if the company provides a response to the lawsuit. The filing of the complaint was previously reported by TechCrunch.

Same court nullified Musk’s pay

The Delaware Court of Chancery is the same one that nullified Elon Musk’s 2018 pay package following a different investor lawsuit. Tesla shareholders yesterday re-approved the $44.9 billion pay plan, with 72 percent voting yes on the proposal, but the re-vote doesn’t end the legal battle over Musk’s pay. Tesla shareholders also approved a corporate move from Delaware to Texas, which was proposed by Musk and Tesla after the pay-plan court ruling.

That drama factors into the lawsuit filed yesterday. After the pay ruling that effectively reduced Musk’s stake in Tesla, “Musk accelerated his efforts to grow xAI” by “raising billions of dollars and poaching at least eleven employees from Tesla,” the new lawsuit said. The lawsuit also points to Musk’s threat “that he would only build an AI and robotics business within Tesla if Tesla gave him at least 25% voting power.”

The lawsuit accuses Tesla’s board of “permit[ting] Musk to create and grow xAI, hindering Tesla’s AI development efforts and diverting billions of dollars in value from Tesla to xAI.” The board’s failure to act is alleged to be “an obvious breach of its members’ unyielding fiduciary duty to protect the interests of Tesla and its stockholders.”

The Tesla board members’ close ties to Musk could play a key role in the case. In the pay-plan ruling, Delaware Court of Chancery Judge Kathaleen McCormick found that most of Tesla’s board members were beholden to Musk or had compromising conflicts. The lawsuit filed yesterday points to the court’s previous findings on those board members, including Kimbal Musk, Elon Musk’s brother; and James Murdoch, a longtime friend of Musk.

Apple set to be first Big Tech group to face charges under EU digital law

non-compliance —

Brussels to announce iPhone maker is failing to open up its App Store to competition.

Brussels is set to charge Apple over allegedly stifling competition on its mobile app store, the first time EU regulators have used new digital rules to target a Big Tech group.

The European Commission has determined that the iPhone maker is not complying with obligations to allow app developers to “steer” users to offers outside its App Store without imposing fees on them, according to three people with close knowledge of its investigation.

The charges would be the first brought against a tech company under the Digital Markets Act, landmark legislation designed to force powerful “online gatekeepers” to open up their businesses to competition in the EU.

The commission, the EU’s executive arm, said in March it was investigating Apple, as well as Alphabet and Meta, under powers granted by the DMA. An announcement over the charges against Apple was expected in the coming weeks, said two people with knowledge of the case.

These people said regulators have only made preliminary findings, and Apple could still take actions to correct its practices, which could then lead regulators to reassess any final decision. They added the timing of any announcement could also shift.

The EU could also decide to announce charges against other tech groups, with regulators still investigating whether Google parent Alphabet is favoring its own app store and Facebook owner Meta’s use of personal data for advertising.

If found to be breaking the DMA, Apple faces daily penalties for non-compliance of up to 5 percent of its average daily worldwide turnover, which is currently just over $1 billion.
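
For scale, a back-of-the-envelope calculation shows how the cited figures line up. The annual revenue number below is an assumption based on Apple’s reported fiscal 2023 results, not a figure from the report:

```python
# Rough estimate only; $383B annual revenue is an assumption
# based on Apple's reported fiscal 2023 results.
annual_revenue = 383e9
daily_turnover = annual_revenue / 365     # ~ $1.05 billion, matching the text
max_daily_fine = 0.05 * daily_turnover    # ~ $52 million per day
print(f"daily turnover: ${daily_turnover / 1e9:.2f}B, "
      f"max fine: ${max_daily_fine / 1e6:.0f}M/day")
```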

The move comes as competition watchdogs around the world increase their scrutiny of Big Tech companies and their market dominance. In March, the US brought an antitrust case against Apple for allegedly using its power in the smartphone sector to squash rivals and limit consumer choice.

Epic Games, which sued Apple over the App Store in 2020, is also awaiting a decision from a California federal judge on whether Apple failed to comply with a US injunction prohibiting its steering rules, following a series of court hearings over recent weeks.

In January, Apple announced historic changes to its iOS mobile software, App Store, and Safari browser in the EU.

The changes were an effort to placate regulators in Brussels and meant Apple would allow users to access rival app stores and download apps from other sources. The changes also included slashing the fee paid by companies using the App Store to sell digital goods and services from 30 percent to 17 percent.

However, the EU is also looking at whether these fee changes properly adhere to its new digital rules. Apple introduced new charges in Europe, including a “core technology fee” of 50 cents charged to developers of apps with more than 1 million users for each first annual install by a user. Apple will also charge an additional 3 percent fee to app developers that use its payment processor.
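
As a sketch of how that fee scales, assuming (per Apple’s published EU terms) that the first 1 million first annual installs each year are exempt and each install beyond that costs 50 cents:

```python
def core_technology_fee(first_annual_installs: int) -> float:
    """Hypothetical model of the fee described above: 50 cents per
    first annual install beyond a 1 million install threshold."""
    FREE_THRESHOLD = 1_000_000
    FEE_PER_INSTALL = 0.50
    return max(0, first_annual_installs - FREE_THRESHOLD) * FEE_PER_INSTALL

# An app with 3 million first annual installs would owe about $1M a year.
print(f"${core_technology_fee(3_000_000):,.0f}")  # $1,000,000
```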

Some developers have argued they could face higher charges as a result of the fee changes. The EU could also announce initial charges over these developer fees, people familiar with the commission’s thinking said.

According to analysis by Sensor Tower, consumer spending on Apple’s App Store throughout the second quarter of 2024 was “relatively flat,” suggesting the EU rules have yet to affect the company’s bottom line.

Apple declined to comment but pointed to an earlier statement that said: “We’re confident our plan complies with the DMA, and we’ll continue to constructively engage with the European Commission as they conduct their investigations.”

The EU declined to comment.

© 2024 The Financial Times Ltd. All rights reserved. Please do not copy and paste FT articles and redistribute by email or post to the web.

Cop busted for unauthorized use of Clearview AI facial recognition resigns

Secret face scans —

Indiana cop easily hid frequent personal use of Clearview AI face scans.

An Indiana cop has resigned after it was revealed that he frequently used Clearview AI facial recognition technology to track down social media users not linked to any crimes.

According to a press release from the Evansville Police Department, this was a clear “misuse” of Clearview AI’s controversial face scan tech, which some US cities have banned over concerns that it gives law enforcement unlimited power to track people in their daily lives.

To help identify suspects, police can scan what Clearview AI describes on its website as “the world’s largest facial recognition network.” The database pools more than 40 billion images collected from news media, mugshot websites, public social media, and other open sources.

But these scans must always be linked to an investigation, and Evansville Police Chief Philip Smith said that instead, the disgraced cop repeatedly disguised his personal searches by deceptively “utilizing an actual case number associated with an actual incident” to evade detection.

Smith’s department discovered the officer’s unauthorized use after performing an audit before renewing its Clearview AI subscription in March. That audit showed “an anomaly of very high usage of the software by an officer whose work output was not indicative of the number of inquiry searches that they had.”
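
The department hasn’t described its audit method, but the anomaly it flagged (search volume out of proportion to an officer’s casework) is straightforward to check programmatically. A hypothetical sketch, with illustrative field names:

```python
from collections import Counter

def flag_usage_anomalies(search_log: list[dict], cases_worked: dict[str, int],
                         ratio_threshold: float = 5.0) -> list[str]:
    """Flag officers whose face-scan count far exceeds what their
    casework would explain. All field names here are illustrative."""
    scans = Counter(entry["officer_id"] for entry in search_log)
    return [officer for officer, n in scans.items()
            if n / max(cases_worked.get(officer, 0), 1) > ratio_threshold]
```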

Another clue to the officer’s abuse of the tool was that most face scans conducted during investigations are “usually live or CCTV images”—shots taken in the wild—Smith said. However, the officer who resigned was mainly searching social media images, which was a red flag.

An investigation quickly “made clear that this officer was using Clearview AI” for “personal purposes,” Smith said, declining to name the officer or verify whether the targets of these searches were notified.

As a result, Smith recommended that the department terminate the officer. However, the officer resigned “before the Police Merit Commission could make a final determination on the matter,” Smith said.

Easily dodging Clearview AI’s built-in compliance features

Clearview AI touts the face image network as a public safety resource, promising to help law enforcement make arrests sooner while committing to “ethical and responsible” use of the tech.

On its website, the company says that it understands that “law enforcement agencies need built-in compliance features for increased oversight, accountability, and transparency within their jurisdictions, such as advanced admin tools, as well as user-friendly dashboards, reporting, and metrics tools.”

To “help deter and detect improper searches,” its website says that a case number and crime type are required, and “every agency is required to have an assigned administrator that can see an in-depth overview of their organization’s search history.”

It seems that neither of those safeguards stopped the Indiana cop from repeatedly scanning social media images for undisclosed personal reasons: he satisfied the case number and crime type requirement with real case numbers and went unnoticed by his agency’s administrator. The incident could have broader implications in the US, where Clearview AI’s technology has been widely used by police to conduct nearly 1 million searches, CEO Hoan Ton-That told the BBC last year.

In 2022, Ars reported when Clearview AI told investors it had ambitions to collect more than 100 billion face images, ensuring that “almost everyone in the world will be identifiable.” As privacy concerns about the controversial tech mounted, it became hotly debated. Facebook moved to stop the company from scraping faces on its platform, and the ACLU won a settlement that banned Clearview AI from contracting with most businesses. But the US government retained access to the tech, including “hundreds of police forces across the US,” Ton-That told the BBC.

Most law enforcement agencies are hesitant to discuss their Clearview AI tactics in detail, the BBC reported, so it’s often unclear who has access and why. But the Miami Police confirmed that “it uses this software for every type of crime,” the BBC reported.

Now, at least one Indiana police department has confirmed that an officer can sneakily abuse the tech and conduct unapproved face scans with apparent ease.

According to Kashmir Hill—the journalist who exposed Clearview AI’s tech—the disgraced cop was following in the footsteps of “billionaires, Silicon Valley investors, and a few high-wattage celebrities” who got early access to Clearview AI tech in 2020 and considered it a “superpower on their phone, allowing them to put a name to a face and dig up online photos of someone that the person might not even realize were online.”

Advocates have warned that stronger privacy laws are needed to stop law enforcement from abusing Clearview AI’s network, which Hill described as “a Shazam for people.”

Smith said the officer disregarded department guidelines by conducting the improper face scans.

“To ensure that the software is used for its intended purposes, we have put in place internal operational guidelines and adhere to the Clearview AI terms of service,” Smith said. “Both have language that clearly states that this is a tool for official use and is not to be used for personal reasons.”

Musk says he’s winning Tesla shareholder vote on pay plan by “wide margin”

Tesla shareholder vote —

Court battle over pay plan will continue even if Musk wins shareholder vote.

Elon Musk said last night that Tesla shareholders provided enough votes to re-approve his 2018 pay package, which was previously nullified by a Delaware judge. A proposal to transfer Tesla’s state of incorporation from Delaware to Texas also has enough votes to pass, according to a post by Musk.

“Both Tesla shareholder resolutions are currently passing by wide margins!” Musk wrote. His post included charts indicating that both shareholder resolutions had more than enough yes votes to surpass the “guaranteed win” threshold.

The Wall Street Journal notes that the “results provided by Musk are preliminary, and voters can change their votes until the polls close at the meeting on Thursday.” The shareholder meeting is at 3:30 pm Central Time. An official announcement on the results is expected today.

Under a settlement with the Securities and Exchange Commission, Musk is required to get pre-approval from a Tesla securities lawyer for social media posts that may contain information material to the company or its shareholders. Tesla today submitted an SEC filing containing a screenshot of Musk’s X post describing the preliminary results, but the company otherwise did not make an announcement.

Legal uncertainty remains

The vote isn’t the last word on the pay package that was once estimated to be worth $56 billion and more recently valued at $46 billion based on Tesla’s stock price. The pay plan was nullified by a Delaware Court of Chancery ruling in January 2024 after a lawsuit filed by a shareholder.

Judge Kathaleen McCormick ruled that the pay plan was unfair to Tesla’s shareholders, saying the proxy information given to investors before 2018 was materially deficient. McCormick said that “the proxy statement inaccurately described key directors as independent and misleadingly omitted details about the process.”

As the Financial Times wrote, there would still be legal uncertainty even if shareholders re-approve the pay deal today:

In asking shareholders to approve of the same 2018 pay package that was nullified by the Delaware Court of Chancery in January, Tesla is relying on a legal principle known as “ratification,” in which the validity of a corporate action can be cemented by a shareholder vote. Ratification, the company told shareholders in a proxy note earlier this year, “will restore Tesla’s stockholder democracy.”

This instance, however, is the first time a company has tried to leverage that principle after its board was found to have breached its fiduciary duty to approve the deal in the first place.

Even Tesla admits it does not know what happens next. “The [Tesla board] special committee and its advisers noted that they could not predict with certainty how a stockholder vote to ratify the 2018 CEO performance award would be treated under Delaware law in these novel circumstances,” it said in a proxy statement sent to shareholders.

The BBC writes that “legal experts say it is not clear if a court that blocked the deal will accept the re-vote, which is not binding, and allow the company to restore the pay package.”

New lawsuit challenges re-vote

The re-vote was already being challenged in the same Delaware court that nullified the 2018 vote. Donald Ball, who owns 28,245 shares of Tesla stock, last week sued Musk and Tesla in a complaint that alleges the Tesla “Board has not disclosed a complete or fair picture” to shareholders of the impact of re-approving Musk’s pay plan.

That includes “radical tax implications for Tesla that will potentially wipe out Tesla’s pre-tax profits for the last two years,” the lawsuit said. The Ball lawsuit also alleged that “Musk has engaged in strong-arm, coercive tactics to obtain stockholder approval for both the Redomestication Vote and the Ratification Vote.”

Tesla Board Chairperson Robyn Denholm urged shareholders to re-approve the Musk pay plan, suggesting that Musk could leave Tesla or devote less time to the company if the resolution is voted down.

Starlink user terminal now costs just $300 in 28 states, $500 in rest of US

Starlink price cut —

The $600 standard price was replaced with regional pricing of $500 or $300.

You can now buy a Starlink satellite dish for $299 (plus shipping and tax) in 28 US states due to a discount for areas where SpaceX’s broadband network has excess capacity.

Starlink had raised its upfront hardware cost from $499 to $599 in March 2022 but cut the standard price back down to $499 this week. In the 28 states where the network has what SpaceX deems excess capacity, a $200 discount is being applied to bring the price down to $299. It’s unclear how long the deal will last, though we can assume the number of states eligible for $299 pricing will fall if a lot of people sign up.

“In the United States, new orders in certain regions are eligible for a one-time savings in areas where Starlink has abundant network availability,” a support page posted yesterday said. “$200 will be removed from your Starlink kit price when ordering on Starlink.com and if activated after purchasing from a retailer, a $200 credit will be applied. The savings are only available for Residential Standard service in these designated regional savings areas.”

The 28 states in the “regional savings areas” are Arizona, California, Colorado, Connecticut, Delaware, Florida, Hawaii, Idaho, Iowa, Kansas, Maine, Maryland, Massachusetts, Nebraska, Nevada, New Hampshire, New Jersey, New Mexico, New York, North Dakota, Oklahoma, Oregon, Pennsylvania, Rhode Island, South Dakota, Utah, Vermont, and Wyoming.

There’s one more significant price difference that applies based on location. Since early 2023, Starlink has charged $120 a month for service in areas with limited capacity and $90 a month in areas with excess capacity. So if you’re in an excess-capacity area, you can buy a $299 dish and get $90 monthly service.
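
Putting the two tiers side by side, a rough first-year total (hardware plus 12 months of service, before shipping and tax) comes out $560 cheaper in the excess-capacity states:

```python
# First-year cost under each regional tier described above.
excess_capacity = 299 + 12 * 90     # $1,379
limited_capacity = 499 + 12 * 120   # $1,939
print(f"difference: ${limited_capacity - excess_capacity}")  # $560
```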

Whether you pay $499 or $299 upfront, you’ll get a Wi-Fi router and the new version of Starlink’s standard residential user terminal. There is a drawback compared to the older version of the Starlink dish, which is now called “Standard Actuated” and doesn’t seem to be available for residential orders on Starlink.com anymore.

The current standard satellite dish lacks the old version’s ability to aim itself. The new dish must be positioned manually, but the Starlink app can help you find the best position.

“The ‘actuated’ part of Standard Actuated refers to the electric motors inside the antenna housing,” says an in-depth comparison of the models written by Starlink user Noah Clarke. “The motors, which are connected to the mast, can rotate and tilt the Standard Actuated dish, enabling it to self-align to the Starlink satellites. In contrast, the Standard dish has done away with the built-in mast and motors. The Standard dish must be manually rotated during the initial installation, with the help of the Starlink app.”

Starlink offers mounting hardware as optional accessories during the checkout process. There’s a pivot mount for $74, a wall mount for $67, a pipe adapter for $38, and a 45-meter cable for $115. The optional cable is three times longer than the one that comes with the standard terminal.

Musk’s X demands money from laid-off employees, claims they were overpaid

Math is hard —

Laid-off Aussies reportedly got up to $70K extra from currency-conversion error.

Elon Musk’s X Corp. is reportedly demanding money from at least six Australians who were laid off, saying the company accidentally overpaid them. The Sydney Morning Herald reported today that “X is threatening to take some former Australian employees to court, demanding they return entitlements it claims were overpaid to them after it bungled the currency conversion from US to Australian dollars on the payments.”

Emails sent this year by X’s Asia Pacific human resources department to the laid-off employees said there was “a significant overpayment in error in January 2023.” The alleged overpayments ranged from $1,500 to $70,000 for each employee.

So far, none of the former employees have repaid the money, The Sydney Morning Herald was told. One Australian dollar is currently worth $0.67 in US currency.

“The company said the overpayment was related to ‘deferred cash compensation,’ in the form of employee shares issued to the staff when they joined Twitter,” the article said. “These shares were valued at $US54.20 ($82) each, the price at which Musk bought Twitter in 2022, and the total number of shares acquired by an employee was based on the length of their tenure at the company.”

X reportedly made the currency conversion errors “when employees were paid their entitlements once they were made redundant… According to one account, X paid out the share entitlements at a conversion rate 2.5 times the value of the shares.”
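
X hasn’t explained how the conversion was bungled, but the account of payouts at roughly 2.5 times the correct rate is easy to model. A hypothetical sketch with an illustrative share count:

```python
USD_TO_AUD = 1.49  # roughly 1 / 0.67, per the exchange rate cited above

def payout_aud(shares: int, price_usd: float, rate: float) -> float:
    """Convert a USD share entitlement into an AUD payout."""
    return shares * price_usd * rate

shares, price = 500, 54.20            # share count is hypothetical
correct = payout_aud(shares, price, USD_TO_AUD)
buggy = payout_aud(shares, price, USD_TO_AUD * 2.5)  # per one account
print(f"overpayment: {buggy - correct:,.0f} AUD")    # roughly 60,600 AUD
```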

X asked the laid-off employees for repayment “at your earliest convenience” and said the company reserved the right to seek the return of the money plus interest in court, the report said.

In US, ex-workers still fighting for severance

Employment law specialist Hayden Stephens was paraphrased in the report as saying that the ex-X workers may be forced to return the money, but they should first “ask X to clearly explain how the error occurred and ask for supporting documentary evidence.” He said that if there was a genuine mistake, “there is usually an obligation to repay that money” under Australian employment law.

X has not responded to a request for comment from Ars today.

X overpaying laid-off employees is the opposite of what allegedly happened to many former US-based workers. X was served with lawsuits and arbitration claims from about 2,000 ex-employees who were fighting to receive severance. Settlement talks in multiple severance cases ended without deals, court filings state.

X is also facing a lawsuit from four former Twitter executives who say they were cheated out of more than $128 million in severance when Musk fired them immediately after buying the firm. The lawsuit was filed by former Twitter CEO Parag Agrawal, former CFO Ned Segal, former Chief Legal Officer Vijaya Gadde, and former General Counsel Sean Edgett. The plaintiffs proposed a trial date in November 2025.

Musk also refused to pay a variety of Twitter vendors after taking over, leading to a deluge of lawsuits seeking compensation.

Adobe to update vague AI terms after users threaten to cancel subscriptions

Adobe has promised to update its terms of service to make it “abundantly clear” that the company will “never” train generative AI on creators’ content after days of customer backlash, with some saying they would cancel Adobe subscriptions over its vague terms.

Users got upset last week when an Adobe pop-up informed them of updates to its terms of use that seemed to give Adobe broad permissions to access user content, take ownership of that content, or train AI on that content. The pop-up forced users to agree to the terms before they could open Adobe apps, blocking creatives from their projects until they accepted.

For any users unwilling to accept, canceling annual plans could trigger fees amounting to 50 percent of their remaining subscription cost. Adobe justifies collecting these fees because a “yearly subscription comes with a significant discount.”

On X (formerly Twitter), YouTuber Sasha Yanshin wrote that he canceled his Adobe license “after many years as a customer,” arguing that “no creator in their right mind can accept” Adobe’s terms that seemed to seize a “worldwide royalty-free license to reproduce, display, distribute” or “do whatever they want with any content” produced using their software.

“This is beyond insane,” Yanshin wrote on X. “You pay a huge monthly subscription, and they want to own your content and your entire business as well. Going to have to learn some new tools.”

Adobe’s design leader Scott Belsky replied, telling Yanshin that Adobe had clarified the update in a blog post and noting that Adobe’s terms for licensing content are typical for every cloud content company. But he acknowledged that those terms were written about 11 years ago and that the language could be plainer, writing that “modern terms of service in the current climate of customer concerns should evolve to address modern day concerns directly.”

Yanshin has so far not been encouraged by any of Adobe’s attempts to clarify its terms, writing that he gives “precisely zero f*cks about Adobe’s clarifications or blog posts.”

“You forced people to sign new Terms,” Yanshin told Belsky on X. “Legally, they are the only thing that matters.”

Another user in the thread using an anonymous X account also pushed back, writing, “Point to where it says in the terms that you won’t use our content for LLM or AI training? And state unequivocally that you do not have the right to use our work beyond storing it. That would go a long way.”

“Stay tuned,” Belsky wrote on X. “Unfortunately, it takes a process to update a TOS,” but “we are working on incorporating these clarifications.”

Belsky co-authored the blog post this week announcing that Adobe’s terms would be updated by June 18 after a week of fielding feedback from users.

“We’ve never trained generative AI on customer content, taken ownership of a customer’s work, or allowed access to customer content beyond legal requirements,” Adobe’s blog said. “Nor were we considering any of those practices as part of the recent Terms of Use update. That said, we agree that evolving our Terms of Use to reflect our commitments to our community is the right thing to do.”

AI trained on photos from kids’ entire childhood without their consent

Photos of Brazilian kids—sometimes spanning their entire childhood—have been used without their consent to power AI tools, including popular image generators like Stable Diffusion, Human Rights Watch (HRW) warned on Monday.

The practice poses urgent privacy risks to kids and seems to increase the risk of non-consensual AI-generated images bearing their likenesses, HRW’s report said.

An HRW researcher, Hye Jung Han, helped expose the problem. She analyzed “less than 0.0001 percent” of LAION-5B, a dataset built from Common Crawl snapshots of the public web. The dataset does not contain the actual photos but includes image-text pairs derived from 5.85 billion images and captions posted online since 2008.
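
For context on what “image-text pairs” means in practice, a LAION-5B-style record stores a pointer and metadata rather than the photo itself. The field names and values below are illustrative only:

```python
# A LAION-style record references an image by URL; the photo itself stays
# wherever it was posted. Field names and values here are illustrative.
record = {
    "url": "https://example-family-blog.com/2012/birthday.jpg",
    "text": "Blowing out candles at the seventh birthday party",
    "width": 1024,
    "height": 768,
    "similarity": 0.31,  # image-caption similarity score used for filtering
}
# Training pipelines download record["url"] on demand, so deleting the
# record removes the pointer, not the underlying photo on the open web.
```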

Among those images linked in the dataset, Han found 170 photos of children from at least 10 Brazilian states. These were mostly family photos uploaded to personal and parenting blogs most Internet surfers wouldn’t easily stumble upon, “as well as stills from YouTube videos with small view counts, seemingly uploaded to be shared with family and friends,” Wired reported.

LAION, the German nonprofit that created the dataset, has worked with HRW to remove the links to the children’s images in the dataset.

That may not completely resolve the problem, though. HRW’s report warned that the removed links are “likely to be a significant undercount of the total amount of children’s personal data that exists in LAION-5B.” Han told Wired that she fears that the dataset may still be referencing personal photos of kids “from all over the world.”

Removing the links also does not remove the images from the public web, where they can still be referenced and used in other AI datasets, particularly those relying on Common Crawl, LAION’s spokesperson, Nate Tyler, told Ars.

“This is a larger and very concerning issue, and as a nonprofit, volunteer organization, we will do our part to help,” Tyler told Ars.

Han told Ars that “Common Crawl should stop scraping children’s personal data, given the privacy risks involved and the potential for new forms of misuse.”

According to HRW’s analysis, many of the Brazilian children’s identities were “easily traceable,” due to children’s names and locations being included in image captions that were processed when building the LAION dataset.

And at a time when middle and high school-aged students are at greater risk of being targeted by bullies or bad actors turning “innocuous photos” into explicit imagery, AI tools may be better equipped to generate AI clones of kids whose images are referenced in AI datasets, HRW suggested.

“The photos reviewed span the entirety of childhood,” HRW’s report said. “They capture intimate moments of babies being born into the gloved hands of doctors, young children blowing out candles on their birthday cake or dancing in their underwear at home, students giving a presentation at school, and teenagers posing for photos at their high school’s carnival.”

There is less risk that the Brazilian kids’ photos are currently powering AI tools since “all publicly available versions of LAION-5B were taken down” in December, Tyler told Ars. That decision came out of an “abundance of caution” after a Stanford University report “found links in the dataset pointing to illegal content on the public web,” Tyler said, including 3,226 suspected instances of child sexual abuse material.

Han told Ars that “the version of the dataset that we examined pre-dates LAION’s temporary removal of its dataset in December 2023.” The dataset will not be available again until LAION determines that all flagged illegal content has been removed.

“LAION is currently working with the Internet Watch Foundation, the Canadian Centre for Child Protection, Stanford, and Human Rights Watch to remove all known references to illegal content from LAION-5B,” Tyler told Ars. “We are grateful for their support and hope to republish a revised LAION-5B soon.”

In Brazil, “at least 85 girls” have reported classmates harassing them by using AI tools to “create sexually explicit deepfakes of the girls based on photos taken from their social media profiles,” HRW reported. Once these explicit deepfakes are posted online, they can inflict “lasting harm,” HRW warned, potentially remaining online for their entire lives.

“Children should not have to live in fear that their photos might be stolen and weaponized against them,” Han said. “The government should urgently adopt policies to protect children’s data from AI-fueled misuse.”

Ars could not immediately reach Stable Diffusion maker Stability AI for comment.
