Policy


Judge calls out OpenAI’s “straw man” argument in New York Times copyright suit

“Taken as true, these facts give rise to a plausible inference that defendants at a minimum had reason to investigate and uncover end-user infringement,” Stein wrote.

To Stein, the fact that OpenAI maintains an “ongoing relationship” with users by providing outputs that respond to users’ prompts also supports contributory infringement claims, despite OpenAI’s argument that ChatGPT’s “substantial noninfringing uses” are exonerative.

OpenAI defeated some claims

Stein’s ruling is likely a disappointment for OpenAI, although the judge did drop some of the NYT’s claims.

Likely upsetting to news publishers, the dropped claims included a “free-riding” claim that ChatGPT unfairly profits off time-sensitive “hot news” items, including the NYT’s Wirecutter posts. Stein explained that news publishers failed to plausibly allege non-attribution (which is key to a free-riding claim) because, for example, ChatGPT cites the NYT when sharing information from Wirecutter posts. Those claims are pre-empted by the Copyright Act anyway, Stein wrote, granting OpenAI’s motion to dismiss.

Stein also dismissed a claim from the NYT regarding alleged removal of copyright management information (CMI), which Stein said cannot be proven simply because ChatGPT reproduces excerpts of NYT articles without CMI.

The Digital Millennium Copyright Act (DMCA) requires news publishers to show that ChatGPT’s outputs are “close to identical” to the original work, Stein said, and allowing publishers’ claims based on excerpts “would risk boundless DMCA liability”—including for any use of block quotes without CMI.

Asked for comment on the ruling, an OpenAI spokesperson declined to go into any specifics, instead repeating OpenAI’s long-held argument that AI training on copyrighted works is fair use. (Last month, OpenAI warned Donald Trump that the US would lose the AI race to China if courts ruled against that argument.)

“ChatGPT helps enhance human creativity, advance scientific discovery and medical research, and enable hundreds of millions of people to improve their daily lives,” OpenAI’s spokesperson said. “Our models empower innovation, and are trained on publicly available data and grounded in fair use.”



EU may “make an example of X” by issuing $1 billion fine to Musk’s social network

European Union regulators are preparing major penalties against X, including a fine that could exceed $1 billion, according to a New York Times report yesterday.

The European Commission determined last year that Elon Musk’s social network violated the Digital Services Act. Regulators are now in the process of determining what punishment to impose.

“The penalties are set to include a fine and demands for product changes,” the NYT report said, attributing the information to “four people with knowledge of the plans.” The penalty is expected to be issued this summer and would be the first one under the new EU law.

“European authorities have been weighing how large a fine to issue X as they consider the risks of further antagonizing [President] Trump amid wider trans-Atlantic disputes over trade, tariffs and the war in Ukraine,” the NYT report said. “The fine could surpass $1 billion, one person said, as regulators seek to make an example of X to deter other companies from violating the law, the Digital Services Act.”

X’s global government affairs account criticized European regulators in a post last night. “If the reports that the European Commission is considering enforcement actions against X are accurate, it represents an unprecedented act of political censorship and an attack on free speech,” X said. “X has gone above and beyond to comply with the EU’s Digital Services Act, and we will use every option at our disposal to defend our business, keep our users safe, and protect freedom of speech in Europe.”

Penalty math could include Musk’s other firms

The Digital Services Act allows fines of up to 6 percent of a company’s total worldwide annual turnover. EU regulators suggested last year that they could calculate fines by including revenue from Musk’s other companies, including SpaceX. Yesterday’s NYT report says this method is still under consideration.
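To get a rough sense of why the choice of revenue base matters, here is a minimal sketch of the 6 percent cap arithmetic. The revenue figures are hypothetical placeholders, not reported numbers for X or any other Musk company.

```python
# Hypothetical illustration of the DSA fine ceiling (up to 6 percent of total
# worldwide annual turnover). All revenue figures below are placeholders.
DSA_MAX_RATE = 0.06

def dsa_fine_cap(annual_turnover_usd: float) -> float:
    """Maximum fine the DSA would allow for a given annual turnover."""
    return annual_turnover_usd * DSA_MAX_RATE

x_alone = 3.0e9                # placeholder: X's standalone annual revenue
musk_group = x_alone + 12.0e9  # placeholder: adding revenue from other Musk firms

print(f"Cap based on X alone:      ${dsa_fine_cap(x_alone) / 1e9:.2f} billion")
print(f"Cap including other firms: ${dsa_fine_cap(musk_group) / 1e9:.2f} billion")
```

The only point of the comparison is that a broader revenue base raises the ceiling; the actual figures regulators would use are not public.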



NJ teen wins fight to put nudify app users in prison, impose fines up to $30K


Here’s how one teen plans to fix schools failing kids affected by nudify apps.

When Francesca Mani was 14 years old, boys at her New Jersey high school used nudify apps to target her and other girls. At the time, adults did not seem to take the harassment seriously, telling her to move on after she demanded more severe consequences than just a single boy’s one- or two-day suspension.

Mani refused to take adults’ advice, going over their heads to lawmakers who were more sensitive to her demands. And now, she’s won her fight to criminalize deepfakes. On Wednesday, New Jersey Governor Phil Murphy signed a law that he said would help victims “take a stand against deceptive and dangerous deepfakes” by making it a crime to create or share fake AI nudes of minors or non-consenting adults—as well as deepfakes seeking to meddle with elections or damage any individuals’ or corporations’ reputations.

Under the law, victims of nudify apps, like Mani, can sue bad actors, collecting up to $1,000 per harmful image created either knowingly or recklessly. New Jersey hopes these “more severe consequences” will deter kids and adults from creating harmful images, as well as emphasize to schools—whose lax response to fake nudes has been heavily criticized—that AI-generated nude images depicting minors are illegal and must be taken seriously and reported to police. The law also imposes a maximum fine of $30,000 on anyone creating or sharing deepfakes for malicious purposes, as well as possible punitive damages if a victim can prove that images were created in willful defiance of the law.

Ars could not reach Mani for comment, but she celebrated the win in the governor’s press release, saying, “This victory belongs to every woman and teenager told nothing could be done, that it was impossible, and to just move on. It’s proof that with the right support, we can create change together.”

On LinkedIn, her mother, Dorota Mani—who has been working with the governor’s office on a commission to protect kids from online harms—thanked lawmakers like Murphy and former New Jersey Assemblyman Herb Conaway, who sponsored the law, for “standing with us.”

“When used maliciously, deepfake technology can dismantle lives, distort reality, and exploit the most vulnerable among us,” Conaway said. “I’m proud to have sponsored this legislation when I was still in the Assembly, as it will help us keep pace with advancing technology. This is about drawing a clear line between innovation and harm. It’s time we take a firm stand to protect individuals from digital deception, ensuring that AI serves to empower our communities.”

Doing nothing is no longer an option for schools, teen says

As cases like Mani’s continue to pop up around the country, experts suspect that the problem is much more widespread than media reports suggest, since shame likely prevents most victims from coming forward to flag abuses.

Encode Justice has a tracker monitoring reported cases involving minors, which also allows victims around the US to anonymously report harms. But the true extent of the harm remains unknown, as cops warn of a flood of AI child sex images obscuring investigations into real-world child abuse.

Confronting this shadowy threat to kids everywhere, Mani was named as one of TIME’s most influential people in AI last year due to her advocacy fighting deepfakes. She’s not only pressured lawmakers to take strong action to protect vulnerable people, but she’s also pushed for change at tech companies and in schools nationwide.

“When that happened to me and my classmates, we had zero protection whatsoever,” Mani told TIME, and neither did other girls around the world who had been targeted and reached out to thank her for fighting for them. “There were so many girls from different states, different countries. And we all had three things in common: the lack of AI school policies, the lack of laws, and the disregard of consent.”

Yiota Souras, chief legal officer at the National Center for Missing and Exploited Children, told CBS News last year that protecting teens started with laws that criminalize sharing fake nudes and provide civil remedies, just as New Jersey’s law does. That way, “schools would have protocols,” she said, and “investigators and law enforcement would have roadmaps on how to investigate” and “what charges to bring.”

Clarity is urgently needed in schools, advocates say. At Mani’s school, the boys who shared the photos had their names shielded and were pulled out of class individually to be interrogated, but victims like Mani had no privacy whatsoever. Their names were blared over the school’s loudspeaker system as boys mocked their tears in the hallway. To this day, it’s unclear who exactly shared and possibly still has copies of the images, which experts say could haunt Mani throughout her life. And the school’s inadequate response was a major reason why Mani decided to take a stand; she seemingly viewed the school as a vehicle furthering her harassment.

“I realized I should stop crying and be mad, because this is unacceptable,” Mani told CBS News.

Mani pushed for NJ’s new law and claimed the win, but she thinks that change must start at schools, where the harassment starts. In her school district, the “harassment, intimidation and bullying” policy was updated to incorporate AI harms, but she thinks schools should go even further. Working with Encode Justice, she is helping to push a plan to fix schools failing kids targeted by nudify apps.

“My goal is to protect women and children—and we first need to start with AI school policies, because this is where most of the targeting is happening,” Mani told TIME.

Encode Justice did not respond to Ars’ request for comment. But their plan noted a common pattern in schools throughout the US. Students learn about nudify apps through ads on social media—such as Instagram reportedly driving 90 percent of traffic to one such nudify app—where they can also usually find innocuous photos of classmates to screenshot. Within seconds, the apps can nudify the screenshotted images, which Mani told CBS News then spread “rapid fire” by text message and DMs and are often shared over school networks.

To end the abuse, schools need to be prepared, Encode Justice said, especially since “their initial response can sometimes exacerbate the situation.”

At Mani’s school, for example, leadership was criticized for announcing the victims’ names over the loudspeaker, which Encode Justice said never should have happened. Another misstep was at a California middle school, which delayed action for four months until parents went to police, Encode Justice said. In Texas, a school failed to stop images from spreading for eight months while a victim pleaded for help from administrators and police who failed to intervene. The longer the delays, the more victims will likely be targeted. In Pennsylvania, a single ninth grader targeted 46 girls before anyone stepped in.

Students deserve better, Mani feels, and Encode Justice’s plan recommends that all schools create action plans to stop failing students and respond promptly to stop image sharing.

That starts with updating policies to ban deepfake sexual imagery, then clearly communicating to students “the seriousness of the issue and the severity of the consequences.” Consequences should include identifying all perpetrators and issuing suspensions or expulsions on top of any legal consequences students face, Encode Justice suggested. They also recommend establishing “written procedures to discreetly inform relevant authorities about incidents and to support victims at the start of an investigation on deepfake sexual abuse.” And, critically, all teachers must be trained on these new policies.

“Doing nothing is no longer an option,” Mani said.



Critics suspect Trump’s weird tariff math came from chatbots

Rumors claim Trump consulted chatbots

On social media, rumors swirled that the Trump administration got these supposedly fake numbers from chatbots. On Bluesky, tech entrepreneur Amy Hoy joined others posting screenshots from ChatGPT, Gemini, Claude, and Grok, each showing that the chatbots arrived at calculations similar to the Trump administration’s.

Some of the chatbots also warned against the oversimplified math in their outputs. ChatGPT acknowledged that the easy method “ignores the intricate dynamics of international trade.” Gemini cautioned that it could only offer a “highly simplified conceptual approach” that ignored the “vast real-world complexities and consequences” of implementing such a trade strategy. Claude specifically warned that “trade deficits alone don’t necessarily indicate unfair trade practices, and tariffs can have complex economic consequences, including increased prices and potential retaliation.” Even Grok warned that “imposing tariffs isn’t exactly ‘easy,’” calling tariffs “a blunt tool: quick to swing, but the ripple effects (higher prices, pissed-off allies) can complicate things fast,” an Ars test showed. That test used a prompt similar to the question social media users generally asked: “how do you impose tariffs easily?”

The Verge plugged in phrasing explicitly used by the Trump administration—prompting chatbots to provide “an easy way for the US to calculate tariffs that should be imposed on other countries to balance bilateral trade deficits between the US and each of its trading partners, with the goal of driving bilateral trade deficits to zero”—and got the “same fundamental suggestion” as social media users reported.
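As an illustration only, here is a minimal sketch of the kind of back-of-the-envelope arithmetic that prompt invites: divide the bilateral deficit by imports to get a “balancing” tariff rate. The trade figures are made up, and the deficit-over-imports heuristic is an assumption about what such a “fundamental suggestion” might look like, not a formula confirmed by the chatbots or the administration.

```python
# A deliberately naive "balance the bilateral deficit" tariff heuristic.
# All figures are hypothetical; this is not a confirmed formula.

def naive_balancing_tariff(imports_usd: float, exports_usd: float) -> float:
    """Return a crude tariff rate proportional to the bilateral trade deficit."""
    deficit = imports_usd - exports_usd
    if deficit <= 0:
        return 0.0  # no bilateral deficit, so no tariff under this heuristic
    return deficit / imports_usd

imports, exports = 400e9, 150e9  # hypothetical trade with one partner
rate = naive_balancing_tariff(imports, exports)
print(f"Naive 'balancing' tariff rate: {rate:.1%}")
```

The chatbots’ caveats quoted above are aimed at exactly this kind of single-division shortcut, which ignores elasticities, retaliation, and the other “real-world complexities” Gemini flagged.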

Whether the Trump administration actually consulted chatbots while devising its global trade policy will likely remain a rumor. It’s possible that the chatbots’ training data simply aligned with the administration’s approach.

But with even chatbots warning that the strategy may not benefit the US, the pressure appears to be on Trump to prove that the reciprocal tariffs will lead to “better-paying American jobs making beautiful American-made cars, appliances, and other goods” and “address the injustices of global trade, re-shore manufacturing, and drive economic growth for the American people.” As his approval rating hits new lows, Trump continues to insist that “reciprocal tariffs are a big part of why Americans voted for President Trump.”

“Everyone knew he’d push for them once he got back in office; it’s exactly what he promised, and it’s a key reason he won the election,” the White House fact sheet said.



Vast pedophile network shut down in Europol’s largest CSAM operation

Europol has shut down one of the largest dark web pedophile networks in the world, prompting dozens of arrests worldwide, with the agency warning that more are to follow.

Launched in 2021, KidFlix allowed users to join for free to preview low-quality videos depicting child sexual abuse material (CSAM). To see higher-resolution videos, users had to earn credits by sending cryptocurrency payments, uploading CSAM, or “verifying video titles and descriptions and assigning categories to videos.”

Europol seized the servers and found a total of 91,000 unique videos depicting child abuse, “many of which were previously unknown to law enforcement,” the agency said in a press release.

KidFlix going dark was the result of the biggest child sexual exploitation operation in Europol’s history, the agency said. Operation Stream, as it was dubbed, was supported by law enforcement in more than 35 countries, including the United States.

Nearly 1,400 suspected consumers of CSAM have been identified among 1.8 million global KidFlix users, and 79 have been arrested so far. According to Europol, 39 child victims were protected as a result of the sting, and more than 3,000 devices were seized.

Police identified suspects through payment data after seizing the server. Despite cryptocurrencies offering a veneer of anonymity, cops were apparently able to use sophisticated methods to trace transactions to bank details. In some cases, cops defeated users’ attempts to hide their identities—such as a man in Spain who made payments using his mother’s name, according to local news outlet Todo Alicante. It likely helped that most suspects were already known offenders, Europol noted.



Not just Signal: Michael Waltz reportedly used Gmail for government messages

National Security Advisor Michael Waltz and a senior aide used personal Gmail accounts for government communications, according to a Washington Post report published yesterday.

Waltz has been at the center of controversy for weeks because he inadvertently invited The Atlantic Editor-in-Chief Jeffrey Goldberg to a Signal chat in which top Trump administration officials discussed a plan for bombing Houthi targets in Yemen. Yesterday’s report of Gmail use and another recent report on additional Signal chats raise more questions about the security of sensitive government communications in the Trump administration.

A senior Waltz aide used Gmail “for highly technical conversations with colleagues at other government agencies involving sensitive military positions and powerful weapons systems relating to an ongoing conflict,” The Washington Post wrote.

The Post said it reviewed the emails. “While the NSC official used his Gmail account, his interagency colleagues used government-issued accounts, headers from the email correspondence show,” the report said.

Waltz himself “had less sensitive, but potentially exploitable information sent to his Gmail, such as his schedule and other work documents, said officials, who, like others, spoke on the condition of anonymity to describe what they viewed as problematic handling of information,” the report said. “The officials said Waltz would sometimes copy and paste from his schedule into Signal to coordinate meetings and discussions.”

Separately, The Wall Street Journal described additional Signal chats in a report on Sunday about Waltz losing support inside the White House. “Two US officials also said that Waltz has created and hosted multiple other sensitive national-security conversations on Signal with cabinet members, including separate threads on how to broker peace between Russia and Ukraine as well as military operations. They declined to address if any classified information was posted in those chats,” the WSJ wrote.

We contacted the White House about the reported use of Gmail and Signal today and will update this article if we get a response.



DOGE staffer’s YouTube nickname accidentally revealed his teen hacking activity

A SpaceX and X engineer, Christopher Stanley—currently serving as a senior advisor in the Deputy Attorney General’s office at the Department of Justice (DOJ)—was reportedly caught bragging about hacking and distributing pirated e-books, bootleg software, and game cheats.

The boasts appeared on archived versions of websites, several of which were quickly deleted once flagged, Reuters reported.

Stanley was assigned to the DOJ by Elon Musk’s Department of Government Efficiency (DOGE). While Musk claims that DOGE operates transparently, not much is known about who the staffers are or what their government roles entail. It remains unclear what Stanley does at DOJ, but Reuters noted that the Deputy Attorney General’s office is in charge of investigations into various crimes, “including hacking and other malicious cyber activity.” Declining to comment further, the DOJ did confirm that as a “special government employee,” like Musk, Stanley does not draw a government salary.

The engineer’s questionable past seemingly dates back to 2006, Reuters reported, when Stanley was still in high school. The news site connected Stanley to various sites and forums by tracing the pseudonyms he used, including Reneg4d3, a nickname he still uses on YouTube. The outlet then further verified the connection “by cross-referencing the sites’ registration data against his old email address and by matching Reneg4d3’s biographical data to Stanley’s.”

Among his earliest sites was one featuring a “crude sketch of a penis” called fkn-pwnd.com, where Stanley, at 15, bragged about “fucking up servers,” a now-deleted Internet Archive screenshot reportedly showed. Another, reneg4d3.com, was launched when he was 16. There, Stanley branded a competing message board “stupid noobs” after supposedly gaining admin access through an “easy exploit,” Reuters reported. On Bluesky, an account called “doge whisperer” alleges even more hacking activity, some of which appears to be corroborated by an Internet Archive screenshot of another site Stanley created, electonic.net (sic), which as of this writing can still be accessed.



“Chaos” at state health agencies after US illegally axed grants, lawsuit says

Nearly half of US states sued the federal government and Secretary of Health and Human Services Robert F. Kennedy Jr. today in a bid to halt the termination of $11 billion in public health grants. The lawsuit was filed by 23 states and the District of Columbia.

“The grant terminations, which came with no warning or legally valid explanation, have quickly caused chaos for state health agencies that continue to rely on these critical funds for a wide range of urgent public health needs such as infectious disease management, fortifying emergency preparedness, providing mental health and substance abuse services, and modernizing public health infrastructure,” said a press release issued by Colorado Attorney General Phil Weiser.

The litigation is led by Colorado, California, Minnesota, Rhode Island, and Washington. The other plaintiffs are Arizona, Connecticut, Delaware, the District of Columbia, Hawaii, Illinois, Kentucky, Maine, Maryland, Massachusetts, Michigan, Nevada, New Jersey, New Mexico, New York, North Carolina, Oregon, Pennsylvania, and Wisconsin.

Nearly all of the plaintiffs are represented by a Democratic attorney general. Kentucky and Pennsylvania have Republican attorneys general and are instead represented by their governors, both Democrats.

The complaint, filed in US District Court for the District of Rhode Island, is in response to the recent cut of grants that were originally created in response to the COVID-19 pandemic. “The sole stated basis for Defendants’ decision is that the funding for these grants or cooperative agreements was appropriated through one or more COVID-19 related laws,” the states’ lawsuit said.

The lawsuit says the US sent notices to states that grants were terminated “for cause” because “the grants and cooperative agreements were issued for a limited purpose: to ameliorate the effects of the pandemic. Now that the pandemic is over, the grants and cooperative agreements are no longer necessary as their limited purpose has run out.”



FTC: 23andMe buyer must honor firm’s privacy promises for genetic data

Federal Trade Commission Chairman Andrew Ferguson said he’s keeping an eye on 23andMe’s bankruptcy proceeding and the company’s planned sale because of privacy concerns related to genetic testing data. 23andMe and its future owner must uphold the company’s privacy promises, Ferguson said in a letter sent yesterday to representatives of the US Trustee Program, a Justice Department division that oversees administration of bankruptcy proceedings.

“As Chairman of the Federal Trade Commission, I write to express the FTC’s interests and concerns relating to the potential sale or transfer of millions of American consumers’ sensitive personal information,” Ferguson wrote. He continued:

As you may know, 23andMe collects and holds sensitive, immutable, identifiable personal information about millions of American consumers who have used the Company’s genetic testing and telehealth services. This includes genetic information, biological DNA samples, health information, ancestry and genealogy information, personal contact information, payment and billing information, and other information, such as messages that genetic relatives can send each other through the platform.

23andMe’s recent bankruptcy announcement set off a wave of concern about the fate of genetic data for its 15 million customers. The company said that “any buyer of 23andMe will be required to comply with our privacy policy and with all applicable law with respect to the treatment of customer data.” Many users reacted to the news by deleting their data, though tech problems apparently related to increased website traffic made that process difficult.

23andMe’s ability to secure user data is also a reason for concern. Hackers stole ancestry data for 6.9 million 23andMe users, the company confirmed in December 2023.

The bankruptcy is being overseen in US Bankruptcy Court for the Eastern District of Missouri.

FTC: Bankruptcy law protects customers

Ferguson’s letter points to several promises made by 23andMe and says these pledges must be upheld. “The FTC believes that, consistent with Section 363(b)(1) of the Bankruptcy Code, these types of promises to consumers must be kept. This means that any bankruptcy-related sale or transfer involving 23andMe users’ personal information and biological samples will be subject to the representations the Company has made to users about both privacy and data security, and which users relied upon in providing their sensitive data to the Company,” he wrote. “Moreover, as promised by 23andMe, any purchaser should expressly agree to be bound by and adhere to the terms of 23andMe’s privacy policies and applicable law, including as to any changes it subsequently makes to those policies.”



DOGE accesses federal payroll system and punishes employees who objected

Elon Musk’s Department of Government Efficiency (DOGE) has gained access “to a payroll system that processes salaries for about 276,000 federal employees across dozens of agencies,” despite “objections from senior IT staff who feared it could compromise highly sensitive government personnel information” and lead to cyberattacks, The New York Times reported today.

The system at the Interior Department gives DOGE “visibility into sensitive employee information, such as Social Security numbers, and the ability to more easily hire and fire workers,” the NYT wrote, citing people familiar with the matter. DOGE workers had been trying to get access to the Federal Personnel and Payroll System for about two weeks and succeeded over the weekend, the report said.

“The dispute came to a head on Saturday, as the DOGE workers obtained the access and then placed two of the IT officials who had resisted them on administrative leave and under investigation, the people said,” according to the NYT report. The agency’s CIO and CISO are reportedly under investigation for their “workplace behavior.”

When contacted by Ars today, the Interior Department said, “We are working to execute the President’s directive to cut costs and make the government more efficient for the American people and have taken actions to implement President Trump’s Executive Orders.”

DOGE’s access to federal systems continues to grow despite court rulings that ordered the government to cut DOGE off from specific records, such as those held by the Social Security Administration, Treasury Department, Department of Education, and Office of Personnel Management.



Even Trump may not be able to save Elon Musk from his old tweets

A loss in the investors’ and SEC’s suits could force Musk to disgorge any ill-gotten gains from the alleged scheme, estimated at $150 million, as well as potential civil penalties.

The SEC and Musk’s X (formerly Twitter) did not respond to Ars’ request for comment. Investors’ lawyers declined to comment on the ongoing litigation.

SEC purge may slow down probes

Under the Biden administration, the SEC alleged that “Musk’s violation resulted in substantial economic harm to investors selling Twitter common stock.” For the lead plaintiffs in the investors’ suit, the Oklahoma Firefighters Pension and Retirement System, the scheme allegedly robbed retirees of gains used to sustain their quality of life at a particularly vulnerable time.

Musk has continued to argue that his alleged $200 million in savings from the scheme was minimal compared to his $44 billion purchase price. But the alleged gains represent about two-thirds of the $290 million price the billionaire paid to support Trump’s election, which won Musk a senior advisor position in the Trump administration, CNBC reported. So it’s seemingly not an insignificant amount of money in the grand scheme.

Likely bending to Musk’s influence, one of Trump’s earliest moves after taking office, CNBC reported, was reversing a 15-year-old policy that allowed the SEC’s director of enforcement to launch probes like the one Musk is currently battling. That policy allowed the Tesla probe, for example, to be launched just seven days after Musk’s allegedly problematic tweets, the SEC boasted in a 2020 press release.

Now, after Trump’s rule change, investigations must be approved by a vote of SEC commissioners. That will likely slow down probes that the SEC promised years ago would only speed up over time in order to more swiftly protect investors.

SEC expected to reduce corporate fines

For Musk, the SEC has long been a thorn in his side. At least two top officials (1, 2) cited the Tesla settlement as a career highlight, with the agency seeming especially proud of thinking “creatively about appropriate remedies,” the 2020 press release said. Monitoring Musk’s tweets, the SEC said, blocked “potential harm to investors” and put control over Musk’s tweets into the SEC’s hands.



France fines Apple €150M for “excessive” pop-ups that let users reject tracking

A typical ATT pop-up asks a user whether to allow an app “to track your activity across other companies’ apps and websites,” and says that “your data will be used to deliver personalized ads to you.”

Agency: “Double consent” too cumbersome

The agency said there is an “asymmetry” in which user consent for Apple’s own data collection is obtained with a single pop-up, but other publishers are “required to obtain double consent from users for tracking on third-party sites and applications.” The press release notes that “while advertising tracking only needs to be refused once, the user must always confirm their consent a second time.”

The system was said to be less harmful for big companies like Meta and Google and “particularly harmful for smaller publishers that do not enjoy alternative targeting possibilities, in particular in the absence of sufficient proprietary data.” Although France’s focus is on how ATT affects smaller companies, Apple’s privacy system has also been criticized by Facebook.

The €150 million fine won’t make much of a dent in Apple’s revenue, but Apple will apparently have to make some changes to comply with the French order. The agency’s press release said the problem “could be avoided by marginal modifications to the ATT framework.”

Benoit Coeure, the head of France’s competition authority, “told reporters the regulator had not spelled out how Apple should change its app, but that it was up to the company to make sure it now complied with the ruling,” according to Reuters. “The compliance process could take some time, he added, because Apple was waiting for rulings from regulators in Germany, Italy, Poland and Romania who are also investigating the ATT tool.”

Apple said in a statement that the ATT “prompt is consistent for all developers, including Apple, and we have received strong support for this feature from consumers, privacy advocates, and data protection authorities around the world. While we are disappointed with today’s decision, the French Competition Authority (FCA) has not required any specific changes to ATT.”
