Policy


FTC bans noncompete clauses, declares vast majority unenforceable

No more noncompetes —

Chamber of Commerce vows to sue FTC, will try to block ban on noncompetes.


Federal Trade Commission Chair Lina Khan talks with guests during an event in the Eisenhower Executive Office Building on April 3, 2024.

Getty Images | Chip Somodevilla

The Federal Trade Commission (FTC) today announced that it has issued a final rule banning noncompete clauses. The rule will render the vast majority of current noncompete clauses unenforceable, according to the agency.

“In the final rule, the Commission has determined that it is an unfair method of competition and therefore a violation of Section 5 of the FTC Act, for employers to enter into noncompetes with workers and to enforce certain noncompetes,” the FTC said.

The US Chamber of Commerce said it will sue the FTC in an effort to block the rule, claiming the ban is “a blatant power grab that will undermine American businesses’ ability to remain competitive.”

The FTC proposed the rule in January 2023 and received over 26,000 public comments on its proposal. Over 25,000 of the comments supported the proposed ban, the FTC said. The final rule announced today will take effect 120 days after it is published in the Federal Register, unless opponents of the rule secure a court order blocking it.

The FTC said that “noncompetes are a widespread and often exploitative practice imposing contractual conditions that prevent workers from taking a new job or starting a new business. Noncompetes often force workers to either stay in a job they want to leave or bear other significant harms and costs, such as being forced to switch to a lower-paying field, being forced to relocate, being forced to leave the workforce altogether, or being forced to defend against expensive litigation.”

Noncompete clauses currently bind about 30 million workers in the US, the agency said. “Under the FTC’s new rule, existing noncompetes for the vast majority of workers will no longer be enforceable after the rule’s effective date,” the FTC said.

FTC: “Noncompete clauses keep wages low”

The only existing noncompetes that won’t be nullified are those for senior executives, who represent less than 0.75 percent of workers, the FTC said. The rule defines senior executives as people earning more than $151,164 a year and who are in policy-making positions.

“The final rule allows existing noncompetes with senior executives to remain in force because this subset of workers is less likely to be subject to the kind of acute, ongoing harms currently being suffered by other workers subject to existing noncompetes and because commenters raised credible concerns about the practical impacts of extinguishing existing noncompetes for senior executives,” the FTC said.

Senior executives will be protected from new noncompete clauses after the rule takes effect. Employers will be “banned from entering into or attempting to enforce any new noncompetes, even if they involve senior executives,” the FTC said. “Employers will be required to provide notice to workers other than senior executives who are bound by an existing noncompete that they will not be enforcing any noncompetes against them.”

The FTC vote was 3-2, with Democrats supporting the noncompete ban and Republicans opposing it.

“Noncompete clauses keep wages low, suppress new ideas, and rob the American economy of dynamism, including from the more than 8,500 new startups that would be created a year once noncompetes are banned,” FTC Chair Lina Khan said. “The FTC’s final rule to ban noncompetes will ensure Americans have the freedom to pursue a new job, start a new business, or bring a new idea to market.”

Chamber of Commerce CEO Suzanne Clark argued that “the FTC has never been granted the constitutional and statutory authority to write its own competition rules… The Chamber will sue the FTC to block this unnecessary and unlawful rule and put other agencies on notice that such overreach will not go unchecked.”

FTC cites authority, urges businesses to raise wages

The FTC argues that it can impose the rule using authority under sections 5 and 6(g) of the FTC Act:

Alongside section 5, Congress adopted section 6(g) of the Act, in which it authorized the Commission to “make rules and regulations for the purpose of carrying out the provisions of” the FTC Act, which include the Act’s prohibition of unfair methods of competition. The plain text of section 5 and section 6(g), taken together, empower the Commission to promulgate rules for the purpose of preventing unfair methods of competition. That includes legislative rules defining certain conduct as an unfair method of competition.

The FTC said it found evidence that “noncompetes tend to negatively affect competitive conditions in product and service markets, inhibiting new business formation and innovation” and “lead to increased market concentration and higher prices for consumers.”

Businesses can protect trade secrets without noncompetes, the agency said:

Trade secret laws and nondisclosure agreements (NDAs) both provide employers with well-established means to protect proprietary and other sensitive information. Researchers estimate that over 95 percent of workers with a noncompete already have an NDA.

The Commission also finds that instead of using noncompetes to lock in workers, employers that wish to retain employees can compete on the merits for the worker’s labor services by improving wages and working conditions.



Grindr users seek payouts after dating app shared HIV status with vendors

A person's finger hovering over a Grindr app icon on a phone screen

Getty Images | Thomas Trutschel

Grindr is facing a class action lawsuit from hundreds of users over the sharing of HIV statuses and other sensitive personal information with third-party firms.

UK law firm Austen Hays filed the claim in the High Court in London yesterday, the firm announced. The class action “alleges the misuse of private information of thousands of affected UK Grindr users, including highly sensitive information about their HIV status and latest tested date,” the law firm said.

The law firm said it has signed up over 670 potential class members and “is in discussions with thousands of other individuals who are interested in joining the claim.” Austen Hays said that “claimants could receive thousands in damages” from Grindr, a gay dating app, if the case is successful.

Austen Hays alleges that Grindr violated UK data protection laws by sharing sensitive data for commercial purposes without users’ consent, including when it “unlawfully processed and shared users’ data with third parties, including advertising companies Localytics and Apptimize.”

While Austen Hays describes Localytics and Apptimize as advertising firms, they do not seem to be in the business of selling ads. Localytics is software for mobile app marketing and analytics, while Apptimize says it provides A/B testing and feature release management for product teams.

Grindr admitted sharing HIV status, said it stopped

Grindr has admitted sharing HIV status with the firms but stressed that it wasn’t for advertising purposes and pledged to stop sharing that information. The sharing of HIV status came to light in 2018 thanks to the work of independent researchers. At the time, Grindr said it “has never sold, nor will we ever sell, personal user information—especially information regarding HIV status or last test date—to third parties or advertisers.”

Grindr said it “consult[ed] several international health organizations” before determining in 2016 that it would be “beneficial for the health and well-being of our community to give users the option to publish, at their discretion, their HIV status and their ‘Last Tested Date’ to their public profile.”

Grindr acknowledged that it had been “sharing HIV status information with our trusted vendors, Apptimize and Localytics.” Apptimize software helped Grindr test and deploy new app features including an “HIV Testing Reminder” feature, while Localytics software was used “to confirm that the new features were not causing problems with the functioning of the Grindr app,” Grindr said.

Today, Grindr provided Ars with a statement in response to the lawsuit. “We are committed to protecting our users’ data and complying with all applicable data privacy regulations, including in the UK,” the company said. “Grindr has never shared user-reported health information for ‘commercial purposes’ and has never monetized such information. We intend to respond vigorously to this claim, which appears to be based on a mischaracterization of practices from more than four years ago, prior to early 2020.”



Biden signs bill criticized as “major expansion of warrantless surveillance”

Abstract image of human eye on a digital background

Getty Images | Yuichiro Chino

Congress passed and President Biden signed a reauthorization of Title VII of the Foreign Intelligence Surveillance Act (FISA), approving a bill that opponents say includes a “major expansion of warrantless surveillance” under Section 702 of FISA.

Over the weekend, the Reforming Intelligence and Securing America Act was approved by the Senate in a 60-34 vote. The yes votes included 30 Republicans, 28 Democrats, and two independents who caucus with Democrats. The bill, which was previously passed by the House and reauthorizes Section 702 of FISA for two years, was signed by President Biden on Saturday.

“Thousands and thousands of Americans could be forced into spying for the government by this new bill and with no warrant or direct court oversight whatsoever,” Sen. Ron Wyden (D-Ore.), a member of the Senate Select Committee on Intelligence, said on Friday. “Forcing ordinary Americans and small businesses to conduct secret, warrantless spying is what authoritarian countries do, not democracies.”

Wyden and Sen. Cynthia Lummis (R-Wyo.) led a bipartisan group of eight senators who submitted an amendment to reverse what Wyden’s office called “a major expansion of warrantless surveillance under Section 702 of the Foreign Intelligence Surveillance Act that was included in the House-passed bill.” After the bill was approved by the Senate without the amendment, Wyden said it seemed “that senators were unwilling to send this bill back to the House, no matter how common-sense the amendment before them.”

Sen. Ted Cruz (R-Texas) said he voted against the reauthorization “because it failed to include the most important requirement to protect Americans’ civil rights: that law enforcement get a warrant before targeting a US citizen.”

Bill expands definition of service provider

The Wyden/Lummis amendment would have struck language that expands the definition of an electronic communication service provider to include, with some exceptions, any “service provider who has access to equipment that is being or may be used to transmit or store wire or electronic communications.” The exceptions are for public accommodation facilities, dwellings, community facilities, and food service establishments.

“Instead of using the opportunity to curb warrantless surveillance of Americans’ private communications and protect the public’s privacy, Congress passed an expansive, unchecked surveillance authority,” Sen. Edward J. Markey (D-Mass.) said after the vote. “This FISA reauthorization legislation is a step backwards, doing nothing to address the extent to which the government conducts surveillance over its own citizens.”

Under the 2008 FISA Amendments Act, electronic communication service providers already included telecommunications carriers, providers of electronic communication services, providers of remote computing services, and “any other communication service provider who has access to wire or electronic communications either as such communications are transmitted or as such communications are stored.” These entities must provide the government with information, facilities, and assistance necessary to obtain communications.

The Brennan Center for Justice at New York University School of Law called the reauthorization “the largest expansion of domestic surveillance authority since the Patriot Act.”

“The bill, which would effectively grant the federal government access to the communications equipment of almost any business in the United States, is a gift to any president who may wish to spy on political enemies,” said Elizabeth Goitein, senior director of the Brennan Center’s Liberty and National Security Program.



China orders Apple to remove Meta apps after “inflammatory” posts about president

Apple and China —

WhatsApp, Threads, Telegram, and Signal removed from Apple App Store in China.


An Apple Store in Shanghai, China, on April 11, 2024.

CFOTO/Future Publishing via Getty Images

Apple said it complied with orders from the Chinese government to remove the Meta-owned WhatsApp and Threads from its App Store in China. Apple also removed Telegram and Signal from China.

“We are obligated to follow the laws in the countries where we operate, even when we disagree,” Apple said in a statement quoted by several news outlets. “The Cyberspace Administration of China ordered the removal of these apps from the China storefront based on their national security concerns. These apps remain available for download on all other storefronts where they appear.”

The Wall Street Journal paraphrased a person familiar with the matter as saying that the Chinese cyberspace agency “asked Apple to remove WhatsApp and Threads from the App Store because both contain political content that includes problematic mentions of the Chinese president [Xi Jinping].”

The New York Times similarly wrote that “a person briefed on the situation said the Chinese government had found content on WhatsApp and Threads about China’s president, Xi Jinping, that was inflammatory and violated the country’s cybersecurity laws. The specifics of what was in the content was unclear, the person said.”

Meta apps Facebook, Instagram, and Messenger were still available for iOS in China today, according to Reuters. As Reuters noted, the four apps removed from Apple’s China store were not widely used in the country, where WeChat is the dominant service.

“These apps and many foreign apps are normally blocked on Chinese networks by the ‘Great Firewall’—the country’s extensive cybersystem of censorship—and can only be used with a virtual private network or other proxy tools,” Reuters wrote. WhatsApp, Threads, Telegram, and Signal were reportedly still available on Apple devices in Hong Kong and Macau, China’s special administrative regions.

US House moves on forcing TikTok sale or ban

China’s crackdown on foreign messaging apps comes amid US debate over whether to ban or force a sale of the Chinese-owned TikTok. The House Commerce Committee last month voted 50–0 to approve a bill that would force TikTok owner ByteDance to sell the company or lose access to the US market.

US lawmakers argue that TikTok poses national security risks, saying that China can use the app to obtain sensitive personal data and manipulate US public opinion. House leaders are reportedly planning a floor vote on the TikTok bill on Saturday.

US lawmakers raised concerns about Apple’s China ties after the recent cancellation of Apple TV+ show The Problem with Jon Stewart. Stewart reportedly told members of his staff that Apple executives were concerned about potential show topics related to China and artificial intelligence.

Apple pulled The New York Times app from its store in China in December 2016, saying that Apple was informed by China “that the app is in violation of local regulations.” The New York Times news app is still unavailable on Apple’s App Store in China, the Reuters article said.

“For years, Apple has bowed to Beijing’s demands that it block an array of apps, including newspapers, VPNs, and encrypted messaging services,” The New York Times noted yesterday. “It also built a data center in the country to house Chinese citizens’ iCloud information, which includes personal contacts, photos and email.”



Crypto influencer guilty of $110M scheme that shut down Mango Markets


A jury has unanimously convicted Avi Eisenberg in the US Department of Justice’s first case involving cryptocurrency open-market manipulation, the DOJ announced Thursday.

The jury found Eisenberg guilty of commodities fraud, commodities market manipulation, and wire fraud in connection with the manipulation on a decentralized cryptocurrency exchange called Mango Markets.

Eisenberg is scheduled to be sentenced on July 29 and is facing “a maximum penalty of 10 years in prison on the commodities fraud count and the commodities manipulation count, and a maximum penalty of 20 years in prison on the wire fraud count,” the DOJ said.

On the Mango Markets exchange, Eisenberg was “engaged in a scheme to fraudulently obtain approximately $110 million worth of cryptocurrency from Mango Markets and its customers by artificially manipulating the price of certain perpetual futures contracts,” the DOJ said. The scheme impacted both investors trading and the exchange itself, which had to suspend operations after Eisenberg’s attack made the exchange insolvent.

Nicole M. Argentieri, the principal deputy assistant attorney general who heads the DOJ’s criminal division, said that Eisenberg’s manipulative trading scheme “puts our financial markets and investors at risk.”

“This prosecution—the first involving the manipulation of cryptocurrency through open-market trades—demonstrates the Criminal Division’s commitment to protecting US financial markets and holding wrongdoers accountable, no matter what mechanism they use to commit manipulation and fraud,” Argentieri said.

Mango Labs has similarly sued Eisenberg over the price manipulation scheme, but that lawsuit was stayed until the DOJ’s case was resolved. Mango Labs is expecting a status update today from the US government and is hoping to proceed with its lawsuit.

Ars could not immediately reach Mango Labs for comment.

Eisenberg’s lawyer, Brian Klein, provided a statement to Ars, saying that Eisenberg’s legal team is “obviously disappointed” but “will keep fighting for our client.”

How the Mango Markets scheme worked

Mango Labs has accused Eisenberg of being a “notorious cryptocurrency market manipulator,” noting in its complaint that he has a “history of attacking multiple cryptocurrency platforms and manipulating cryptocurrency markets.” That history includes allegedly embezzling $14 million in 2021 while Eisenberg was working as a developer for another decentralized marketplace called Fortress, Mango Labs’ complaint said.

Eisenberg’s attack on Mango Markets was designed to net tens of millions more than the alleged Fortress embezzlement. When Eisenberg was first charged, the DOJ explained how his Mango Markets price manipulation scheme worked.

On Mango Markets, investors can “purchase and borrow cryptocurrencies and cryptocurrency-related financial products,” including buying and selling “perpetual futures contracts.”

“When an investor buys or sells a perpetual for a particular cryptocurrency, the investor is not buying or selling that cryptocurrency but is, instead, buying or selling exposure to future movements in the value of that cryptocurrency relative to another cryptocurrency,” the DOJ explained.
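The exposure-without-ownership mechanics the DOJ describes can be illustrated with a toy profit-and-loss calculation. This is a hedged sketch with made-up numbers, not Mango Markets’ actual contract terms or the figures from the case:

```python
# Toy illustration of a perpetual futures position: the trader's gain or
# loss tracks movements in the underlying price, scaled by position size,
# without the trader ever holding the cryptocurrency itself.

def perp_pnl(entry_price: float, current_price: float, size: float) -> float:
    """Unrealized profit/loss on a perpetual position of `size` contracts.

    A positive `size` models a long position (profits when the price
    rises); a negative `size` models a short (profits when it falls).
    """
    return (current_price - entry_price) * size

# A hypothetical long position of 100 contracts entered at $0.038 per token:
# if the price is pushed up to $0.91, the unrealized gain is
# (0.91 - 0.038) * 100 = 87.2 units of the quote currency.
print(perp_pnl(0.038, 0.91, 100))
```

The asymmetry this sketch shows is the lever in a price-manipulation scheme: artificially inflating the underlying price inflates the paper value of a large long perpetual position by the same multiple.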



Netflix doc accused of using AI to manipulate true crime story

Everything is not as it seems —

Producer remained vague about whether AI was used to edit photos.

A cropped image showing Raw TV’s poster for the Netflix documentary What Jennifer Did, which features a long front tooth that leads critics to believe it was AI-generated.

An executive producer of the Netflix hit What Jennifer Did has responded to accusations that the true crime documentary used AI images when depicting Jennifer Pan, a woman currently imprisoned in Canada for orchestrating a murder-for-hire scheme targeting her parents.

What Jennifer Did shot to the top spot in Netflix’s global top 10 when it debuted in early April, attracting swarms of true crime fans who wanted to know more about why Pan paid hitmen $10,000 to murder her parents. But the documentary quickly became a source of controversy as fans noticed glaring flaws in its images, from weirdly mismatched earrings to Pan’s nose appearing to lack nostrils, the Daily Mail reported in a post showing a plethora of examples of images from the film.

Futurism was among the first to point out that these flawed images (around the 28-minute mark of the documentary) “have all the hallmarks of an AI-generated photo, down to mangled hands and fingers, misshapen facial features, morphed objects in the background, and a far-too-long front tooth.” The image with the long front tooth was even used in Netflix’s poster for the movie.

Because the movie’s credits do not mention any uses of AI, critics called out the documentary filmmakers for potentially embellishing a movie that’s supposed to be based on real-life events.

But Jeremy Grimaldi—who is also the crime reporter who wrote a book on the case and provided the documentary with research and police footage—told the Toronto Star that the images were not AI-generated.

Grimaldi confirmed that all images of Pan used in the movie were real photos. He said that some of the images were edited, though, not to blur the lines between truth and fiction, but to protect the identity of the source of the images.

“Any filmmaker will use different tools, like Photoshop, in films,” Grimaldi told The Star. “The photos of Jennifer are real photos of her. The foreground is exactly her. The background has been anonymized to protect the source.”

While Grimaldi’s comments provide some assurance that the photos are edited versions of real photos of Pan, they are also vague enough to obscure whether AI was among the “different tools” used to edit the photos.

One photographer, Joe Foley, wrote in a post for Creative Bloq that he thought “documentary makers may have attempted to enhance old low-resolution images using AI-powered upscaling or photo restoration software to try to make them look clearer on a TV screen.”

“The problem is that even the best AI software can only take a poor-quality image so far, and such programs tend to over sharpen certain lines, resulting in strange artifacts,” Foley said.

Foley suggested that Netflix should have “at the very least” clarified that images had been altered “to avoid this kind of backlash,” noting that “any kind of manipulation of photos in a documentary is controversial because the whole point is to present things as they were.”

Hollywood’s increasing use of AI has indeed been controversial, with screenwriters’ unions opposing AI tools as “plagiarism machines” and artists stirring recent backlash over the “experimental” use of AI art in a horror film. Even using AI for a movie poster, as Civil War did, is enough to generate controversy, the Hollywood Reporter reported.

Neither Raw TV, the production company behind What Jennifer Did, nor Netflix responded to Ars’ request for comment.



Elon Musk’s Grok keeps making up fake news based on X users’ jokes

It’s all jokes until it isn’t —

X likely hopes to avoid liability with disclaimer that Grok “can make mistakes.”


X’s chatbot Grok is supposed to be an AI engine crunching the platform’s posts to surface and summarize breaking news, but this week, Grok’s flaws were once again exposed when the chatbot got confused and falsely accused an NBA star of criminal vandalism.

“Klay Thompson Accused in Bizarre Brick-Vandalism Spree,” Grok’s headline read in an AI-powered trending-tab post that has remained on X (formerly Twitter) for days. Beneath the headline, Grok went into even more detail to support its fake reporting:

In a bizarre turn of events, NBA star Klay Thompson has been accused of vandalizing multiple houses with bricks in Sacramento. Authorities are investigating the claims after several individuals reported their houses being damaged, with windows shattered by bricks. Klay Thompson has not yet issued a statement regarding the accusations. The incidents have left the community shaken, but no injuries were reported. The motive behind the alleged vandalism remains unclear.

Grok appears to have confused a common basketball term: players are said to be throwing “bricks” when they badly miss a shot. According to SF Gate, which was one of the first outlets to report the Grok error, Thompson had an “all-time rough shooting” night, hitting none of his shots in what was his emotional last game with the Golden State Warriors before becoming an unrestricted free agent.

In small type under Grok’s report, X includes a disclaimer saying, “Grok is an early feature and can make mistakes. Verify its outputs.”

But instead of verifying Grok’s outputs, it appeared that X users—in the service’s famously joke-y spirit—decided to fuel Grok’s misinformation. Under the post, X users, some of them NBA fans, commented with fake victim reports, using the same joke format to seemingly convince Grok that “several individuals reported their houses being damaged.” Some of these joking comments were viewed by millions.

First off… I am ok.

My house was vandalized by bricks 🧱

After my hands stopped shaking, I managed to call the Sheriff…They were quick to respond🚨

My window was gone and the police asked if I knew who did it👮‍♂️

I said yes, it was Klay Thompson

— LakeShowYo (@LakeShowYo) April 17, 2024

First off…I am ok.

My house was vandalized by bricks in Sacramento.

After my hands stopped shaking, I managed to call the Sheriff, they were quick to respond.

My window is gone, the police asked me if I knew who did it.

I said yes, it was Klay Thompson. pic.twitter.com/smrDs6Yi5M

— KeeganMuse (@KeegMuse) April 17, 2024

First off… I am ok.

My house was vandalized by bricks 🧱

After my hands stopped shaking, I managed to call the Sheriff…They were quick to respond🚨

My window was gone and the police asked if I knew who did it👮‍♂️

I said yes, it was Klay Thompson pic.twitter.com/JaWtdJhFli

— JJJ Muse (@JarenJJMuse) April 17, 2024

X did not immediately respond to Ars’ request for comment or confirm if the post will be corrected or taken down.

In the past, both Microsoft and chatbot maker OpenAI have faced defamation lawsuits over similar fabrications in which ChatGPT falsely accused a politician and a radio host of completely made-up criminal histories. Microsoft was also sued by an aerospace professor who Bing Chat falsely labeled a terrorist.

Experts told Ars that it remains unclear if disclaimers like X’s will spare companies from liability should more people decide to sue over fake AI outputs. Defamation claims might depend on proving that platforms “knowingly” publish false statements, which disclaimers suggest they do. Last July, the Federal Trade Commission launched an investigation into OpenAI, demanding that the company address the FTC’s fears of “false, misleading, or disparaging” AI outputs.

Because the FTC doesn’t comment on its investigations, it’s impossible to know if its probe will impact how OpenAI conducts business.

For people suing AI companies, the urgency of protecting against false outputs seems obvious. Last year, the radio host suing OpenAI, Mark Walters, accused the company of “sticking its head in the sand” and “recklessly disregarding whether the statements were false under circumstances when they knew that ChatGPT’s hallucinations were pervasive and severe.”

X just released Grok to all premium users this month, TechCrunch reported, right around the time that X began giving away premium access to the platform’s top users. During that wider rollout, X touted Grok’s new ability to summarize all trending news and topics, perhaps stoking interest in the feature and boosting Grok usage just before Grok spat out the potentially defamatory post about the NBA star.

Thompson has not issued any statements on Grok’s fake reporting.

Grok’s false post about Thompson may be the first widely publicized example of potential defamation from Grok, but it wasn’t the first time that Grok promoted fake news in response to X users joking around on the platform. During the solar eclipse, a Grok-generated headline read, “Sun’s Odd Behavior: Experts Baffled,” Gizmodo reported.

While it’s amusing to some X users to manipulate Grok, the pattern suggests that Grok may also be vulnerable to being manipulated by bad actors into summarizing and spreading more serious misinformation or propaganda. That’s apparently already happening, too. In early April, Grok made up a headline about Iran attacking Israel with heavy missiles, Mashable reported.



Cops can force suspect to unlock phone with thumbprint, US court rules

A man holding up his thumb for a thumbprint scan

The US Constitution’s Fifth Amendment protection against self-incrimination does not prohibit police officers from forcing a suspect to unlock a phone with a thumbprint scan, a federal appeals court ruled yesterday. The ruling does not apply to all cases in which biometrics are used to unlock an electronic device but is a significant decision in an unsettled area of the law.

The US Court of Appeals for the 9th Circuit had to grapple with the question of “whether the compelled use of Payne’s thumb to unlock his phone was testimonial,” the ruling in United States v. Jeremy Travis Payne said. “To date, neither the Supreme Court nor any of our sister circuits have addressed whether the compelled use of a biometric to unlock an electronic device is testimonial.”

A three-judge panel at the 9th Circuit ruled unanimously against Payne, affirming a US District Court’s denial of Payne’s motion to suppress evidence. Payne was a California parolee who was arrested by California Highway Patrol (CHP) after a 2021 traffic stop and charged with possession with intent to distribute fentanyl, fluorofentanyl, and cocaine.

There was a dispute in District Court over whether a CHP officer “forcibly used Payne’s thumb to unlock the phone.” But for the purposes of Payne’s appeal, the government “accepted the defendant’s version of the facts, i.e., ‘that defendant’s thumbprint was compelled.'”

Payne’s Fifth Amendment claim “rests entirely on whether the use of his thumb implicitly related certain facts to officers such that he can avail himself of the privilege against self-incrimination,” the ruling said. Judges rejected his claim, holding “that the compelled use of Payne’s thumb to unlock his phone (which he had already identified for the officers) required no cognitive exertion, placing it firmly in the same category as a blood draw or fingerprint taken at booking.”

“When Officer Coddington used Payne’s thumb to unlock his phone—which he could have accomplished even if Payne had been unconscious—he did not intrude on the contents of Payne’s mind,” the court also said.

Suspect’s mental process is key

Payne conceded that “the use of biometrics to open an electronic device is akin to providing a physical key to a safe” but argued it is still a testimonial act because it “simultaneously confirm[s] ownership and authentication of its contents,” the court said. “However, Payne was never compelled to acknowledge the existence of any incriminating information. He merely had to provide access to a source of potential information.”

The appeals court cited two Supreme Court rulings in cases involving the US government. In Doe v. United States in 1988, the government compelled a person to sign forms consenting to disclosure of bank records relating to accounts that the government already knew about. The Supreme Court “held that this was not a testimonial production, reasoning that the signing of the forms related no information about existence, control, or authenticity of the records that the bank could ultimately be forced to produce,” the 9th Circuit said.

In United States v. Hubbell in 2000, a subpoena compelled a suspect to produce 13,120 pages of documents and records and respond “to a series of questions that established that those were all of the documents in his custody or control that were responsive to the commands in the subpoena.” The Supreme Court ruled against the government, as the 9th Circuit explained:

The Court held that this act of production was of a fundamentally different kind than that at issue in Doe because it was “unquestionably necessary for respondent to make extensive use of ‘the contents of his own mind’ in identifying the hundreds of documents responsive to the requests in the subpoena.” The “assembly of those documents was like telling an inquisitor the combination to a wall safe, not like being forced to surrender the key to a strongbox.” Thus, the dividing line between Doe and Hubbell centers on the mental process involved in a compelled act, and an inquiry into whether that act implicitly communicates the existence, control, or authenticity of potential evidence.



Feds appoint “AI doomer” to run AI safety at US institute

Confronting doom —

Former OpenAI researcher once predicted a 50 percent chance of AI killing all of us.


The US AI Safety Institute—part of the National Institute of Standards and Technology (NIST)—has finally announced its leadership team after much speculation.

Appointed as head of AI safety is Paul Christiano, a former OpenAI researcher who pioneered a foundational AI safety technique called reinforcement learning from human feedback (RLHF), but who is also known for predicting that “there’s a 50 percent chance AI development could end in ‘doom.'” While Christiano’s research background is impressive, some fear that by appointing a so-called “AI doomer,” NIST risks encouraging a style of non-scientific thinking that many critics view as sheer speculation.

There have been rumors that NIST staffers oppose the hiring. A controversial VentureBeat report last month cited two anonymous sources claiming that, seemingly because of Christiano’s so-called “AI doomer” views, NIST staffers were “revolting.” Some staff members and scientists allegedly threatened to resign, VentureBeat reported, fearing “that Christiano’s association” with effective altruism and “longtermism could compromise the institute’s objectivity and integrity.”

NIST’s mission is rooted in advancing science by working to “promote US innovation and industrial competitiveness by advancing measurement science, standards, and technology in ways that enhance economic security and improve our quality of life.” Effective altruists believe in “using evidence and reason to figure out how to benefit others as much as possible,” while longtermists hold that “we should be doing much more to protect future generations.” Both stances are more subjective and opinion-based.

On the Bankless podcast, Christiano shared his opinions last year that “there’s something like a 10–20 percent chance of AI takeover” that results in humans dying, and “overall, maybe you’re getting more up to a 50-50 chance of doom shortly after you have AI systems that are human level.”

“The most likely way we die involves—not AI comes out of the blue and kills everyone—but involves we have deployed a lot of AI everywhere… [And] if for some reason, God forbid, all these AI systems were trying to kill us, they would definitely kill us,” Christiano said.

Critics of so-called “AI doomers” have warned that focusing on any potentially overblown talk of hypothetical killer AI systems or existential AI risks may stop humanity from focusing on current perceived harms from AI, including environmental, privacy, ethics, and bias issues. Emily Bender, a University of Washington professor of computational linguistics who has warned about AI doomers thwarting important ethical work in the field, told Ars that because “weird AI doomer discourse” was included in Joe Biden’s AI executive order, “NIST has been directed to worry about these fantasy scenarios” and “that’s the underlying problem” leading to Christiano’s appointment.

“I think that NIST probably had the opportunity to take it a different direction,” Bender told Ars. “And it’s unfortunate that they didn’t.”

As head of AI safety, Christiano will seemingly have to monitor for current and potential risks. He will “design and conduct tests of frontier AI models, focusing on model evaluations for capabilities of national security concern,” steer processes for evaluations, and implement “risk mitigations to enhance frontier model safety and security,” the Department of Commerce’s press release said.

Christiano has experience mitigating AI risks. He left OpenAI to found the Alignment Research Center (ARC), which the Commerce Department described as “a nonprofit research organization that seeks to align future machine learning systems with human interests by furthering theoretical research.” Part of ARC’s mission is to test if AI systems are evolving to manipulate or deceive humans, ARC’s website said. ARC also conducts research to help AI systems scale “gracefully.”

Because of Christiano’s research background, some people think he is a good choice to helm the safety institute, such as Divyansh Kaushik, an associate director for emerging technologies and national security at the Federation of American Scientists. On X (formerly Twitter), Kaushik wrote that the safety institute is designed to mitigate chemical, biological, radiological, and nuclear risks from AI, and Christiano is “extremely qualified” for testing those AI models. Kaushik cautioned, however, that “if there’s truth to NIST scientists threatening to quit” over Christiano’s appointment, “obviously that would be serious if true.”

The Commerce Department does not comment on its staffing, so it’s unclear if anyone actually resigned or plans to resign over Christiano’s appointment. Since the announcement was made, Ars has not been able to find any public announcements from NIST staffers suggesting that they might be considering stepping down.

In addition to Christiano, the safety institute’s leadership team will include Mara Quintero Campbell, a Commerce Department official who led projects on COVID response and CHIPS Act implementation, as acting chief operating officer and chief of staff. Adam Russell, an expert focused on human-AI teaming, forecasting, and collective intelligence, will serve as chief vision officer. Rob Reich, a human-centered AI expert on leave from Stanford University, will be a senior advisor. And Mark Latonero, a former White House global AI policy expert who helped draft Biden’s AI executive order, will be head of international engagement.

“To safeguard our global leadership on responsible AI and ensure we’re equipped to fulfill our mission to mitigate the risks of AI and harness its benefits, we need the top talent our nation has to offer,” Gina Raimondo, US Secretary of Commerce, said in the press release. “That is precisely why we’ve selected these individuals, who are the best in their fields, to join the US AI Safety Institute executive leadership team.”

VentureBeat’s report claimed that Raimondo directly appointed Christiano.

Bender told Ars that there’s no advantage to NIST including “doomsday scenarios” in its research on “how government and non-government agencies are using automation.”

“The fundamental problem with the AI safety narrative is that it takes people out of the picture,” Bender told Ars. “But the things we need to be worrying about are what people do with technology, not what technology autonomously does.”



Billions of public Discord messages may be sold through a scraping service

Discord chat-scraping service —

Cross-server tracking suggests a new understanding of “public” chat servers.

Discord logo, warped by vertical perspective over a phone displaying the app

Getty Images

It’s easy to get the impression that Discord chat messages are ephemeral, especially across different public servers, where lines fly upward at a near-unreadable pace. But someone claims to be catching and compiling that data and is offering packages that can track more than 600 million users across more than 14,000 servers.

Joseph Cox at 404 Media confirmed that Spy Pet, a service that sells access to a database of purportedly 3 billion Discord messages, offers data “credits” to customers who pay in bitcoin, ethereum, or other cryptocurrency. Searching individual users will reveal the servers that Spy Pet can track them across, a raw and exportable table of their messages, and connected accounts, such as GitHub. Ominously, Spy Pet lists more than 86,000 other servers in which it has “no bots,” but “we know it exists.”

  • An example of Spy Pet’s service from its website. Shown are a user’s nicknames, connected accounts, banner image, server memberships, and messages across those servers tracked by Spy Pet.

    Spy Pet

  • Statistics on servers, users, and messages purportedly logged by Spy Pet.

    Spy Pet

  • An example image of the publicly available data gathered by Spy Pet, in this example for a public server for the game Deep Rock Galactic: Survivor.

    Spy Pet

As Cox notes, Discord doesn’t make messages inside server channels easy to publicly access and search the way blog posts or unlocked social media feeds are. But many Discord users may not expect their messages, server memberships, bans, or other data to be grabbed by a bot, compiled, and sold to anybody wishing to pin them all on a particular user. 404 Media confirmed the service’s function with multiple user examples. Private messages are not mentioned by Spy Pet and are presumably still secure.

Spy Pet openly asks those training AI models, or “federal agents looking for a new source of intel,” to contact them for deals. As noted by 404 Media and confirmed by Ars, clicking on the “Request Removal” link plays a clip of J. Jonah Jameson from Spider-Man (the Tobey Maguire/Sam Raimi version) laughing at the idea of advance payment before an abrupt “You’re serious?” Users of Spy Pet, however, are assured of “secure and confidential” searches, with random usernames.

This author found nearly every public Discord he had ever dropped into for research or reporting in Spy Pet’s server list. Those who haven’t paid for message access can only see fairly benign public-facing elements, like stickers, emojis, and charted member totals over time. But as an indication of the reach of Spy Pet’s scraping, it’s an effective warning, or enticement, depending on your goals.

Ars has reached out to Spy Pet for comment and will update this post if we receive a response. A Discord spokesperson told Ars that the company is investigating whether Spy Pet violated its terms of service and community guidelines. It will take “appropriate steps to enforce our policies,” the company said, and could not provide further comment.



Tesla asks shareholders to approve Texas move and restore Elon Musk’s $56B pay


Tesla CEO Elon Musk at an opening event for Tesla’s Gigafactory on March 22, 2022, in Gruenheide, southeast of Berlin.

Getty Images | Patrick Pleul

Tesla is asking shareholders to approve a move to Texas and to re-approve a $55.8 billion pay package for CEO Elon Musk that was recently voided by a Delaware judge.

Musk’s 2018 pay package was voided in a ruling by Delaware Court of Chancery Judge Kathaleen McCormick, who found that the deal was unfair to shareholders. After the ruling, Musk said he would seek a shareholder vote on transferring Tesla’s state of incorporation from Delaware to Texas.

The proposed move to Texas and Musk’s pay package will be up for votes at Tesla’s 2024 annual meeting on June 13, Tesla Board Chairperson Robyn Denholm wrote in a letter to shareholders that was included in a regulatory filing today.

“Because the Delaware Court second-guessed your decision, Elon has not been paid for any of his work for Tesla for the past six years that has helped to generate significant growth and stockholder value,” the letter said. “That strikes us—and the many stockholders from whom we already have heard—as fundamentally unfair, and inconsistent with the will of the stockholders who voted for it.”

On the proposed move to Texas, the letter to shareholders said that “Texas is already our business home, and we are committed to it.” Moving the state of incorporation is really about operating under a state’s laws and court system, though. Incorporating in Texas “will restore Tesla’s stockholder democracy,” Denholm wrote.

Judge: Board members “were beholden to Musk”

Musk is a member of Tesla’s board. Although Musk and his brother Kimbal recused themselves from the 2018 pay-plan vote, McCormick’s ruling said that “five of the six directors who voted on the Grant were beholden to Musk or had compromising conflicts.” McCormick determined that the proxy statement given to investors for the 2018 vote “inaccurately described key directors as independent and misleadingly omitted details about the process.”

McCormick also wrote that Denholm had a “lackadaisical approach to her oversight obligations” and that she “derived the vast majority of her wealth from her compensation as a Tesla director.”

The ruling in favor of lead plaintiff and Tesla shareholder Richard Tornetta rescinded Musk’s pay package in order to “restore the parties to the position they occupied before the challenged transaction.”

Tornetta’s lawyer, Greg Varallo, declined to provide any detailed comment on Tesla’s plan for a new shareholder vote. “We are studying the Tesla proxy and will decide on any response in due course,” Varallo told Ars today.

In the new letter to shareholders, Denholm wrote that Tesla’s performance since 2018 proves that the pay package was deserved. Although Tesla’s stock price has fallen about 37 percent this year, it is up more than 630 percent since the March 2018 shareholder vote.

“We do not agree with what the Delaware Court decided, and we do not think that what the Delaware Court said is how corporate law should or does work,” Denholm wrote. “So we are coming to you now so you can help fix this issue—which is a matter of fundamental fairness and respect to our CEO. You have the chance to reinstate your vote and make it count. We are asking you to make your voice heard—once again—by voting to approve ratification of Elon’s 2018 compensation plan.”



ISPs can charge extra for fast gaming under FCC’s Internet rules, critics say

Fast lanes —

FCC plan rejected request to ban what agency calls “positive” discrimination.

Illustration of network data represented by curving lines flowing on a dark background.

Getty Images | Yuichiro Chino

Some net neutrality proponents are worried that soon-to-be-approved Federal Communications Commission rules will allow harmful fast lanes because the plan doesn’t explicitly ban “positive” discrimination.

FCC Chairwoman Jessica Rosenworcel’s proposed rules for Internet service providers would prohibit blocking, throttling, and paid prioritization. The rules mirror the ones imposed by the FCC during the Obama era and repealed during Trump’s presidency. But some advocates are criticizing a decision to let Internet service providers speed up certain types of applications as long as application providers don’t have to pay for special treatment.

Stanford Law Professor Barbara van Schewick, who has consistently argued for stricter net neutrality rules, wrote in a blog post on Thursday that “harmful 5G fast lanes are coming.”

“T-Mobile, AT&T and Verizon are all testing ways to create these 5G fast lanes for apps such as video conferencing, games, and video where the ISP chooses and controls what gets boosted,” van Schewick wrote. “They use a technical feature in 5G called network slicing, where part of their radio spectrum gets used as a special lane for the chosen app or apps, separated from the usual Internet traffic. The FCC’s draft order opens the door to these fast lanes, so long as the app provider isn’t charged for them.”

In an FCC filing yesterday, AT&T said that carriers will use network slicing “to better meet the needs of particular business applications and consumer preferences than they could over a best-efforts network that generally treats all traffic the same.”

Carriers could charge more for faster gaming

Van Schewick warns that carriers could charge consumers more for plans that speed up specific types of content. For example, a mobile operator could offer a basic plan alongside more expensive tiers that boost certain online games or a tier that boosts services like YouTube and TikTok.

Ericsson, a telecommunications vendor that sells equipment to carriers including AT&T, Verizon, and T-Mobile, has pushed for exactly this type of service. In a report on how network slicing can be used commercially, Ericsson said that “many gamers are willing to pay for enhanced gaming experiences” and would “pay up to $10.99 more for a guaranteed gaming experience on top of their 5G monthly subscription.”

Before the draft net neutrality order was released, van Schewick urged the FCC to “clarify that its proposed no-throttling rule prohibits ISPs from speeding up and slowing down applications and classes of applications.”

In a different filing last month, several advocacy groups similarly argued that the “no-throttling rule needs to ban selective speeding up, in addition to slowing down.” That filing was submitted by the American Civil Liberties Union, the Electronic Frontier Foundation, the Open Technology Institute at New America, Public Knowledge, Fight for the Future, and United Church of Christ Media Justice Ministry.

The request for a ban on selective speeding was denied in paragraph 492 of Rosenworcel’s draft rules, which are scheduled for an April 25 vote. The draft order argues that the FCC’s definition of “throttling” is expansive enough that an explicit ban on what the agency called positive discrimination isn’t needed:

With the no-throttling rule, we ban conduct that is not outright blocking, but inhibits the delivery of particular content, applications, or services, or particular classes of content, applications, or services. Likewise, we prohibit conduct that impairs or degrades lawful traffic to a non-harmful device or class of devices. We interpret this prohibition to include, for example, any conduct by a BIAS [Broadband Internet Access Service] provider that impairs, degrades, slows down, or renders effectively unusable particular content, services, applications, or devices, that is not reasonable network management. Our interpretation of “throttling” encompasses a wide variety of conduct that could impair or degrade an end user’s ability to access content of their choosing; thus, we decline commenters’ request to modify the rule to explicitly include positive and negative discrimination of content.
