Policy

Critics question tech-heavy lineup of new Homeland Security AI safety board

Adventures in 21st century regulation —

CEO-heavy board to tackle elusive AI safety concept and apply it to US infrastructure.

On Friday, the US Department of Homeland Security announced the formation of an Artificial Intelligence Safety and Security Board that consists of 22 members pulled from the tech industry, government, academia, and civil rights organizations. But given the nebulous nature of the term “AI,” which can apply to a broad spectrum of computer technology, it’s unclear if this group will even be able to agree on what exactly they are safeguarding us from.

President Biden directed DHS Secretary Alejandro Mayorkas to establish the board, which will meet for the first time in early May and subsequently on a quarterly basis.

The fundamental assumption posed by the board’s existence, and reflected in Biden’s AI executive order from October, is that AI is an inherently risky technology and that American citizens and businesses need to be protected from its misuse. Along those lines, the goal of the group is to help guard against foreign adversaries using AI to disrupt US infrastructure; develop recommendations to ensure the safe adoption of AI tech into transportation, energy, and Internet services; foster cross-sector collaboration between government and businesses; and create a forum for AI leaders to share information on AI security risks with the DHS.

It’s worth noting that the ill-defined nature of the term “Artificial Intelligence” does the new board no favors regarding scope and focus. AI can mean many different things: It can power a chatbot, fly an airplane, control the ghosts in Pac-Man, regulate the temperature of a nuclear reactor, or play a great game of chess. It can be all those things and more, and since many of those applications of AI work very differently, there’s no guarantee any two people on the board will be thinking about the same type of AI.

This confusion is reflected in the quotes provided by the DHS press release from new board members, some of whom are already talking about different types of AI. While OpenAI, Microsoft, and Anthropic are monetizing generative AI systems like ChatGPT based on large language models (LLMs), Ed Bastian, the CEO of Delta Air Lines, refers to entirely different classes of machine learning when he says, “By driving innovative tools like crew resourcing and turbulence prediction, AI is already making significant contributions to the reliability of our nation’s air travel system.”

So, defining the scope of what AI exactly means—and which applications of AI are new or dangerous—might be one of the key challenges for the new board.

A roundtable of Big Tech CEOs attracts criticism

For the inaugural meeting of the AI Safety and Security Board, the DHS selected a tech industry-heavy group, populated with CEOs of four major AI vendors (Sam Altman of OpenAI, Satya Nadella of Microsoft, Sundar Pichai of Alphabet, and Dario Amodei of Anthropic), CEO Jensen Huang of top AI chipmaker Nvidia, and representatives from other major tech companies like IBM, Adobe, Amazon, Cisco, and AMD. There are also reps from big aerospace and aviation: Northrop Grumman and Delta Air Lines.

Upon reading the announcement, some critics took issue with the board’s composition. On LinkedIn, Timnit Gebru, founder of The Distributed AI Research Institute (DAIR), singled out OpenAI’s presence on the board and wrote, “I’ve now seen the full list and it is hilarious. Foxes guarding the hen house is an understatement.”

FCC fines big three carriers $196M for selling users’ real-time location data

The Federal Communications Commission today said it fined T-Mobile, AT&T, and Verizon $196 million “for illegally sharing access to customers’ location information without consent and without taking reasonable measures to protect that information against unauthorized disclosure.”

The fines relate to sharing of real-time location data that was revealed in 2018. The FCC proposed the fines in 2020, when the commission had a Republican majority, and finalized them today.

All three major carriers vowed to appeal the fines after they were announced today and said they have discontinued the data-sharing programs that the fines relate to.

The fines are $80.1 million for T-Mobile, $57.3 million for AT&T, and $46.9 million for Verizon. T-Mobile is also on the hook for a $12.2 million fine issued to Sprint, which was bought by T-Mobile shortly after the penalties were proposed over four years ago.
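
The arithmetic behind the headline number checks out: the four penalties listed above sum to roughly $196 million.

```python
# Per-carrier fines, in millions of dollars (figures as reported above).
fines = {
    "T-Mobile": 80.1,
    "AT&T": 57.3,
    "Verizon": 46.9,
    "Sprint": 12.2,  # now T-Mobile's responsibility post-merger
}

total = sum(fines.values())
print(f"Total: ${total:.1f} million")  # Total: $196.5 million
```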

Today, the FCC summarized its findings as follows:

The FCC Enforcement Bureau investigations of the four carriers found that each carrier sold access to its customers’ location information to “aggregators,” who then resold access to such information to third-party location-based service providers. In doing so, each carrier attempted to offload its obligations to obtain customer consent onto downstream recipients of location information, which in many instances meant that no valid customer consent was obtained. This initial failure was compounded when, after becoming aware that their safeguards were ineffective, the carriers continued to sell access to location information without taking reasonable measures to protect it from unauthorized access.

“Shady actors” got hold of data

The problem first came to light with reports of customer location data “being disclosed by the largest American wireless carriers without customer consent or other legal authorization to a Missouri Sheriff through a ‘location-finding service’ operated by Securus, a provider of communications services to correctional facilities, to track the location of numerous individuals,” the FCC said.

Chairwoman Jessica Rosenworcel said that news reports in 2018 “revealed that the largest wireless carriers in the country were selling our real-time location information to data aggregators, allowing this highly sensitive data to wind up in the hands of bail-bond companies, bounty hunters, and other shady actors. This ugly practice violates the law—specifically Section 222 of the Communications Act, which protects the privacy of consumer data.”

For a time after the 2018 reports, “all four carriers continued to operate their programs without putting in place reasonable safeguards to ensure that the dozens of location-based service providers with access to their customers’ location information were actually obtaining customer consent,” the FCC said.

The three carriers are ready to challenge the fines in court. “This industry-wide third-party aggregator location-based services program was discontinued more than five years ago after we took steps to ensure that critical services like roadside assistance, fraud protection and emergency response would not be disrupted,” T-Mobile said in a statement provided to Ars. “We take our responsibility to keep customer data secure very seriously and have always supported the FCC’s commitment to protecting consumers, but this decision is wrong, and the fine is excessive. We intend to challenge it.”

UK outlaws awful default passwords on connected devices

Tacking an S onto IoT —

The law aims to prevent global-scale botnet attacks.

If you build a gadget that connects to the Internet and sell it in the United Kingdom, you can no longer make the default password “password.” In fact, you’re not supposed to have default passwords at all.

A new version of the 2022 Product Security and Telecommunications Infrastructure Act (PSTI) is now in effect, covering just about everything that a consumer can buy that connects to the web. Under the guidelines, even the tiniest Wi-Fi board must either have a randomized password or else generate a password upon initialization (through a smartphone app or other means). This password can’t be incremental (“password1,” “password54”), and it can’t be “related in an obvious way to public information,” such as MAC addresses or Wi-Fi network names. A device should be sufficiently strong against brute-force access attacks, including credential stuffing, and should have a “simple mechanism” for changing the password.
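
To make those rules concrete, here is a rough sketch of what a default-password check along these lines might look like. The function name, the blocklist, and the heuristics are illustrative only, not the statutory test; the law states the requirements, not an algorithm.

```python
import re

def violates_default_password_rules(password: str, public_info: list[str]) -> bool:
    """Illustrative check for the rules described above: no blanket
    defaults, no incremental defaults, and nothing obviously derived
    from public information such as a MAC address or Wi-Fi SSID."""
    lowered = password.lower()
    # Blanket defaults like "password" are out entirely.
    if lowered in {"password", "admin", "12345678", "default"}:
        return True
    # Incremental defaults: a common word plus a counter ("password54").
    if re.fullmatch(r"(password|admin|user)\d+", lowered):
        return True
    # Obviously related to public device information.
    stripped = lowered.replace(":", "").replace("-", "")
    for info in public_info:
        cleaned = info.lower().replace(":", "").replace("-", "")
        if cleaned and cleaned in stripped:
            return True
    return False

print(violates_default_password_rules("password54", []))                      # True
print(violates_default_password_rules("MyWiFi-guest", ["MyWiFi"]))            # True
print(violates_default_password_rules("kV9#mW2qLx", ["AA:BB:CC:DD:EE:FF"]))   # False
```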

There’s more, and it’s just as head-noddingly obvious. Software components, where reasonable, “should be securely updateable,” should actually check for updates, and should update either automatically or in a way “simple for the user to apply.” Perhaps most importantly, device owners can report security issues and expect to hear back about how that report is being handled.

Violations of the new device laws can result in fines up to 10 million pounds (roughly $12.5 million) or 4 percent of related worldwide revenue, whichever is higher.
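
The “whichever is higher” rule is simple to express; a minimal sketch, with made-up revenue figures for illustration:

```python
def max_penalty_gbp(related_worldwide_revenue: float) -> float:
    """Maximum fine under the new rules: the greater of GBP 10 million
    or 4 percent of related worldwide revenue (per the figures above)."""
    return max(10_000_000.0, 0.04 * related_worldwide_revenue)

# A hypothetical vendor with GBP 1 billion in related revenue:
print(max_penalty_gbp(1_000_000_000))  # 40000000.0 -- 4% exceeds the GBP 10M floor
# A smaller vendor with GBP 50 million in related revenue:
print(max_penalty_gbp(50_000_000))     # 10000000.0 -- the GBP 10M floor applies
```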

Besides giving consumers better devices, these regulations are aimed squarely at malware like Mirai, which can conscript devices like routers, cable modems, and DVRs into armies capable of performing distributed denial-of-service attacks (DDoS) on various targets.

As noted by The Record, the European Union’s Cyber Resilience Act has been shaped but not yet passed and enforced, and even if it does pass, it would not take effect until 2027. In the US, there is the Cyber Trust Mark, which would at least give customers the choice of buying decently secured or genially abandoned devices. But the particulars of that label are under debate and seemingly a ways from implementation. At the federal level, a 2020 bill tasked the National Institute of Standards and Technology with applying related standards to connected devices deployed by the feds.

Court upholds New York law that says ISPs must offer $15 broadband

A federal appeals court today reversed a ruling that prevented New York from enforcing a law requiring Internet service providers to sell $15 broadband plans to low-income consumers. The ruling is a loss for six trade groups that represent ISPs, although it isn’t clear right now whether the law will be enforced.

New York’s Affordable Broadband Act (ABA) was blocked in June 2021 by a US District Court judge who ruled that the state law is rate regulation and preempted by federal law. Today, the US Court of Appeals for the 2nd Circuit reversed the ruling and vacated the permanent injunction that barred enforcement of the state law.

For consumers who qualify for means-tested government benefits, the state law requires ISPs to offer “broadband at no more than $15 per month for service of 25Mbps, or $20 per month for high-speed service of 200Mbps,” the ruling noted. The law allows for price increases every few years and makes exemptions available to ISPs with fewer than 20,000 customers.

“First, the ABA is not field-preempted by the Communications Act of 1934 (as amended by the Telecommunications Act of 1996), because the Act does not establish a framework of rate regulation that is sufficiently comprehensive to imply that Congress intended to exclude the states from entering the field,” a panel of appeals court judges stated in a 2-1 opinion.

Trade groups claimed the state law is preempted by former Federal Communications Commission Chairman Ajit Pai’s repeal of net neutrality rules. Pai’s repeal placed ISPs under the more forgiving Title I regulatory framework instead of the common-carrier framework in Title II of the Communications Act.

2nd Circuit judges did not find this argument convincing:

Second, the ABA is not conflict-preempted by the Federal Communications Commission’s 2018 order classifying broadband as an information service. That order stripped the agency of its authority to regulate the rates charged for broadband Internet, and a federal agency cannot exclude states from regulating in an area where the agency itself lacks regulatory authority. Accordingly, we REVERSE the judgment of the district court and VACATE the permanent injunction.

Be careful what you lobby for

The judges’ reasoning is similar to what a different appeals court said in 2019 when it rejected Pai’s attempt to preempt all state net neutrality laws. In that case, the US Court of Appeals for the District of Columbia Circuit said that “in any area where the Commission lacks the authority to regulate, it equally lacks the power to preempt state law.” In a related case, ISPs were unable to block a California net neutrality law.

Several of the trade groups that sued New York “vociferously lobbied the FCC to classify broadband Internet as a Title I service in order to prevent the FCC from having the authority to regulate them,” today’s 2nd Circuit ruling said. “At that time, Supreme Court precedent was already clear that when a federal agency lacks the power to regulate, it also lacks the power to preempt. The Plaintiffs now ask us to save them from the foreseeable legal consequences of their own strategic decisions. We cannot.”

Judges noted that there are several options for ISPs to try to avoid regulation:

If they believe a requirement to provide Internet to low-income families at a reduced price is unfair or misguided, they have several pathways available to them. They could take it up with the New York State Legislature. They could ask Congress to change the scope of the FCC’s Title I authority under the Communications Act. They could ask the FCC to revisit its classification decision, as it has done several times before. But they cannot ask this Court to distort well-established principles of administrative law and federalism to strike down a state law they do not like.

Coincidentally, the 2nd Circuit issued its opinion one day after current FCC leadership reclassified broadband again in order to restore net neutrality rules. ISPs might now have a better case for preempting the New York law. The FCC itself won’t necessarily try to preempt New York’s law, but the agency’s net neutrality order does specifically reject rate regulation at the federal level.

Message-scraping, user-tracking service Spy Pet shut down by Discord

Discord message privacy —

Bot-driven service was also connected to targeted harassment site Kiwi Farms.

Spy Pet, a service that sold access to a rich database of allegedly more than 3 billion Discord messages and details on more than 600 million users, has seemingly been shut down.

404 Media, which broke the story of Spy Pet’s offerings, reports that Spy Pet seems mostly shut down. Spy Pet’s website was unavailable as of this writing. A Discord spokesperson told Ars that the company’s safety team had been “diligently investigating” Spy Pet and that it had banned accounts affiliated with it.

“Scraping our services and self-botting are violations of our Terms of Service and Community Guidelines,” the spokesperson wrote. “In addition to banning the affiliated accounts, we are considering appropriate legal action.” The spokesperson noted that Discord server administrators can adjust server permissions to prevent future such monitoring on otherwise public servers.

Kiwi Farms ties, GDPR violations

The number of servers monitored by Spy Pet had been fluctuating in recent days. The site’s administrator told 404 Media’s Joseph Cox that they were rewriting part of the service while admitting that Discord had banned a number of bots. The administrator had also told 404 Media that he did not “intend for my tool to be used for harassment,” despite a likely related user offering Spy Pet data on Kiwi Farms, a notorious hub for doxxing and online harassment campaigns that frequently targets trans and non-binary people, members of the LGBTQ community, and women.

Even if Spy Pet can somehow work past Discord’s bans or survive legal action, the site’s very nature runs against a number of other Internet regulations across the globe. It’s almost certainly in violation of the European Union’s General Data Protection Regulation (GDPR). As pointed out by StackDiary, Spy Pet and services like it seem to violate at least three articles of the GDPR, including the “right to be forgotten” in Article 17.

Under Article 8 of the GDPR, and likely in the eyes of the FTC, gathering data from what could be children’s accounts and profiting from it is almost certain to draw scrutiny, if not legal action.

Ars was unsuccessful in reaching the administrator of Spy Pet by email and Telegram message. Their last message on Telegram stated that their domain had been suspended and a backup domain was being set up. “TL;DR: Never trust the Germans,” they wrote.

TikTok owner has strong First Amendment case against US ban, professors say

TikTok owner ByteDance is preparing to sue the US government now that President Biden has signed into law a bill that will ban TikTok in the US if its Chinese owner doesn’t sell the company within 270 days. While it’s impossible to predict the outcome with certainty, law professors speaking to Ars believe that ByteDance will have a strong First Amendment case in its lawsuit against the US.

One reason for this belief is that just a few months ago, a US District Court judge blocked a Montana state law that attempted to ban TikTok. In October 2020, another federal judge in Pennsylvania blocked a Trump administration order that would have banned TikTok from operating inside the US. TikTok also won a preliminary injunction against Trump in US District Court for the District of Columbia in September 2020.

“Courts have said that a TikTok ban is a First Amendment problem,” Santa Clara University law professor Eric Goldman, who writes frequent analysis of legal cases involving technology, told Ars this week. “And Congress didn’t really try to navigate away from that. They just went ahead and disregarded the court rulings to date.”

The fact that previous attempts to ban TikTok have failed is “pretty good evidence that the government has an uphill battle justifying the ban,” Goldman said.

TikTok users engage in protected speech

The Montana law “bans TikTok outright and, in doing so, it limits constitutionally protected First Amendment speech,” US District Judge Donald Molloy wrote in November 2023 when he granted a preliminary injunction that blocks the state law.

“The Montana court concluded that the First Amendment challenge would be likely to succeed. This will give TikTok some hope that other courts will follow suit with respect to a national order,” Georgetown Law Professor Anupam Chander told Ars.

Molloy’s ruling said that without TikTok, “User Plaintiffs are deprived of communicating by their preferred means of speech, and thus First Amendment scrutiny is appropriate.” TikTok’s speech interests must be considered “because the application’s decisions related to how it selects, curates, and arranges content are also protected by the First Amendment,” the ruling said.

Banning apps that let people talk to each other “is categorically impermissible,” Goldman said. While the Chinese government engaging in propaganda is a problem, “we need to address that as a government propaganda problem, and not just limited to China,” he said. In Goldman’s view, a broader approach should also be used to stop governments from siphoning user data.

TikTok and opponents of bans haven’t won every case. A federal judge in Texas ruled in favor of Texas Governor Greg Abbott in December 2023. But that ruling only concerned a ban on state employees using TikTok on government-issued devices rather than a law that potentially affects all users of TikTok.

Weighing national security vs. First Amendment

US lawmakers have alleged that the Chinese Communist Party can weaponize TikTok to manipulate public opinion and access user data. But Chander was skeptical of whether the US government could convincingly justify its new law in court on national security grounds.

“Thus far, the government has refused to make public its evidence of a national security threat,” he told Ars. “TikTok put in an elaborate set of controls to insulate the app from malign foreign influence, and the government hasn’t shown why those controls are insufficient.”

The ruling against Trump by a federal judge in Pennsylvania noted that “the Government’s own descriptions of the national security threat posed by the TikTok app are phrased in the hypothetical.”

Chander stressed that the outcome of ByteDance’s planned case against the US is difficult to predict, however. “I would vote against the law if I were a judge, but it’s unclear how judges will weigh the alleged national security risks against the real free expression incursions,” he said.

Montana case may be “bellwether”

There are at least three types of potential plaintiffs that could lodge constitutional challenges to a TikTok ban, Goldman said. There’s TikTok itself, the users of TikTok who would no longer be able to post on the platform, and app stores that would be ordered not to carry the TikTok app.

Montana was sued by TikTok and users. Lead plaintiff Samantha Alario runs a local swimwear business and uses TikTok to market her products.

Montana Attorney General Austin Knudsen appealed the ruling against his state to the US Court of Appeals for the 9th Circuit. The Montana case could make it to the Supreme Court before there is any resolution on the enforceability of the US law, Goldman said.

“It’s possible that the Montana ban is actually going to be the bellwether that’s going to set the template for the constitutional review of the Congressional action,” Goldman said.

Deepfakes in the courtroom: US judicial panel debates new AI evidence rules

Adventures in 21st-century justice —

Panel of eight judges confronts deep-faking AI tech that may undermine legal trials.

On Friday, a federal judicial panel convened in Washington, DC, to discuss the challenges of policing AI-generated evidence in court trials, according to a Reuters report. The US Judicial Conference’s Advisory Committee on Evidence Rules, an eight-member panel responsible for drafting evidence-related amendments to the Federal Rules of Evidence, heard from computer scientists and academics about the potential risks of AI being used to manipulate images and videos or create deepfakes that could disrupt a trial.

The meeting took place amid broader efforts by federal and state courts nationwide to address the rise of generative AI models (such as those that power OpenAI’s ChatGPT or Stability AI’s Stable Diffusion), which can be trained on large datasets with the aim of producing realistic text, images, audio, or videos.

In the published 358-page agenda for the meeting, the committee offers up this definition of a deepfake and the problems AI-generated media may pose in legal trials:

A deepfake is an inauthentic audiovisual presentation prepared by software programs using artificial intelligence. Of course, photos and videos have always been subject to forgery, but developments in AI make deepfakes much more difficult to detect. Software for creating deepfakes is already freely available online and fairly easy for anyone to use. As the software’s usability and the videos’ apparent genuineness keep improving over time, it will become harder for computer systems, much less lay jurors, to tell real from fake.

During Friday’s three-hour hearing, the panel wrestled with the question of whether existing rules, which predate the rise of generative AI, are sufficient to ensure the reliability and authenticity of evidence presented in court.

Some judges on the panel, such as US Circuit Judge Richard Sullivan and US District Judge Valerie Caproni, reportedly expressed skepticism about the urgency of the issue, noting that there have been few instances so far of judges being asked to exclude AI-generated evidence.

“I’m not sure that this is the crisis that it’s been painted as, and I’m not sure that judges don’t have the tools already to deal with this,” said Judge Sullivan, as quoted by Reuters.

Last year, Chief US Supreme Court Justice John Roberts acknowledged the potential benefits of AI for litigants and judges, while emphasizing the need for the judiciary to consider its proper uses in litigation. US District Judge Patrick Schiltz, the evidence committee’s chair, said that determining how the judiciary can best react to AI is one of Roberts’ priorities.

In Friday’s meeting, the committee considered several deepfake-related rule changes. In the agenda for the meeting, US District Judge Paul Grimm and attorney Maura Grossman proposed modifying Federal Rule 901(b)(9) (see page 5), which involves authenticating or identifying evidence. They also recommended the addition of a new rule, 901(c), which might read:

901(c): Potentially Fabricated or Altered Electronic Evidence. If a party challenging the authenticity of computer-generated or other electronic evidence demonstrates to the court that it is more likely than not either fabricated, or altered in whole or in part, the evidence is admissible only if the proponent demonstrates that its probative value outweighs its prejudicial effect on the party challenging the evidence.

The panel agreed during the meeting that this proposal to address concerns about litigants challenging evidence as deepfakes did not work as written and that it will be reworked before being reconsidered later.

Another proposal by Andrea Roth, a law professor at the University of California, Berkeley, suggested subjecting machine-generated evidence to the same reliability requirements as expert witnesses. However, Judge Schiltz cautioned that such a rule could hamper prosecutions by allowing defense lawyers to challenge any digital evidence without establishing a reason to question it.

For now, no definitive rule changes have been made, and the process continues. But we’re witnessing the first steps of how the US justice system will adapt to an entirely new class of media-generating technology.

Putting aside risks from AI-generated evidence, generative AI has led to embarrassing moments for lawyers in court over the past two years. In May 2023, US lawyer Steven Schwartz of the firm Levidow, Levidow, & Oberman apologized to a judge for using ChatGPT to help write court filings that inaccurately cited six nonexistent cases, leading to serious questions about the reliability of AI in legal research. Also, in November, a lawyer for Michael Cohen cited three fake cases that were potentially influenced by a confabulating AI assistant.

Chamber of Commerce sues FTC in Texas, asks court to block ban on noncompetes

FTC vs. business lobby —

Noncompete clauses “benefit employers and workers alike,” Chamber tells court.

The US Chamber of Commerce and other business groups sued the Federal Trade Commission and FTC Chair Lina Khan today in an attempt to block a newly issued ban on noncompete clauses.

The lawsuit was filed in US District Court for the Eastern District of Texas. The US Chamber of Commerce was joined in the suit by Business Roundtable, the Texas Association of Business, and the Longview Chamber of Commerce. The suit seeks a court order that would vacate the rule in its entirety.

The lawsuit claimed that noncompete clauses “benefit employers and workers alike—the employer protects its workforce investments and sensitive information, and the worker benefits from increased training, access to more information, and an opportunity to bargain for higher pay.”

“Having invested in their people and entrusted them with valuable company secrets, businesses have strong interests in preventing others from free-riding on those investments or gaining improper access to competitive, confidential information,” the lawsuit said.

Lawsuit filed one day after FTC issued rule

The lawsuit came one day after the FTC issued its rule banning noncompete clauses, determining that such clauses are an unfair method of competition and thus a violation of Section 5 of the FTC Act. The rule is scheduled to take effect in about four months and would render the vast majority of existing noncompetes unenforceable.

The only existing noncompetes that won’t be nullified are those for senior executives, defined as people in policymaking positions who earn more than $151,164 a year. Existing noncompetes for all other workers would be nullified, and the FTC is prohibiting employers from imposing any new noncompetes on both senior executives and other workers.

“By invalidating existing noncompete agreements and prohibiting businesses and their workers from ever entering into such agreements going forward, the rule will force businesses all over the country—including in this District—to turn to inadequate and expensive alternatives to protect their confidential information, such as nondisclosure agreements and trade-secret lawsuits,” the Chamber of Commerce said in its complaint.

The Chamber argues that the FTC overstepped its authority. “The Commission’s astounding assertion of power breaks with centuries of state and federal law and rests on novel claims of authority by the Commission. From the Founding forward, States have always regulated noncompete agreements,” the lawsuit said.

The FTC says it can impose the ban using authority under sections 5 and 6(g) of the FTC Act. Section 6(g) authorizes the Commission to “make rules and regulations for the purpose of carrying out the provisions of” the FTC Act, including the Section 5 prohibition on unfair methods of competition, the FTC said in its rule.

FTC: “Our legal authority is crystal clear”

“Our legal authority is crystal clear,” an FTC spokesperson said in a statement provided to Ars today. “In the FTC Act, Congress specifically ‘empowered and directed’ the FTC to prevent ‘unfair methods of competition’ and to ‘make rules and regulations for the purposes of carrying out the provisions of’ the FTC Act. This authority has repeatedly been upheld by courts and reaffirmed by Congress. Addressing noncompetes that curtail Americans’ economic freedom is at the very heart of our mandate, and we look forward to winning in court.”

The Chamber’s lawsuit said “the sheer economic and political significance of a nationwide noncompete ban demonstrates that this is a question for Congress to decide, rather than an agency.” If the FTC’s claim of authority is upheld, it “would reflect a boundless and unconstitutional delegation of legislative power to the Executive Branch,” the lawsuit said.

If the US District Court in Texas grants an injunction blocking the ban, the FTC could challenge the ruling in a federal appeals court.

Separately, a lobby group for cable TV and broadband companies issued a statement opposing the ban on noncompetes. “It is disappointing the FTC is poised to undercut small, independent, and rural broadband providers with a sweeping ban on non-competes,” said America’s Communications Association (formerly the American Cable Association). “This unjustified action will make it more challenging to provide quality service, crush competition and allow large incumbents to raid talent and obtain proprietary information.”

Khan said yesterday that “noncompete clauses keep wages low, suppress new ideas, and rob the American economy of dynamism.” The ban “will ensure Americans have the freedom to pursue a new job, start a new business, or bring a new idea to market,” she said.

Chamber of Commerce sues FTC in Texas, asks court to block ban on noncompetes


FTC bans noncompete clauses, declares vast majority unenforceable

No more noncompetes —

Chamber of Commerce vows to sue FTC, will try to block ban on noncompetes.

Federal Trade Commission Chair Lina Khan talks with guests during an event in the Eisenhower Executive Office Building on April 3, 2024.

Getty Images | Chip Somodevilla

The Federal Trade Commission (FTC) today announced that it has issued a final rule banning noncompete clauses. The rule will render the vast majority of current noncompete clauses unenforceable, according to the agency.

“In the final rule, the Commission has determined that it is an unfair method of competition and therefore a violation of Section 5 of the FTC Act, for employers to enter into noncompetes with workers and to enforce certain noncompetes,” the FTC said.

The US Chamber of Commerce said it will sue the FTC in an effort to block the rule, claiming the ban is “a blatant power grab that will undermine American businesses’ ability to remain competitive.”

The FTC proposed the rule in January 2023 and received over 26,000 public comments on its proposal. Over 25,000 of the comments supported the proposed ban, the FTC said. The final rule announced today will take effect 120 days after it is published in the Federal Register, unless opponents of the rule secure a court order blocking it.

The FTC said that “noncompetes are a widespread and often exploitative practice imposing contractual conditions that prevent workers from taking a new job or starting a new business. Noncompetes often force workers to either stay in a job they want to leave or bear other significant harms and costs, such as being forced to switch to a lower-paying field, being forced to relocate, being forced to leave the workforce altogether, or being forced to defend against expensive litigation.”

Noncompete clauses currently bind about 30 million workers in the US, the agency said. “Under the FTC’s new rule, existing noncompetes for the vast majority of workers will no longer be enforceable after the rule’s effective date,” the FTC said.

FTC: “Noncompete clauses keep wages low”

The only existing noncompetes that won’t be nullified are those for senior executives, who represent less than 0.75 percent of workers, the FTC said. The rule defines senior executives as workers earning more than $151,164 a year who are in policy-making positions.

“The final rule allows existing noncompetes with senior executives to remain in force because this subset of workers is less likely to be subject to the kind of acute, ongoing harms currently being suffered by other workers subject to existing noncompetes and because commenters raised credible concerns about the practical impacts of extinguishing existing noncompetes for senior executives,” the FTC said.

Senior executives will be protected from new noncompete clauses after the rule takes effect. Employers will be “banned from entering into or attempting to enforce any new noncompetes, even if they involve senior executives,” the FTC said. “Employers will be required to provide notice to workers other than senior executives who are bound by an existing noncompete that they will not be enforcing any noncompetes against them.”

The FTC vote was 3-2, with Democrats supporting the noncompete ban and Republicans opposing.

“Noncompete clauses keep wages low, suppress new ideas, and rob the American economy of dynamism, including from the more than 8,500 new startups that would be created a year once noncompetes are banned,” FTC Chair Lina Khan said. “The FTC’s final rule to ban noncompetes will ensure Americans have the freedom to pursue a new job, start a new business, or bring a new idea to market.”

Chamber of Commerce CEO Suzanne Clark argued that “the FTC has never been granted the constitutional and statutory authority to write its own competition rules… The Chamber will sue the FTC to block this unnecessary and unlawful rule and put other agencies on notice that such overreach will not go unchecked.”

FTC cites authority, urges businesses to raise wages

The FTC argues that it can impose the rule using authority under sections 5 and 6(g) of the FTC Act:

Alongside section 5, Congress adopted section 6(g) of the Act, in which it authorized the Commission to “make rules and regulations for the purpose of carrying out the provisions of” the FTC Act, which include the Act’s prohibition of unfair methods of competition. The plain text of section 5 and section 6(g), taken together, empower the Commission to promulgate rules for the purpose of preventing unfair methods of competition. That includes legislative rules defining certain conduct as an unfair method of competition.

The FTC said it found evidence that “noncompetes tend to negatively affect competitive conditions in product and service markets, inhibiting new business formation and innovation” and “lead to increased market concentration and higher prices for consumers.”

Businesses can protect trade secrets without noncompetes, the agency said:

Trade secret laws and nondisclosure agreements (NDAs) both provide employers with well-established means to protect proprietary and other sensitive information. Researchers estimate that over 95 percent of workers with a noncompete already have an NDA.

The Commission also finds that instead of using noncompetes to lock in workers, employers that wish to retain employees can compete on the merits for the worker’s labor services by improving wages and working conditions.



Grindr users seek payouts after dating app shared HIV status with vendors

A person's finger hovering over a Grindr app icon on a phone screen

Getty Images | Thomas Trutschel

Grindr is facing a class action lawsuit from hundreds of users over the sharing of HIV statuses and other sensitive personal information with third-party firms.

UK law firm Austen Hays filed the claim in the High Court in London yesterday, the firm announced. The class action “alleges the misuse of private information of thousands of affected UK Grindr users, including highly sensitive information about their HIV status and latest tested date,” the law firm said.

The law firm said it has signed up over 670 potential class members and “is in discussions with thousands of other individuals who are interested in joining the claim.” Austen Hays said that “claimants could receive thousands in damages” from Grindr, a gay dating app, if the case is successful.

Austen Hays alleges that Grindr violated UK data protection laws by sharing sensitive data for commercial purposes without users’ consent, including when it “unlawfully processed and shared users’ data with third parties, including advertising companies Localytics and Apptimize.”

While Austen Hays describes Localytics and Apptimize as advertising firms, they do not seem to be in the business of selling ads. Localytics is software for mobile app marketing and analytics, while Apptimize says it provides A/B testing and feature release management for product teams.

Grindr admitted sharing HIV status, said it stopped

Grindr has admitted sharing HIV status with the firms but stressed that it wasn’t for advertising purposes and pledged to stop sharing that information. The sharing of HIV status came to light in 2018 thanks to the work of independent researchers. At the time, Grindr said it “has never sold, nor will we ever sell, personal user information—especially information regarding HIV status or last test date—to third parties or advertisers.”

Grindr said it “consult[ed] several international health organizations” before determining in 2016 that it would be “beneficial for the health and well-being of our community to give users the option to publish, at their discretion, their HIV status and their ‘Last Tested Date’ to their public profile.”

Grindr acknowledged that it had been “sharing HIV status information with our trusted vendors, Apptimize and Localytics.” Apptimize software helped Grindr test and deploy new app features including an “HIV Testing Reminder” feature, while Localytics software was used “to confirm that the new features were not causing problems with the functioning of the Grindr app,” Grindr said.

Today, Grindr provided Ars with a statement in response to the lawsuit. “We are committed to protecting our users’ data and complying with all applicable data privacy regulations, including in the UK,” the company said. “Grindr has never shared user-reported health information for ‘commercial purposes’ and has never monetized such information. We intend to respond vigorously to this claim, which appears to be based on a mischaracterization of practices from more than four years ago, prior to early 2020.”



Biden signs bill criticized as “major expansion of warrantless surveillance”

Abstract image of human eye on a digital background

Getty Images | Yuichiro Chino

Congress passed and President Biden signed a reauthorization of Title VII of the Foreign Intelligence Surveillance Act (FISA), approving a bill that opponents say includes a “major expansion of warrantless surveillance” under Section 702 of FISA.

Over the weekend, the Reforming Intelligence and Securing America Act was approved by the Senate in a 60-34 vote. The yes votes included 30 Republicans, 28 Democrats, and two independents who caucus with Democrats. The bill, which was previously passed by the House and reauthorizes Section 702 of FISA for two years, was signed by President Biden on Saturday.

“Thousands and thousands of Americans could be forced into spying for the government by this new bill and with no warrant or direct court oversight whatsoever,” Sen. Ron Wyden (D-Ore.), a member of the Senate Select Committee on Intelligence, said on Friday. “Forcing ordinary Americans and small businesses to conduct secret, warrantless spying is what authoritarian countries do, not democracies.”

Wyden and Sen. Cynthia Lummis (R-Wyo.) led a bipartisan group of eight senators who submitted an amendment to reverse what Wyden’s office called “a major expansion of warrantless surveillance under Section 702 of the Foreign Intelligence Surveillance Act that was included in the House-passed bill.” After the bill was approved by the Senate without the amendment, Wyden said it seemed “that senators were unwilling to send this bill back to the House, no matter how common-sense the amendment before them.”

Sen. Ted Cruz (R-Texas) said he voted against the reauthorization “because it failed to include the most important requirement to protect Americans’ civil rights: that law enforcement get a warrant before targeting a US citizen.”

Bill expands definition of service provider

The Wyden/Lummis amendment would have struck language that expands the definition of an electronic communication service provider to include, with some exceptions, any “service provider who has access to equipment that is being or may be used to transmit or store wire or electronic communications.” The exceptions are for public accommodation facilities, dwellings, community facilities, and food service establishments.

“Instead of using the opportunity to curb warrantless surveillance of Americans’ private communications and protect the public’s privacy, Congress passed an expansive, unchecked surveillance authority,” Sen. Edward J. Markey (D-Mass.) said after the vote. “This FISA reauthorization legislation is a step backwards, doing nothing to address the extent to which the government conducts surveillance over its own citizens.”

Under the 2008 FISA Amendments Act, electronic communication service providers already included telecommunications carriers, providers of electronic communication services, providers of remote computing services, and “any other communication service provider who has access to wire or electronic communications either as such communications are transmitted or as such communications are stored.” These entities must provide the government with information, facilities, and assistance necessary to obtain communications.

The Brennan Center for Justice at New York University School of Law called the reauthorization “the largest expansion of domestic surveillance authority since the Patriot Act.”

“The bill, which would effectively grant the federal government access to the communications equipment of almost any business in the United States, is a gift to any president who may wish to spy on political enemies,” said Elizabeth Goitein, senior director of the Brennan Center’s Liberty and National Security Program.



China orders Apple to remove Meta apps after “inflammatory” posts about president

Apple and China —

WhatsApp, Threads, Telegram, and Signal removed from Apple App Store in China.

An Apple Store in Shanghai, China, on April 11, 2024.

CFOTO/Future Publishing via Getty Images

Apple said it complied with orders from the Chinese government to remove the Meta-owned WhatsApp and Threads from its App Store in China. Apple also removed Telegram and Signal from China.

“We are obligated to follow the laws in the countries where we operate, even when we disagree,” Apple said in a statement quoted by several news outlets. “The Cyberspace Administration of China ordered the removal of these apps from the China storefront based on their national security concerns. These apps remain available for download on all other storefronts where they appear.”

The Wall Street Journal paraphrased a person familiar with the matter as saying that the Chinese cyberspace agency “asked Apple to remove WhatsApp and Threads from the App Store because both contain political content that includes problematic mentions of the Chinese president [Xi Jinping].”

The New York Times similarly wrote that “a person briefed on the situation said the Chinese government had found content on WhatsApp and Threads about China’s president, Xi Jinping, that was inflammatory and violated the country’s cybersecurity laws. The specifics of what was in the content was unclear, the person said.”

Meta apps Facebook, Instagram, and Messenger were still available for iOS in China today, according to Reuters. As Reuters noted, the four apps removed from Apple’s China store were not widely used in the country, where WeChat is the dominant service.

“These apps and many foreign apps are normally blocked on Chinese networks by the ‘Great Firewall’—the country’s extensive cybersystem of censorship—and can only be used with a virtual private network or other proxy tools,” Reuters wrote. WhatsApp, Threads, Telegram, and Signal were reportedly still available on Apple devices in Hong Kong and Macau, China’s special administrative regions.

US House moves on forcing TikTok sale or ban

China’s crackdown on foreign messaging apps comes amid US debate over whether to ban or force a sale of the Chinese-owned TikTok. The House Commerce Committee last month voted 50–0 to approve a bill that would force TikTok owner ByteDance to sell the company or lose access to the US market.

US lawmakers argue that TikTok poses national security risks, saying that China can use the app to obtain sensitive personal data and manipulate US public opinion. House leaders are reportedly planning a floor vote on the TikTok bill on Saturday.

US lawmakers raised concerns about Apple’s China ties after the recent cancellation of Apple TV+ show The Problem with Jon Stewart. Stewart reportedly told members of his staff that Apple executives were concerned about potential show topics related to China and artificial intelligence.

Apple pulled The New York Times app from its store in China in December 2016, saying that Apple was informed by China “that the app is in violation of local regulations.” The New York Times news app is still unavailable on Apple’s App Store in China, the Reuters article said.

“For years, Apple has bowed to Beijing’s demands that it block an array of apps, including newspapers, VPNs, and encrypted messaging services,” The New York Times noted yesterday. “It also built a data center in the country to house Chinese citizens’ iCloud information, which includes personal contacts, photos and email.”
