The Federal Communications Commission chair today made a final plea to Congress, asking for money to continue a broadband-affordability program that gave out its last round of $30 discounts to people with low incomes in April.
The Affordable Connectivity Program (ACP) has lowered monthly Internet bills for people who qualify for benefits, but Congress allowed funding to run out. People may receive up to $14 in May if their ISP opted into offering a partial discount during the program’s final month. After that there will be no financial help for the 23 million households enrolled in the program.
“Additional funding from Congress is the only near-term solution for keeping the ACP going,” FCC Chairwoman Jessica Rosenworcel wrote in a letter to members of Congress today. “If additional funding is not promptly appropriated, the one in six households nationwide that rely on this program will face rising bills and increasing disconnection. In fact, according to our survey of ACP beneficiaries, 77 percent of participating households report that losing this benefit would disrupt their service by making them change their plan or lead to them dropping Internet service entirely.”
Some Republican members of Congress have called the program “wasteful” and complained that most people using the discounts had broadband access before the subsidy was available. Rosenworcel’s letter today said the FCC survey found that “68 percent of ACP households stated they had inconsistent or zero connectivity prior to ACP.”
Senate Commerce Committee Chair Maria Cantwell (D-Wash.) included $7 billion for the program in a draft spectrum auction bill on Friday, but previous proposals from Democrats to extend funding have fizzled out. The White House today urged Congress to fund the program and blamed Republicans for not supporting funding proposals.
“President Biden is once again calling on Republicans in Congress to join their Democratic colleagues in support of extending funding for the Affordable Connectivity Program,” the White House said.
Some consumer advocates have called on the FCC to fund the ACP by increasing Universal Service Fund collections, which could involve raising fees on phone service or imposing Universal Service fees on broadband for the first time. Rosenworcel has instead looked to Congress to allocate funding for the ACP.
“Time is running out,” Rosenworcel’s letter said. “Additional funding is needed immediately to avoid the disruption millions of ACP households that rely on this program for essential connectivity are already starting to experience.”
In mid-June 2019, Microsoft co-founder Bill Gates and CEO Satya Nadella received a rude awakening in an email warning that Google had officially gotten too far ahead on AI and that Microsoft may never catch up without investing in OpenAI.
With the subject line “Thoughts on OpenAI,” the email came from Microsoft’s chief technology officer, Kevin Scott, who is also the company’s executive vice president of AI. In it, Scott said that he was “very, very worried” that he had made “a mistake” by dismissing Google’s initial AI efforts as a “game-playing stunt.”
Scott suggested that, far from goofing around, Google had been building critical AI infrastructure that was already paying off. A competitive analysis of Google’s products, Scott said, showed that Google was competing even more effectively in search. He realized that while Google was already moving on to production for “larger scale, more interesting” AI models, it might take Microsoft “multiple years” before it could even attempt to compete with Google.
As just one example, Scott warned, “their auto-complete in Gmail, which is especially useful in the mobile app, is getting scarily good.”
Microsoft had tried to keep this internal email hidden, but late Tuesday it was made public as part of the US Justice Department’s antitrust trial over Google’s alleged search monopoly. The email was initially sealed because Microsoft argued that it contained confidential business information, but The New York Times intervened to get it unsealed, arguing that Microsoft’s privacy interests did not outweigh the need for public disclosure.
In an order unsealing the email along with other documents requested by The Times, US District Judge Amit Mehta allowed the redaction of some “sensitive statements in the email concerning Microsoft’s business strategies that weigh against disclosure”—which included essentially all of Scott’s “thoughts on OpenAI.” But other statements “should be disclosed because they shed light on Google’s defense concerning relative investments by Google and Microsoft in search,” Mehta wrote.
At the trial, Google sought to convince Mehta that Microsoft, for example, had failed to significantly invest in mobile early on, giving Google a competitive advantage in mobile search that it still enjoys today. Scott’s email seems to suggest that Microsoft was similarly dragging its feet on investing in AI until Scott’s wakeup call.
Nadella’s response to the email was immediate. He promptly forwarded the email to Microsoft’s chief financial officer, Amy Hood, on the same day that he received it. Scott’s “very good email,” Nadella told Hood, explained “why I want us to do this.” By “this,” Nadella presumably meant exploring investment opportunities in OpenAI.
Officially, Microsoft has said that its OpenAI partnership was formed “to accelerate AI breakthroughs to ensure these benefits are broadly shared with the world”—not to keep up with Google.
But at the Google trial, Nadella testified about the email, saying that partnering with companies like OpenAI ensured that Microsoft could continue innovating in search, as well as in other Microsoft services.
On the stand, Nadella also admitted that he had overhyped AI-powered Bing as potentially shaking up the search market, backing up the DOJ by testifying that in Silicon Valley, Internet search is “the biggest no-fly zone.” Even after partnering with OpenAI, Nadella said that for Microsoft to compete with Google in search, there are “limits to how much artificial intelligence can reshape the market as it exists today.”
During the Google trial, the DOJ argued that Google’s alleged search market dominance had hindered OpenAI’s efforts to innovate, too. “OpenAI’s ChatGPT and other innovations may have been released years ago if Google hadn’t monopolized the search market,” the DOJ argued, according to a Bloomberg report.
Closing arguments in the Google trial start tomorrow, with two days of final remarks scheduled, during which Mehta will have ample opportunity to put his biggest remaining questions to lawyers on both sides.
It’s somewhat obvious what Google will argue. Google has spent years defending its search business as competing on the merits—essentially arguing that Google dominates search simply because it’s the best search engine.
Yesterday, the US district court also unsealed Google’s proposed legal conclusions, which suggest that Mehta should reject all of the DOJ’s monopoly claims, partly due to the government’s allegedly “fatally flawed” market definitions. Throughout the trial, Google has maintained that the US government has failed to show that Google has a monopoly in any market.
According to Google, even its allegedly anticompetitive default browser agreement with Apple—which Mehta deemed the “heart” of the DOJ’s monopoly case—is not proof of monopoly powers. Rather, Google insisted, default browser agreements benefit competition by providing another avenue through which its rivals can compete.
Mehta has not said when he will issue his ruling, but it could come in late summer or early fall, AP News reported.
If Google loses, the search giant may be forced to change its business practices or potentially even break up its business. Nobody knows what that would entail, but when the trial started, a coalition of 20 civil society and advocacy groups recommended some potentially drastic remedies, including the “separation of various Google products from parent company Alphabet, including breakouts of Google Chrome, Android, Waze, or Google’s artificial intelligence lab DeepMind.”
Medical marijuana growing in a facility in Canada.
The US Drug Enforcement Administration is preparing to reclassify marijuana to a lower-risk drug category, a major federal policy change that is in line with recommendations from the US health department last year. The upcoming move was first reported by the Associated Press on Tuesday afternoon and has since been confirmed by several other outlets.
The DEA currently designates marijuana as a Schedule 1 drug, defined as drugs “with no currently accepted medical use and a high potential for abuse.” It puts marijuana in league with LSD and heroin. According to the reports today, the DEA is moving to reclassify it as a Schedule 3 drug, defined as having “a moderate to low potential for physical and psychological dependence.” The move would place marijuana in the ranks of ketamine, testosterone, and products containing less than 90 milligrams of codeine.
Marijuana’s rescheduling would be a nod to its potential medical benefits and would shift federal policy in line with many states. To date, 38 states have already legalized medical marijuana.
In August, the Department of Health and Human Services advised the DEA to move marijuana from Schedule 1 to Schedule 3 based on a review of data by the Food and Drug Administration. The recommendation came after the FDA, in 2018, granted the first approval of a marijuana-derived drug: Epidiolex (cannabidiol), which is approved to treat rare and severe forms of epilepsy. That approval was expected to spur the DEA to downgrade marijuana’s scheduling, though some had expected the change to come sooner. Independent expert advisors for the FDA voted unanimously in favor of approval, convinced by data from three high-quality clinical trials that indicated benefits and a “negligible abuse potential.”
The shift may have a limited effect on consumers in states that have already eased access to marijuana. In addition to the 38 states with medical marijuana access, 24 states have legalized recreational use. But, as a Schedule 3 drug, marijuana would still be regulated by the DEA. The Associated Press notes that the rule change means that roughly 15,000 dispensaries would need to register with the DEA, much like pharmacies, and follow strict reporting requirements.
One area that will clearly benefit from the change is scientific research on marijuana’s effects. Many academic scientists are federally funded and, as such, they must follow federal regulations. Researching a Schedule 1 drug carries extensive restrictions and rules, even for researchers in states where marijuana is legalized. A lower scheduling will allow researchers better access to conduct long-awaited studies.
It’s unclear exactly when the move will be announced and finalized. The DEA must get sign-off from the White House Office of Management and Budget (OMB) before proceeding. A source for NBC News said Attorney General Merrick Garland may submit the rescheduling to the OMB as early as Tuesday afternoon. After that, the DEA will open a public comment period before it can finalize the rule.
The US Department of Justice told several outlets that it “continues to work on this rule. We have no further comment at this time.”
Former Binance CEO Changpeng Zhao arrives at federal court in Seattle for sentencing on Tuesday, April 30, 2024.
Binance founder Changpeng Zhao was sentenced today to four months in prison after pleading guilty to failing to take effective measures against money laundering. The billionaire, who formerly ran the world’s largest cryptocurrency exchange, previously agreed to a plea deal that also required him to pay a $50 million fine.
Forbes estimates Zhao’s net worth at $33 billion. He pleaded guilty to failure to maintain an effective anti-money laundering program.
Zhao’s cooperation with law enforcement was cited by US District Judge Richard Jones as a reason for imposing a significantly lower sentence than was requested by prosecutors, according to The Verge.
“Before handing down the sentence, Jones faulted Zhao for putting growth and profits before complying with US laws,” Reuters wrote. The sentencing hearing was in federal court in Seattle.
Jones was quoted as saying to Zhao that “you had the wherewithal, the finance capabilities, and the people power to make sure that every single regulation had to be complied with, and so you failed at that opportunity.”
US: Zhao willfully violated law
The government’s sentencing recommendation said that “Zhao’s willful violation of US law was no accident or oversight. He made a business decision that violating US law was the best way to attract users, build his company, and line his pockets.”
The US said Zhao bragged that if Binance complied with US law, it would not be “as big as we are today.”
“Despite knowing Binance was required to comply with US law, Zhao chose not to register the company with US regulators; he chose not to comply with fundamental US anti-money-laundering (AML) requirements; he chose not to implement and maintain an effective know-your-customer (KYC) system, which prevented effective transaction monitoring and allowed suspicious and criminal users to transact through Binance,” the US said.
Zhao also “directed Binance employees in a sophisticated scheme to disguise their customers’ locations in an effort to deceive regulators about Binance’s client base,” the US told the court.
Zhao’s sentencing memorandum denied criminal intent. “Generalized knowledge that the Company’s compliance program did not eliminate all risk of criminal activity does not mean that Mr. Zhao knew or intended for any funds to be criminally derived (he manifestly did not),” the filing said.
Zhao traveled to the US from his home in the United Arab Emirates to take responsibility, his legal team’s filing said. “He is a first-time, non-violent offender who committed an offense with no intention to harm anyone. He presents no risk of recidivism. He has appeared in this country voluntarily to accept responsibility,” the plea for lenience said.
On Friday, the US Department of Homeland Security announced the formation of an Artificial Intelligence Safety and Security Board that consists of 22 members pulled from the tech industry, government, academia, and civil rights organizations. But given the nebulous nature of the term “AI,” which can apply to a broad spectrum of computer technology, it’s unclear if this group will even be able to agree on what exactly they are safeguarding us from.
President Biden directed DHS Secretary Alejandro Mayorkas to establish the board, which will meet for the first time in early May and subsequently on a quarterly basis.
The fundamental assumption posed by the board’s existence, and reflected in Biden’s AI executive order from October, is that AI is an inherently risky technology and that American citizens and businesses need to be protected from its misuse. Along those lines, the goal of the group is to help guard against foreign adversaries using AI to disrupt US infrastructure; develop recommendations to ensure the safe adoption of AI tech into transportation, energy, and Internet services; foster cross-sector collaboration between government and businesses; and create a forum where AI leaders can share information on AI security risks with the DHS.
It’s worth noting that the ill-defined nature of the term “Artificial Intelligence” does the new board no favors regarding scope and focus. AI can mean many different things: It can power a chatbot, fly an airplane, control the ghosts in Pac-Man, regulate the temperature of a nuclear reactor, or play a great game of chess. It can be all those things and more, and since many of those applications of AI work very differently, there’s no guarantee any two people on the board will be thinking about the same type of AI.
This confusion is reflected in the quotes provided by the DHS press release from new board members, some of whom are already talking about different types of AI. While OpenAI, Microsoft, and Anthropic are monetizing generative AI systems like ChatGPT based on large language models (LLMs), Ed Bastian, the CEO of Delta Air Lines, refers to entirely different classes of machine learning when he says, “By driving innovative tools like crew resourcing and turbulence prediction, AI is already making significant contributions to the reliability of our nation’s air travel system.”
So, defining the scope of what AI exactly means—and which applications of AI are new or dangerous—might be one of the key challenges for the new board.
A roundtable of Big Tech CEOs attracts criticism
For the inaugural meeting of the AI Safety and Security Board, the DHS selected a tech industry-heavy group, populated with CEOs of four major AI vendors (Sam Altman of OpenAI, Satya Nadella of Microsoft, Sundar Pichai of Alphabet, and Dario Amodei of Anthropic), CEO Jensen Huang of top AI chipmaker Nvidia, and representatives from other major tech companies like IBM, Adobe, Amazon, Cisco, and AMD. There are also reps from big aerospace and aviation: Northrop Grumman and Delta Air Lines.
Upon reading the announcement, some critics took issue with the board’s composition. On LinkedIn, Timnit Gebru, founder of the Distributed AI Research Institute (DAIR), singled out OpenAI’s presence on the board, writing, “I’ve now seen the full list and it is hilarious. Foxes guarding the hen house is an understatement.”
The Federal Communications Commission today said it fined T-Mobile, AT&T, and Verizon $196 million “for illegally sharing access to customers’ location information without consent and without taking reasonable measures to protect that information against unauthorized disclosure.”
The fines relate to sharing of real-time location data that was revealed in 2018. The FCC proposed the fines in 2020, when the commission had a Republican majority, and finalized them today.
All three major carriers vowed to appeal after the fines were announced today, and all three said they had discontinued the data-sharing programs the fines relate to.
The fines are $80.1 million for T-Mobile, $57.3 million for AT&T, and $46.9 million for Verizon. T-Mobile is also on the hook for a $12.2 million fine issued to Sprint, which was bought by T-Mobile shortly after the penalties were proposed over four years ago.
The FCC Enforcement Bureau investigations of the four carriers found that each carrier sold access to its customers’ location information to “aggregators,” who then resold access to such information to third-party location-based service providers. In doing so, each carrier attempted to offload its obligations to obtain customer consent onto downstream recipients of location information, which in many instances meant that no valid customer consent was obtained. This initial failure was compounded when, after becoming aware that their safeguards were ineffective, the carriers continued to sell access to location information without taking reasonable measures to protect it from unauthorized access.
“Shady actors” got hold of data
The problem first came to light with reports of customer location data “being disclosed by the largest American wireless carriers without customer consent or other legal authorization to a Missouri Sheriff through a ‘location-finding service’ operated by Securus, a provider of communications services to correctional facilities, to track the location of numerous individuals,” the FCC said.
Chairwoman Jessica Rosenworcel said that news reports in 2018 “revealed that the largest wireless carriers in the country were selling our real-time location information to data aggregators, allowing this highly sensitive data to wind up in the hands of bail-bond companies, bounty hunters, and other shady actors. This ugly practice violates the law—specifically Section 222 of the Communications Act, which protects the privacy of consumer data.”
For a time after the 2018 reports, “all four carriers continued to operate their programs without putting in place reasonable safeguards to ensure that the dozens of location-based service providers with access to their customers’ location information were actually obtaining customer consent,” the FCC said.
The three carriers are ready to challenge the fines in court. “This industry-wide third-party aggregator location-based services program was discontinued more than five years ago after we took steps to ensure that critical services like roadside assistance, fraud protection and emergency response would not be disrupted,” T-Mobile said in a statement provided to Ars. “We take our responsibility to keep customer data secure very seriously and have always supported the FCC’s commitment to protecting consumers, but this decision is wrong, and the fine is excessive. We intend to challenge it.”
If you build a gadget that connects to the Internet and sell it in the United Kingdom, you can no longer make the default password “password.” In fact, you’re not supposed to have default passwords at all.
A new version of the 2022 Product Security and Telecommunications Infrastructure (PSTI) Act is now in effect, covering just about everything that a consumer can buy that connects to the web. Under the guidelines, even the tiniest Wi-Fi board must either have a randomized password or else generate a password upon initialization (through a smartphone app or other means). This password can’t be incremental (“password1,” “password54”), and it can’t be “related in an obvious way to public information,” such as MAC addresses or Wi-Fi network names. A device should be sufficiently strong against brute-force access attacks, including credential stuffing, and should have a “simple mechanism” for changing the password.
There’s more, and it’s just as head-noddingly obvious. Software components, where reasonable, “should be securely updateable,” should actually check for updates, and should update either automatically or in a way “simple for the user to apply.” Perhaps most importantly, device owners can report security issues and expect to hear back about how that report is being handled.
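To make the password rules concrete, here is a minimal sketch of the kind of vendor-side check the law implies. The function name, the list of common defaults, and the specific heuristics are all made up for illustration; the actual regulations set requirements, not an algorithm.

```python
import re

# Hypothetical denylist of well-known defaults; a real vendor would use a
# much larger list (e.g., passwords seen in Mirai-style credential stuffing).
COMMON_DEFAULTS = {"password", "admin", "12345678", "letmein"}

def check_device_password(password: str, mac: str = "", ssid: str = "") -> list[str]:
    """Return a list of PSTI-style rule violations for a proposed device password."""
    problems = []
    lowered = password.lower()
    if lowered in COMMON_DEFAULTS:
        problems.append("well-known default password")
    # "Incremental" passwords: a common word plus a trailing counter,
    # e.g. "password1", "password54".
    m = re.fullmatch(r"([a-z]+)\d+", lowered)
    if m and m.group(1) in COMMON_DEFAULTS:
        problems.append("incremental variant of a default password")
    # Obviously related to public information, such as the MAC address
    # or the Wi-Fi network name.
    mac_digits = mac.lower().replace(":", "")
    if mac_digits and mac_digits in lowered.replace(":", ""):
        problems.append("derived from MAC address")
    if ssid and ssid.lower() in lowered:
        problems.append("derived from Wi-Fi network name")
    if len(password) < 8:
        problems.append("too short to resist brute-force guessing")
    return problems
```

A per-device randomized password such as `"x7Kq-92fLm3"` passes every check here, while `"password54"` or a password embedding the device’s own MAC address does not.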
Violations of the new device laws can result in fines up to 10 million pounds (roughly $12.5 million) or 4 percent of related worldwide revenue, whichever is higher.
Besides giving consumers better devices, these regulations are aimed squarely at malware like Mirai, which can conscript devices like routers, cable modems, and DVRs into armies capable of performing distributed denial-of-service attacks (DDoS) on various targets.
As noted by The Record, the European Union’s Cyber Resilience Act has been shaped but not yet passed and enforced, and even if it does pass, it would not take effect until 2027. In the US, there is the Cyber Trust Mark, which would at least give customers the choice of buying decently secured or genially abandoned devices. But the particulars of that label are under debate and seemingly a ways from implementation. At the federal level, a 2020 bill tasked the National Institute of Standards and Technology with applying related standards to connected devices deployed by the feds.
A federal appeals court today reversed a ruling that prevented New York from enforcing a law requiring Internet service providers to sell $15 broadband plans to low-income consumers. The ruling is a loss for six trade groups that represent ISPs, although it isn’t clear right now whether the law will be enforced.
New York’s Affordable Broadband Act (ABA) was blocked in June 2021 by a US District Court judge who ruled that the state law is rate regulation and preempted by federal law. Today, the US Court of Appeals for the 2nd Circuit reversed the ruling and vacated the permanent injunction that barred enforcement of the state law.
For consumers who qualify for means-tested government benefits, the state law requires ISPs to offer “broadband at no more than $15 per month for service of 25Mbps, or $20 per month for high-speed service of 200Mbps,” the ruling noted. The law allows for price increases every few years and makes exemptions available to ISPs with fewer than 20,000 customers.
“First, the ABA is not field-preempted by the Communications Act of 1934 (as amended by the Telecommunications Act of 1996), because the Act does not establish a framework of rate regulation that is sufficiently comprehensive to imply that Congress intended to exclude the states from entering the field,” a panel of appeals court judges stated in a 2-1 opinion.
Trade groups claimed the state law is preempted by former Federal Communications Commission Chairman Ajit Pai’s repeal of net neutrality rules. Pai’s repeal placed ISPs under the more forgiving Title I regulatory framework instead of the common-carrier framework in Title II of the Communications Act.
2nd Circuit judges did not find this argument convincing:
Second, the ABA is not conflict-preempted by the Federal Communications Commission’s 2018 order classifying broadband as an information service. That order stripped the agency of its authority to regulate the rates charged for broadband Internet, and a federal agency cannot exclude states from regulating in an area where the agency itself lacks regulatory authority. Accordingly, we REVERSE the judgment of the district court and VACATE the permanent injunction.
Be careful what you lobby for
The judges’ reasoning is similar to what a different appeals court said in 2019 when it rejected Pai’s attempt to preempt all state net neutrality laws. In that case, the US Court of Appeals for the District of Columbia Circuit said that “in any area where the Commission lacks the authority to regulate, it equally lacks the power to preempt state law.” In a related case, ISPs were unable to block a California net neutrality law.
Several of the trade groups that sued New York “vociferously lobbied the FCC to classify broadband Internet as a Title I service in order to prevent the FCC from having the authority to regulate them,” today’s 2nd Circuit ruling said. “At that time, Supreme Court precedent was already clear that when a federal agency lacks the power to regulate, it also lacks the power to preempt. The Plaintiffs now ask us to save them from the foreseeable legal consequences of their own strategic decisions. We cannot.”
Judges noted that there are several options for ISPs to try to avoid regulation:
If they believe a requirement to provide Internet to low-income families at a reduced price is unfair or misguided, they have several pathways available to them. They could take it up with the New York State Legislature. They could ask Congress to change the scope of the FCC’s Title I authority under the Communications Act. They could ask the FCC to revisit its classification decision, as it has done several times before. But they cannot ask this Court to distort well-established principles of administrative law and federalism to strike down a state law they do not like.
Coincidentally, the 2nd Circuit issued its opinion one day after current FCC leadership reclassified broadband again in order to restore net neutrality rules. ISPs might now have a better case for preempting the New York law. The FCC itself won’t necessarily try to preempt New York’s law, but the agency’s net neutrality order does specifically reject rate regulation at the federal level.
Spy Pet, a service that sold access to a rich database of allegedly more than 3 billion Discord messages and details on more than 600 million users, has seemingly been shut down.
404 Media, which broke the story of Spy Pet’s offerings, reports that Spy Pet seems mostly shut down. Spy Pet’s website was unavailable as of this writing. A Discord spokesperson told Ars that the company’s safety team had been “diligently investigating” Spy Pet and that it had banned accounts affiliated with it.
“Scraping our services and self-botting are violations of our Terms of Service and Community Guidelines,” the spokesperson wrote. “In addition to banning the affiliated accounts, we are considering appropriate legal action.” The spokesperson noted that Discord server administrators can adjust server permissions to prevent future such monitoring on otherwise public servers.
Kiwi Farms ties, GDPR violations
The number of servers monitored by Spy Pet had been fluctuating in recent days. The site’s administrator told 404 Media’s Joseph Cox that they were rewriting part of the service while admitting that Discord had banned a number of bots. The administrator had also told 404 Media that he did not “intend for my tool to be used for harassment,” despite a likely related user offering Spy Pet data on Kiwi Farms, a notorious hub for doxxing and online harassment campaigns that frequently targets trans and non-binary people, members of the LGBTQ community, and women.
Even if Spy Pet can somehow work past Discord’s bans or survive legal action, the site’s very nature runs against a number of other Internet regulations across the globe. It’s almost certainly in violation of the European Union’s General Data Protection Regulation (GDPR). As pointed out by StackDiary, Spy Pet and services like it seem to violate at least three articles of the GDPR, including the “right to be forgotten” in Article 17.
Ars was unsuccessful in reaching the administrator of Spy Pet by email and Telegram message. Their last message on Telegram stated that their domain had been suspended and a backup domain was being set up. “TL;DR: Never trust the Germans,” they wrote.
TikTok owner ByteDance is preparing to sue the US government now that President Biden has signed into law a bill that will ban TikTok in the US if its Chinese owner doesn’t sell the company within 270 days. While it’s impossible to predict the outcome with certainty, law professors speaking to Ars believe that ByteDance will have a strong First Amendment case in its lawsuit against the US.
One reason for this belief is that just a few months ago, a US District Court judge blocked a Montana state law that attempted to ban TikTok. In October 2020, another federal judge in Pennsylvania blocked a Trump administration order that would have banned TikTok from operating inside the US. TikTok also won a preliminary injunction against Trump in US District Court for the District of Columbia in September 2020.
“Courts have said that a TikTok ban is a First Amendment problem,” Santa Clara University law professor Eric Goldman, who writes frequent analysis of legal cases involving technology, told Ars this week. “And Congress didn’t really try to navigate away from that. They just went ahead and disregarded the court rulings to date.”
The fact that previous attempts to ban TikTok have failed is “pretty good evidence that the government has an uphill battle justifying the ban,” Goldman said.
TikTok users engage in protected speech
The Montana law “bans TikTok outright and, in doing so, it limits constitutionally protected First Amendment speech,” US District Judge Donald Molloy wrote in November 2023 when he granted a preliminary injunction that blocks the state law.
“The Montana court concluded that the First Amendment challenge would be likely to succeed. This will give TikTok some hope that other courts will follow suit with respect to a national order,” Georgetown Law Professor Anupam Chander told Ars.
Molloy’s ruling said that without TikTok, “User Plaintiffs are deprived of communicating by their preferred means of speech, and thus First Amendment scrutiny is appropriate.” TikTok’s speech interests must be considered “because the application’s decisions related to how it selects, curates, and arranges content are also protected by the First Amendment,” the ruling said.
Banning apps that let people talk to each other “is categorically impermissible,” Goldman said. While the Chinese government engaging in propaganda is a problem, “we need to address that as a government propaganda problem, and not just limited to China,” he said. In Goldman’s view, a broader approach should also be used to stop governments from siphoning user data.
TikTok and opponents of bans haven’t won every case. A federal judge in Texas ruled in favor of Texas Governor Greg Abbott in December 2023. But that ruling only concerned a ban on state employees using TikTok on government-issued devices rather than a law that potentially affects all users of TikTok.
Weighing national security vs. First Amendment
US lawmakers have alleged that the Chinese Communist Party can weaponize TikTok to manipulate public opinion and access user data. But Chander was skeptical of whether the US government could convincingly justify its new law in court on national security grounds.
“Thus far, the government has refused to make public its evidence of a national security threat,” he told Ars. “TikTok put in an elaborate set of controls to insulate the app from malign foreign influence, and the government hasn’t shown why those controls are insufficient.”
The ruling against Trump by a federal judge in Pennsylvania noted that “the Government’s own descriptions of the national security threat posed by the TikTok app are phrased in the hypothetical.”
Chander stressed that the outcome of ByteDance’s planned case against the US is difficult to predict, however. “I would vote against the law if I were a judge, but it’s unclear how judges will weigh the alleged national security risks against the real free expression incursions,” he said.
Montana case may be “bellwether”
There are at least three types of potential plaintiffs that could lodge constitutional challenges to a TikTok ban, Goldman said. There’s TikTok itself, the users of TikTok who would no longer be able to post on the platform, and app stores that would be ordered not to carry the TikTok app.
Montana was sued by TikTok and users. Lead plaintiff Samantha Alario runs a local swimwear business and uses TikTok to market her products.
Montana Attorney General Austin Knudsen appealed the ruling against his state to the US Court of Appeals for the 9th Circuit. The Montana case could make it to the Supreme Court before there is any resolution on the enforceability of the US law, Goldman said.
“It’s possible that the Montana ban is actually going to be the bellwether that’s going to set the template for the constitutional review of the Congressional action,” Goldman said.
On Friday, a federal judicial panel convened in Washington, DC, to discuss the challenges of policing AI-generated evidence in court trials, according to a Reuters report. The US Judicial Conference’s Advisory Committee on Evidence Rules, an eight-member panel responsible for drafting evidence-related amendments to the Federal Rules of Evidence, heard from computer scientists and academics about the potential risks of AI being used to manipulate images and videos or create deepfakes that could disrupt a trial.
The meeting took place amid broader efforts by federal and state courts nationwide to address the rise of generative AI models (such as those that power OpenAI’s ChatGPT or Stability AI’s Stable Diffusion), which can be trained on large datasets with the aim of producing realistic text, images, audio, or videos.
In the published 358-page agenda for the meeting, the committee offers up this definition of a deepfake and the problems AI-generated media may pose in legal trials:
A deepfake is an inauthentic audiovisual presentation prepared by software programs using artificial intelligence. Of course, photos and videos have always been subject to forgery, but developments in AI make deepfakes much more difficult to detect. Software for creating deepfakes is already freely available online and fairly easy for anyone to use. As the software’s usability and the videos’ apparent genuineness keep improving over time, it will become harder for computer systems, much less lay jurors, to tell real from fake.
During Friday’s three-hour hearing, the panel wrestled with the question of whether existing rules, which predate the rise of generative AI, are sufficient to ensure the reliability and authenticity of evidence presented in court.
Some judges on the panel, such as US Circuit Judge Richard Sullivan and US District Judge Valerie Caproni, reportedly expressed skepticism about the urgency of the issue, noting that there have been few instances so far of judges being asked to exclude AI-generated evidence.
“I’m not sure that this is the crisis that it’s been painted as, and I’m not sure that judges don’t have the tools already to deal with this,” said Judge Sullivan, as quoted by Reuters.
Last year, US Supreme Court Chief Justice John Roberts acknowledged the potential benefits of AI for litigants and judges, while emphasizing the need for the judiciary to consider its proper uses in litigation. US District Judge Patrick Schiltz, the evidence committee’s chair, said that determining how the judiciary can best react to AI is one of Roberts’ priorities.
In Friday’s meeting, the committee considered several deepfake-related rule changes. In the agenda for the meeting, US District Judge Paul Grimm and attorney Maura Grossman proposed modifying Federal Rule 901(b)(9) (see page 5), which involves authenticating or identifying evidence. They also recommended the addition of a new rule, 901(c), which might read:
901(c): Potentially Fabricated or Altered Electronic Evidence. If a party challenging the authenticity of computer-generated or other electronic evidence demonstrates to the court that it is more likely than not either fabricated, or altered in whole or in part, the evidence is admissible only if the proponent demonstrates that its probative value outweighs its prejudicial effect on the party challenging the evidence.
The panel agreed during the meeting that the proposal, which is intended to address concerns about litigants challenging evidence as deepfakes, did not work as written and that it will be reworked before being reconsidered later.
Another proposal by Andrea Roth, a law professor at the University of California, Berkeley, suggested subjecting machine-generated evidence to the same reliability requirements as expert witnesses. However, Judge Schiltz cautioned that such a rule could hamper prosecutions by allowing defense lawyers to challenge any digital evidence without establishing a reason to question it.
For now, no definitive rule changes have been made, and the process continues. But we’re witnessing the first steps of how the US justice system will adapt to an entirely new class of media-generating technology.
Putting aside risks from AI-generated evidence, generative AI has led to embarrassing moments for lawyers in court over the past two years. In May 2023, US lawyer Steven Schwartz of the firm Levidow, Levidow, & Oberman apologized to a judge for using ChatGPT to help write court filings that inaccurately cited six nonexistent cases, leading to serious questions about the reliability of AI in legal research. Also, in November, a lawyer for Michael Cohen cited three fake cases that were potentially influenced by a confabulating AI assistant.
The US Chamber of Commerce and other business groups sued the Federal Trade Commission and FTC Chair Lina Khan today in an attempt to block a newly issued ban on noncompete clauses.
The lawsuit was filed in US District Court for the Eastern District of Texas. The US Chamber of Commerce was joined in the suit by Business Roundtable, the Texas Association of Business, and the Longview Chamber of Commerce. The suit seeks a court order that would vacate the rule in its entirety.
The lawsuit claimed that noncompete clauses “benefit employers and workers alike—the employer protects its workforce investments and sensitive information, and the worker benefits from increased training, access to more information, and an opportunity to bargain for higher pay.”
“Having invested in their people and entrusted them with valuable company secrets, businesses have strong interests in preventing others from free-riding on those investments or gaining improper access to competitive, confidential information,” the lawsuit said.
Lawsuit filed one day after FTC issued rule
The lawsuit came one day after the FTC issued its rule banning noncompete clauses, determining that such clauses are an unfair method of competition and thus a violation of Section 5 of the FTC Act. The rule is scheduled to take effect in about four months and would render the vast majority of existing noncompetes unenforceable.
The only existing noncompetes that won’t be nullified are those for senior executives, defined as people earning more than $151,164 a year and who are in policymaking positions. Existing noncompetes for all other workers would be nullified, and the FTC is prohibiting employers from imposing any new noncompetes on both senior executives and other workers.
“By invalidating existing noncompete agreements and prohibiting businesses and their workers from ever entering into such agreements going forward, the rule will force businesses all over the country—including in this District—to turn to inadequate and expensive alternatives to protect their confidential information, such as nondisclosure agreements and trade-secret lawsuits,” the Chamber of Commerce said in its complaint.
The Chamber argues that the FTC overstepped its authority. “The Commission’s astounding assertion of power breaks with centuries of state and federal law and rests on novel claims of authority by the Commission. From the Founding forward, States have always regulated noncompete agreements,” the lawsuit said.
The FTC says it can impose the ban using authority under sections 5 and 6(g) of the FTC Act. Section 6(g) authorizes the Commission to “make rules and regulations for the purpose of carrying out the provisions of” the FTC Act, including the Section 5 prohibition on unfair methods of competition, the FTC said in its rule.
FTC: “Our legal authority is crystal clear”
“Our legal authority is crystal clear,” an FTC spokesperson said in a statement provided to Ars today. “In the FTC Act, Congress specifically ‘empowered and directed’ the FTC to prevent ‘unfair methods of competition’ and to ‘make rules and regulations for the purposes of carrying out the provisions of’ the FTC Act. This authority has repeatedly been upheld by courts and reaffirmed by Congress. Addressing noncompetes that curtail Americans’ economic freedom is at the very heart of our mandate, and we look forward to winning in court.”
The Chamber’s lawsuit said “the sheer economic and political significance of a nationwide noncompete ban demonstrates that this is a question for Congress to decide, rather than an agency.” If the FTC’s claim of authority is upheld, it “would reflect a boundless and unconstitutional delegation of legislative power to the Executive Branch,” the lawsuit said.
If the US District Court in Texas grants an injunction blocking the ban, the FTC could challenge the ruling in a federal appeals court.
Separately, a lobby group for cable TV and broadband companies issued a statement opposing the ban on noncompetes. “It is disappointing the FTC is poised to undercut small, independent, and rural broadband providers with a sweeping ban on non-competes,” said America’s Communications Association (formerly the American Cable Association). “This unjustified action will make it more challenging to provide quality service, crush competition and allow large incumbents to raid talent and obtain proprietary information.”
Khan said yesterday that “noncompete clauses keep wages low, suppress new ideas, and rob the American economy of dynamism.” The ban “will ensure Americans have the freedom to pursue a new job, start a new business, or bring a new idea to market,” she said.