Speech


UCLA faculty gets big win in suit against Trump’s university attacks


Government can’t use funding threats to override the First Amendment.

While UCLA has been most prominently targeted by the Trump Administration, the ruling protects the entire UC system. Credit: Myung J. Chun

On Friday, a US District Court issued a preliminary injunction blocking the United States government from halting federal funding at UCLA or any other school in the University of California system. The ruling came in response to a suit filed by groups representing the faculty at these schools challenging the Trump administration’s attempts to force UCLA into a deal that would substantially revise instruction and policy.

The court’s decision lays out how the Trump administration’s attacks on universities follow a standard plan: use accusations of antisemitism to justify an immediate cut to funding, then use the loss of money to compel an agreement that would result in revisions to university instruction and management. The court found this plan deficient on multiple grounds, from violating the legal procedures for cutting funding to an illegal attempt at suppressing the First Amendment rights of faculty.

The result is a reprieve for the entire University of California system, as well as a clear pathway for any universities to fight back against the Trump administration’s attacks on research and education.

First Amendment violations

The judge overseeing this case, Rita Lin, issued separate documents describing the reasoning behind her decision and the restrictions she has placed on the Trump administration. In the first, she lays out the argument that the threats facing the UC system, and most notably UCLA, are part of a scripted campaign deployed against many other universities, one that proceeds through several steps. The Trump administration’s Task Force to Combat Anti-Semitism is central to this effort, which starts with the opening of a civil rights investigation against a university that was the site of anti-Israel protests during the conflict in Gaza.

“Rooting out antisemitism is undisputedly a laudable and important goal,” Judge Lin wrote. But the investigations in many cases take place after those universities have already taken corrective steps, which the Trump administration seemingly never considers. Instead, while the investigations are still ongoing, agencies throughout the federal government cancel funding for research and education meant for that university and announce that there will be no future funding without an agreement.

The final step is a proposed settlement that would include large payments (over $1.2 billion in UCLA’s case) and a set of conditions that alter university governance and instruction. These conditions often have little to no connection with antisemitism.

While all of this was ostensibly meant to combat antisemitism, the plaintiffs in this case presented a huge range of quotes from administration officials, including the head of the Task Force to Combat Anti-Semitism, saying the goal was to suppress certain ideas on campus. “The unrebutted record in this case shows that Defendants have used the threat of investigations and economic sanctions to… coerce the UC to stamp out faculty, staff, and student ‘woke,’ ‘left,’ ‘anti-American,’ ‘anti-Western,’ and ‘Marxist’ speech,” Lin said.

And even before any sort of agreement was reached, there was extensive testimony that people on campus changed their teaching and research to avoid further attention from the administration. “Plaintiffs’ members express fear that researching, teaching, and speaking on disfavored topics will trigger further retaliatory funding cancellations against the UC,” Lin wrote, “and that they will be blamed for the retaliation. They also describe fears that the UC will retaliate against them to avoid further funding cuts or in order to comply with the proposed settlement agreement.”

That’s a problem, given that teaching and research topics are forms of speech, and therefore protected by the First Amendment. “These are classic, predictable First Amendment harms, and exactly what Defendants publicly said that they intended,” Lin concluded.

Beyond speech

But the First Amendment isn’t the only issue here. The Civil Rights Act, most notably Title VI, lays out a procedure for cutting federal funding, including warnings and hearings before any funds are shut off. That level of coercion is also limited to cases where there’s an indication that voluntary compliance won’t work. Any funding cut would need to target the specific programs involved and the money allocated to them. There is nothing in Title VI that enables the sort of financial payments that the government has been demanding (and, in some cases, receiving) from schools.

It’s pretty obvious that none of these procedures are being followed here. And as Lin noted in her ruling, “Defendants conceded at oral argument that, of the billions of dollars of federal university funding suspended across numerous agencies in recent months, not a single agency has followed the procedures required by Title VI and IX.”

She found that the government decided it wasn’t required to follow the Civil Rights Act procedures. (Reading through the decision, it becomes hard to tell where the government offered any defense of its actions at all.)

The decision to ignore all existing procedures, in turn, causes additional problems, including violations of the Tenth Amendment, which limits the actions that the government can take. And it runs afoul of the Administrative Procedure Act, which prohibits the government from taking actions that are “arbitrary and capricious.”

All of this provided Lin with extensive opportunities to determine that the plaintiffs, largely organizations that represent the faculty at University of California schools, are likely to prevail in their suit, and thus are deserving of a preliminary injunction to block the federal government’s actions. But first, she had to deal with a recent Supreme Court precedent holding that cases involving federal money belong in a different court system. She did so by arguing that this case is largely about the First Amendment and federal procedures rather than any sort of contract for federal money; money is being used as a lever here, so the ruling must involve restoring the money to address the free speech issues.

That issue will undoubtedly be picked up on appeal as it makes its way through the courts.

Complete relief

Lin identified a coercive program that is being deployed against many universities and is already suppressing speech throughout the University of California system, including on campuses that haven’t been targeted yet. She is issuing a ruling that targets the program broadly.

“Plaintiffs have shown that Defendants are coercing the [University of California] as a whole, through the Task Force Policy and Funding Cancellation, to stamp out their members’ disfavored speech,” Lin concluded. “Therefore, to afford Plaintiffs complete relief, the entirety of the coercive practice must be enjoined, not just the suspensions that impact Plaintiffs’ members.”

Her ruling indicates that if the federal government decides it wants to cut any grants to any school in the UC system, it has to go through the entire procedure set out in the Civil Rights Act. The government is also prohibited from demanding money from any of these schools as a fine or payment, and it can’t threaten their future funding. The government’s current hold on grants to these schools must also be lifted.

In short, the entire UC system should be protected from any of the ways that the government has been trying to use accusations of antisemitism to suppress ideas that it disfavors. And since those primarily involve federal funding, that has to be restored, and any future threats to it must be blocked.

While this case is likely to face a complicated appeals process, Lin’s ruling makes it extremely clear that all of these cases are exactly what they seemed. Just as members of the administration stated in public multiple times, they decided to target some ideas they disfavored and simply made up a process that would let them do so.

While the strategy worked against a number of prominent universities, its legal vulnerabilities have been there from the start.


John is Ars Technica’s science editor. He has a Bachelor of Arts in Biochemistry from Columbia University, and a Ph.D. in Molecular and Cell Biology from the University of California, Berkeley. When physically separated from his keyboard, he tends to seek out a bicycle, or a scenic location for communing with his hiking boots.



A neural brain implant provides near instantaneous speech


Focusing on sound production instead of word choice makes for a flexible system.

The participant’s implant gets hooked up for testing. Credit: UC Regents

Stephen Hawking, a British physicist and arguably the most famous man suffering from amyotrophic lateral sclerosis (ALS), communicated with the world using a sensor installed in his glasses. That sensor used tiny movements of a single muscle in his cheek to select characters on a screen. Once he typed a full sentence at a rate of roughly one word per minute, the text was synthesized into speech by a DECtalk TC01 synthesizer, which gave him his iconic, robotic voice.

But a lot has changed since Hawking died in 2018. Recent brain-computer-interface (BCI) devices have made it possible to translate neural activity directly into text and even speech. Unfortunately, these systems had significant latency, often limited the user to a predefined vocabulary, and did not handle nuances of spoken language like pitch or prosody. Now, a team of scientists at the University of California, Davis has built a neural prosthesis that can instantly translate brain signals into sounds—phonemes and words. It may be the first real step we have taken toward a fully digital vocal tract.

Text messaging

“Our main goal is creating a flexible speech neuroprosthesis that enables a patient with paralysis to speak as fluently as possible, managing their own cadence, and be more expressive by letting them modulate their intonation,” says Maitreyee Wairagkar, a neuroprosthetics researcher at UC Davis who led the study. Developing a prosthesis ticking all these boxes was an enormous challenge because it meant Wairagkar’s team had to solve nearly all the problems BCI-based communication solutions have faced in the past. And they had quite a lot of problems.

The first issue was moving beyond text. Most successful neural prostheses developed so far have translated brain signals into text: the words a patient with an implanted prosthesis wanted to say simply appeared on a screen. Francis R. Willett led a team at Stanford University that achieved brain-to-text translation with around a 25 percent error rate. “When a woman with ALS was trying to speak, they could decode the words. Three out of four words were correct. That was super exciting but not enough for daily communication,” says Sergey Stavisky, a neuroscientist at UC Davis and a senior author of the study.

Delays and dictionaries

One year after the Stanford work, in 2024, Stavisky’s team published its own research on a brain-to-text system that bumped the accuracy to 97.5 percent. “Almost every word was correct, but communicating over text can be limiting, right?” Stavisky said. “Sometimes you want to use your voice. It allows you to make interjections, it makes it less likely other people interrupt you—you can sing, you can use words that aren’t in the dictionary.” But the most common approach to generating speech relied on synthesizing it from text, which led straight into another problem with BCI systems: very high latency.

In nearly all BCI speech aids, sentences appeared on a screen after a significant delay, long after the patient finished stringing the words together in their mind. The speech synthesis part usually happened after the text was ready, which caused even more delay. Brain-to-text solutions also suffered from a limited vocabulary. The latest system of this kind supported a dictionary of roughly 1,300 words. When you tried to speak a different language, use more elaborate vocabulary, or even say the unusual name of a café just around the corner, the systems failed.

So, Wairagkar designed her prosthesis to translate brain signals into sounds, not words—and do it in real time.

Extracting sound

The patient who agreed to participate in Wairagkar’s study was codenamed T15 and was a 46-year-old man suffering from ALS. “He is severely paralyzed and when he tries to speak, he is very difficult to understand. I’ve known him for several years, and when he speaks, I understand maybe 5 percent of what he’s saying,” says David M. Brandman, a neurosurgeon and co-author of the study. Before working with the UC Davis team, T15 communicated using a gyroscopic head mouse to control a cursor on a computer screen.

To use an early version of Stavisky’s brain-to-text system, the patient had 256 microelectrodes implanted into his ventral precentral gyrus, an area of the brain responsible for controlling vocal tract muscles.

For the new brain-to-speech system, Wairagkar and her colleagues relied on the same 256 electrodes. “We recorded neural activities from single neurons, which is the highest resolution of information we can get from our brain,” Wairagkar says. The signal registered by the electrodes was then sent to an AI algorithm called a neural decoder that deciphered those signals and extracted speech features such as pitch or voicing. In the next step, these features were fed into a vocoder, a speech synthesizing algorithm designed to sound like the voice that T15 had when he was still able to speak normally. The entire system worked with latency down to around 10 milliseconds—the conversion of brain signals into sounds was effectively instantaneous.
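The pipeline described above—binned spike counts from 256 electrodes, a decoder that extracts speech features like pitch and voicing, and a vocoder that turns those features into audio frames every 10 milliseconds—can be sketched in miniature. Everything below is a toy illustration, not the study’s system: the real decoder is a trained neural network and the real vocoder was personalized to T15’s voice, while here the weights are random and the synthesis is a crude tone/noise mix.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical decoder weights: map spike counts from 256 electrodes
# to a small set of speech features (here: voicing, pitch, loudness).
N_ELECTRODES, N_FEATURES = 256, 3
W = rng.standard_normal((N_FEATURES, N_ELECTRODES)) * 0.01

def decode_features(spike_counts):
    """One decoding step: 256 spike counts -> speech-feature vector."""
    return W @ spike_counts

def vocoder(features, n_samples=160):
    """Toy vocoder: turn (voicing, pitch, loudness) into a 10 ms frame
    of audio at 16 kHz (160 samples)."""
    voicing, pitch, loudness = features
    t = np.arange(n_samples) / 16000.0
    f0 = 100.0 + 50.0 * np.tanh(pitch)        # pitch feature sets fundamental
    tone = np.sin(2 * np.pi * f0 * t)         # voiced (periodic) component
    noise = rng.standard_normal(n_samples)    # unvoiced (noise) component
    mix = (1 + np.tanh(voicing)) / 2          # 0 = all noise, 1 = all tone
    return np.tanh(loudness) * (mix * tone + (1 - mix) * noise)

# Every 10 ms: read spike counts, decode features, synthesize one frame.
spikes = rng.poisson(2.0, size=N_ELECTRODES).astype(float)
frame = vocoder(decode_features(spikes))
```

The key design point the sketch captures is that nothing in the loop depends on a dictionary: the decoder emits continuous acoustic features rather than word identities, which is why the real system can produce pseudo-words, interjections, and pitch changes.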

Because Wairagkar’s neural prosthesis converted brain signals into sounds, it didn’t come with a limited selection of supported words. The patient could say anything he wanted, including pseudo-words that weren’t in a dictionary and interjections like “um,” “hmm,” or “uh.” Because the system was sensitive to features like pitch or prosody, he could also vocalize questions saying the last word in a sentence with a slightly higher pitch and even sing a short melody.

But Wairagkar’s prosthesis had its limits.

Intelligibility improvements

To test the prosthesis’s performance, Wairagkar’s team first asked human listeners to match a recording of synthesized speech from patient T15 with one transcript from a set of six candidate sentences of similar length. Here, the results were perfect: the system achieved 100 percent intelligibility.

The issues began when the team tried something harder: an open transcription test where listeners had to work without any candidate transcripts. In this second test, the word error rate was 43.75 percent, meaning listeners identified a bit more than half of the recorded words correctly. That was a clear improvement over T15’s unaided speech, which in the same test with the same group of listeners had a word error rate of 96.43 percent. But the prosthesis, while promising, was not yet reliable enough for day-to-day communication.
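Word error rate, the metric in both of these tests, is the word-level edit distance (insertions, deletions, and substitutions) between what the listener transcribed and the reference sentence, divided by the reference length. A minimal sketch of the standard computation (not the study’s own scoring code):

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = word-level edit distance / number of reference words."""
    ref = reference.split()
    hyp = hypothesis.split()
    # Dynamic-programming edit distance over words (Levenshtein).
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i                      # delete all remaining ref words
    for j in range(len(hyp) + 1):
        d[0][j] = j                      # insert all hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + sub)  # substitution or match
    return d[len(ref)][len(hyp)] / len(ref)

# One substitution plus one deletion against a 4-word reference: WER 0.5.
print(word_error_rate("the quick brown fox", "the quack brown"))  # → 0.5
```

Note that because insertions count as errors, WER can exceed 100 percent, which is why near-total unintelligibility shows up as a figure like 96.43 percent rather than being capped at some lower bound.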

“We’re not at the point where it could be used in open-ended conversations. I think of this as a proof of concept,” Stavisky says. He suggested that one way to improve future designs would be to use more electrodes. “There are a lot of startups right now building BCIs that are going to have over a thousand electrodes. If you think about what we’ve achieved with just 250 electrodes versus what could be done with a thousand or two thousand—I think it would just work,” he argued. And the work to make that happen is already underway.

Paradromics, a BCI-focused startup based in Austin, Texas, wants to go ahead with clinical trials of a speech neural prosthesis and is already seeking FDA approval. “They have a 1,600 electrode system, and they publicly stated they are going to do speech,” Stavisky says. “David Brandman, our co-author, is going to be the lead principal investigator for these trials, and we’re going to do it here at UC Davis.”

Nature, 2025.  DOI: 10.1038/s41586-025-09127-3


Jacek Krywko is a freelance science and technology writer who covers space exploration, artificial intelligence research, computer science, and all sorts of engineering wizardry.
