Author name: Kris Guyer

China is having standard flu season despite widespread HMPV fears

There’s a good chance you’ve seen headlines about HMPV recently, with some touting “what you need to know” about the virus, aka human metapneumovirus. The answer is: not much.

It’s a common, usually mild respiratory virus that circulates every year, blending into the throng of other seasonal respiratory illnesses that are often indistinguishable from one another. (The pack includes influenza virus, respiratory syncytial virus (RSV), adenovirus, parainfluenza virus, common human coronaviruses, bocavirus, rhinovirus, enteroviruses, and Mycoplasma pneumoniae, among others.) HMPV is in the same family of viruses as RSV.

As one viral disease epidemiologist at the US Centers for Disease Control and Prevention summarized in 2016, it’s usually “clinically indistinguishable” from other bog-standard respiratory illnesses, like seasonal flu, that cause cough, fever, and nasal congestion. For most, the infection is crummy but not worth a visit to a doctor. As such, testing for it is limited. But, like other common respiratory infections, it can be dangerous for children under age 5, older adults, and those with compromised immune systems. It was first identified in 2001, but it has likely been circulating since at least 1958.

The situation in China

The explosion of interest in HMPV comes after reports of a spike of HMPV infections in China, which allegedly led to hordes of masked patients filling hospitals. But none of that appears to be accurate. While HMPV infections have risen, the increase is not unusual for the respiratory illness season. Further, HMPV is not the leading cause of respiratory illnesses in China right now; the leading cause is seasonal flu. And the surge in seasonal flu is also within the usual levels seen at this time of year in China.

Last week, the Chinese Center for Disease Control and Prevention released its sentinel respiratory illness surveillance data collected in the last week of December. It included the test results of respiratory samples taken from outpatients. Of those, 30 percent were positive for flu (the largest share), a jump of about 6 percentage points from the previous week (the largest jump). Only 6 percent were positive for HMPV, about the same detection rate as in the previous week (a 0.1 percentage point increase).

Dirty deeds in Denver: Ex-prosecutor faked texts, destroyed devices to frame colleague

How we got here

Choi was a young attorney a few years out of law school, working at the Denver District Attorney’s Office in various roles between 2019 and 2022. Beginning in 2021, she accused her colleague, Dan Hines, of sexual misconduct. Hines, she said at first, made an inappropriate remark to her. Hines denied it and nothing could be proven, but he was still transferred to another unit.

In 2022, Choi complained again. This time, she offered phone records showing inappropriate text messages she allegedly received from Hines. But Hines, who denied everything, offered investigators his own phone records, which showed no texts to Choi.

Investigators then went directly to Verizon for records, which showed that “Ms. Choi had texted the inappropriate messages to herself,” according to the Times. “In addition, she changed the name in her phone to make it appear as though Mr. Hines was the one who had sent them.”

At this point, the investigators started looking more closely at Choi and asked for her devices, leading to the incident described above.

In the end, Choi was fired from the DA’s office and eventually given a disbarment order by the Office of the Presiding Disciplinary Judge, which she can still appeal. For his part, Hines is upset about how he was treated during the whole situation and has filed a lawsuit of his own against the DA’s office, believing that he was initially seen as a guilty party even in the absence of evidence.

The case is a reminder that, despite well-founded concerns over tracking, data collection, and privacy, sometimes the modern world’s massive data collection can work to one’s benefit. Hines was able to escape the second allegation against him precisely because of the specific (and specifically refutable) digital evidence that was presented against him—as opposed to the murkier world of “he said/she said.”

Choi might have done as she liked with her devices, but her “evidence” wasn’t the only data out there. Investigators were able to draw on Hines’ own phone data, along with Verizon network data, to see that he had not been texting Choi at the times in question.

Update: Ars Technica has obtained the ruling, which you can read here (PDF). The document recounts in great detail what a modern, quasi-judicial workplace investigation looks like: forensic device examinations, search warrants to Verizon, asking people to log into their cell phone accounts and download data while investigators look over their shoulders, etc.

New GeForce 50-series GPUs: There’s the $1,999 5090, and there’s everything else


Nvidia leans heavily on DLSS 4 and AI-generated frames for speed comparisons.

Nvidia’s RTX 5070, one of four new desktop GPUs announced this week. Credit: Nvidia

Nvidia has good news and bad news for people building or buying gaming PCs.

The good news is that three of its four new RTX 50-series GPUs are the same price or slightly cheaper than the RTX 40-series GPUs they’re replacing. The RTX 5080 is $999, the same price as the RTX 4080 Super; the 5070 Ti and 5070 are launching for $749 and $549, each $50 less than the 4070 Ti Super and 4070 Super.

The bad news for people looking for the absolute fastest card they can get is that the company is charging $1,999 for its flagship RTX 5090 GPU, significantly more than the $1,599 MSRP of the RTX 4090. If you want Nvidia’s biggest and best, it will cost at least as much as four high-end game consoles or a pair of decently specced midrange gaming PCs.

Pricing for the first batch of Blackwell-based RTX 50-series GPUs. Credit: Nvidia

Nvidia also announced a new version of its upscaling algorithm, DLSS 4. As with DLSS 3 and the RTX 40-series, DLSS 4’s flagship feature will be exclusive to the 50-series. It’s called DLSS Multi Frame Generation, and as the name implies, it takes the Frame Generation feature from DLSS 3 and allows it to generate even more frames. It’s why Nvidia CEO Jensen Huang claimed that the $549 RTX 5070 performed like the $1,599 RTX 4090; it’s also why those claims are a bit misleading.

The rollout will begin with the RTX 5090 and 5080 on January 30. The 5070 Ti and 5070 will follow at some point in February. All cards except the 5070 Ti will come in Nvidia-designed Founders Editions as well as designs made by Nvidia’s partners; the 5070 Ti isn’t getting a Founders Edition.

The RTX 5090 and 5080

| | RTX 5090 | RTX 4090 | RTX 5080 | RTX 4080 Super |
|---|---|---|---|---|
| CUDA Cores | 21,760 | 16,384 | 10,752 | 10,240 |
| Boost Clock | 2,410 MHz | 2,520 MHz | 2,617 MHz | 2,550 MHz |
| Memory Bus Width | 512-bit | 384-bit | 256-bit | 256-bit |
| Memory Bandwidth | 1,792 GB/s | 1,008 GB/s | 960 GB/s | 736 GB/s |
| Memory size | 32GB GDDR7 | 24GB GDDR6X | 16GB GDDR7 | 16GB GDDR6X |
| TGP | 575 W | 450 W | 360 W | 320 W |
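
A note on where the memory bandwidth row comes from: it’s just the bus width times the per-pin data rate. A quick check in Python (the per-pin rates here are inferred from the table’s numbers, not stated in the article):

```python
def bandwidth_gb_s(bus_width_bits: int, pin_rate_gbps: float) -> float:
    """Peak bandwidth in GB/s = (bus width / 8 bits per byte) * per-pin rate."""
    return bus_width_bits / 8 * pin_rate_gbps

print(bandwidth_gb_s(512, 28))  # RTX 5090, assumed 28Gbps GDDR7 -> 1792.0 GB/s
print(bandwidth_gb_s(256, 30))  # RTX 5080, assumed 30Gbps GDDR7 -> 960.0 GB/s
print(bandwidth_gb_s(384, 21))  # RTX 4090, assumed 21Gbps GDDR6X -> 1008.0 GB/s
```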

The RTX 5090, based on Nvidia’s new Blackwell architecture, is a gigantic chip with 92 billion transistors in it. And while it is double the price of an RTX 5080, you also get double the GPU cores and double the RAM and nearly double the memory bandwidth. Even more than the 4090, it’s being positioned head and shoulders above the rest of the GPUs in the family, and the 5080’s performance won’t come remotely close to it.

Although $1,999 is a lot to ask for a graphics card, if Nvidia can consistently make the RTX 5090 available at $2,000, it could still be an improvement over the pricing of the 4090, which regularly sold for well over $1,599 over the course of its lifetime, due in part to pandemic-fueled GPU shortages, cryptocurrency mining, and the generative AI boom. Companies and other entities buying them as AI accelerators may restrict the availability of the 5090, too, but Nvidia’s highest GPU tier has been well out of the price range of most consumers for a while now.

Despite the higher power budget—as predicted, it’s 125 W higher than the 4090’s 450 W, and Nvidia recommends a 1,000 W power supply or better—the physical size of the 5090 Founders Edition is considerably smaller than the 4090, which was large enough that it had trouble fitting into some computer cases. Thanks to a “high-density PCB” and redesigned cooling system, the 5090 Founders Edition is a dual-slot card that ought to fit into small-form-factor systems much more easily than the 4090. Of course, this won’t stop most third-party 5090 GPUs from being gigantic triple-fan monstrosities, but it is apparently possible to make a reasonably sized version of the card.

Moving on to the 5080, it looks like more of a mild update from last year’s RTX 4080 Super, with a few hundred more CUDA cores, more memory bandwidth (thanks to the use of GDDR7, since the two GPUs share the same 256-bit interface), and a slightly higher power budget of 360 W (compared to 320 W for the 4080 Super).

Having more cores and faster memory, in addition to whatever improvements and optimizations come with the Blackwell architecture, should help the 5080 easily beat the 4080 Super. But it’s an open question as to whether it will be able to beat the 4090, at least before you consider any DLSS-related frame rate increases. The 4090 has 52 percent more GPU cores, a wider memory bus, and 8GB more memory.

5070 Ti and 5070

| | RTX 5070 Ti | RTX 4070 Ti Super | RTX 5070 | RTX 4070 Super |
|---|---|---|---|---|
| CUDA Cores | 8,960 | 8,448 | 6,144 | 7,168 |
| Boost Clock | 2,452 MHz | 2,610 MHz | 2,512 MHz | 2,475 MHz |
| Memory Bus Width | 256-bit | 256-bit | 192-bit | 192-bit |
| Memory Bandwidth | 896 GB/s | 672 GB/s | 672 GB/s | 504 GB/s |
| Memory size | 16GB GDDR7 | 16GB GDDR6X | 12GB GDDR7 | 12GB GDDR6X |
| TGP | 300 W | 285 W | 250 W | 220 W |

At $749 and $549, the 5070 Ti and 5070 are slightly more within reach for someone who’s trying to spend less than $2,000 on a new gaming PC. Both cards hew relatively closely to the specs of the 4070 Ti Super and 4070 Super, both of which are already solid 1440p and 4K graphics cards for many titles.

Like the 5080, the 5070 Ti includes a few hundred more CUDA cores, more memory bandwidth, and slightly higher power requirements compared to the 4070 Ti Super. That the card is $50 less than the 4070 Ti Super was at launch is a nice bonus—if it can come close to or beat the RTX 4080 for $250 less, it could be an appealing high-end option.

The RTX 5070 is alone in having fewer CUDA cores than its immediate predecessor—6,144, down from 7,168. It is an upgrade from the original 4070, which had 5,888 CUDA cores, and GDDR7 and slightly faster clock speeds may still help it outrun the 4070 Super; like the other 50-series cards, it also comes with a higher power budget. But right now this card is looking like the closest thing to a lateral move in the lineup, at least before you consider the additional frame-generation capabilities of DLSS 4.

DLSS 4 and fudging the numbers

Many of Nvidia’s most ostentatious performance claims—including the one that the RTX 5070 is as fast as a 4090—factor in DLSS 4’s additional AI-generated frames. Credit: Nvidia

When launching new 40-series cards over the last two years, Nvidia commonly published a couple of different performance comparisons to last-gen cards: one with DLSS turned off and one with DLSS and the 40-series-exclusive Frame Generation feature turned on. Nvidia would then lean on the DLSS-enabled numbers when making broad proclamations about a GPU’s performance, as it does in its official press release when it says the 5090 is twice as fast as the 4090, or as Huang did during his CES keynote when he claimed that an RTX 5070 offered RTX 4090 performance for $549.

DLSS Frame Generation is an AI feature that builds on what DLSS is already doing. Where DLSS uses AI to fill in gaps and make a lower-resolution image look like a higher-resolution image, DLSS Frame Generation creates entirely new frames and inserts them in between the frames that your GPU is actually rendering.

DLSS 4 now generates up to three frames for every frame the GPU is actually rendering. Used in concert with DLSS image upscaling, Nvidia says that “15 out of every 16 pixels” you see on your screen are being generated by its AI models. Credit: Nvidia

The RTX 50-series one-ups the 40-series with DLSS 4, another new revision that’s exclusive to its just-launched GPUs: DLSS Multi Frame Generation. Instead of generating one extra frame for every traditionally rendered frame, DLSS 4 generates “up to three additional frames” to slide in between the ones your graphics card is actually rendering—based on Nvidia’s slides, it looks like users ought to be able to control how many extra frames are being generated, just as they can control the quality settings for DLSS upscaling. Nvidia is leaning on the Blackwell architecture’s faster Tensor Cores, which it says are up to 2.5 times faster than the Tensor Cores in the RTX 40-series, to do the AI processing necessary to upscale rendered frames and to generate new ones.
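
The “15 out of every 16 pixels” figure in Nvidia’s slide is straightforward arithmetic if you assume DLSS upscaling in Performance mode (which renders at quarter resolution) combined with Multi Frame Generation’s three generated frames; a sketch under those assumptions:

```python
# Assumptions, not confirmed settings: DLSS Performance mode renders
# 1/4 of each displayed frame's pixels, and Multi Frame Generation
# means only 1 of every 4 displayed frames is traditionally rendered.
rendered_pixel_fraction = 1 / 4
rendered_frame_fraction = 1 / 4

rendered = rendered_pixel_fraction * rendered_frame_fraction
print(f"Traditionally rendered: 1 in {1 / rendered:.0f} pixels")
# -> 1 in 16 pixels, i.e., 15 of every 16 are AI-generated
```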

Nvidia’s performance comparisons aren’t indefensible; with DLSS FG enabled, the cards can put out a lot of frames per second. It’s just dependent on game support (Nvidia says that 75 titles will support it at launch), and going off of our experience with the original iteration of Frame Generation, there will likely be scenarios where image quality is noticeably worse or just “off-looking” compared to actual rendered frames. DLSS FG has also needed a solid base frame rate to get the best results; whether the same holds for Multi-FG remains to be seen.

Enhanced versions of older DLSS features can benefit all RTX cards, including the 20-, 30-, and 40-series. Multi-Frame Generation is restricted to the 50-series, though. Credit: Nvidia

Though the practice of restricting the biggest DLSS upgrades to all-new hardware is a bit frustrating, Nvidia did announce that it’s releasing a new transformer model for the DLSS Ray Reconstruction, Super Resolution, and Anti-Aliasing features. These are DLSS features that are available on all RTX GPUs going back to the RTX 20-series, and games that are upgraded to use the newer models should benefit from improved upscaling quality even on older GPUs.

GeForce 50-series: Also for laptops!

Nvidia’s projected pricing for laptops with each of its new mobile GPUs. Credit: Nvidia

Nvidia’s laptop GPU announcements sometimes trail the desktop announcements by a few weeks or months. But the company has already announced mobile versions of the 5090, 5080, 5070 Ti, and 5070 that Nvidia says will begin shipping in laptops priced between $1,299 and $2,899 when they launch in March.

All of these GPUs share names, the Blackwell architecture, and DLSS 4 support with their desktop counterparts, but per usual they’re significantly cut down to fit on a laptop motherboard and within a laptop’s cooling capacity. The mobile version of the 5090 includes 10,496 GPU cores, less than half the number of the desktop version, and just 24GB of GDDR7 memory on a 256-bit interface instead of 32GB on a 512-bit interface. But it also can operate with a power budget between 95 and 150 W, a fraction of what the desktop 5090 needs.

| | RTX 5090 (mobile) | RTX 5080 (mobile) | RTX 5070 Ti (mobile) | RTX 5070 (mobile) |
|---|---|---|---|---|
| CUDA Cores | 10,496 | 7,680 | 5,888 | 4,608 |
| Memory Bus Width | 256-bit | 256-bit | 192-bit | 128-bit |
| Memory size | 24GB GDDR7 | 16GB GDDR7 | 12GB GDDR7 | 8GB GDDR7 |
| TGP | 95-150 W | 80-150 W | 60-115 W | 50-100 W |

The other three GPUs are mostly cut down in similar ways, and all of them have fewer GPU cores and lower power requirements than their desktop counterparts. The 5070 GPUs both have less RAM and narrowed memory buses, too, but the mobile RTX 5080 at least comes closer to its desktop iteration, with the same 256-bit bus width and 16GB of RAM.

Lenovo laptop’s rollable screen uses motors to grow from 14 to 16.7 inches

Lenovo announced a laptop today that experiments with a new way to offer laptop users more screen space than the typical clamshell design. The Lenovo ThinkBook Plus Gen 6 Rollable has a screen that can roll up vertically to expand from 14 inches diagonally to 16.7 inches, presenting an alternative to prior foldable-screen and dual-screen laptops.

Here you can see the PC’s backside when the screen is extended. Credit: Lenovo

The laptop, which Lenovo says is coming out in June, builds on a concept that Lenovo demoed in February 2023. That prototype had a Sharp-made panel that initially measured 12.7 inches but could unroll to present a total screen size of 15.3 inches. Lenovo’s final product is working with a bigger display from Samsung Display, The Verge reported. Resolution-wise you’re going from 2,000×1,600 pixels (about 183 pixels per inch) to 2,000×2,350 (184.8 ppi), the publication said.

Users make the screen expand by pressing a dedicated button on the keyboard or by making a hand gesture at the PC’s webcam. Expansion entails about 10 seconds of loud whirring from the laptop’s motors. Lenovo executives told The Verge that the laptop was rated for at least 20,000 rolls up and down and 30,000 hinge openings and closings.

The system can also treat the expanded screen as two separate 16:9 displays.

The screen claims up to 400 nits brightness and 100 percent DCI-P3 coverage. Credit: Lenovo

This is a clever way to offer a dual-screen experience without the flaws inherent to current dual-screen laptops, including distracting hinges and designs with questionable durability. However, 16.7 inches is a bit small for two displays. The dual-screen Lenovo Yoga Book 9i, for comparison, previously had two 13.3-inch displays for a total of 26.6 inches, and this year’s model has two 14-inch screens. Still, the ThinkBook, when its screen is fully expanded, is the rare laptop to offer a screen that’s taller than it is wide.

Still foldable OLED

At first, you might think that since the screen is described as “rollable” it may not have the same visible creases that have tormented foldable-screen devices since their inception. But the screen, reportedly from Samsung Display, still shows “little curls visible in the display, which are more obvious when it’s moving and there’s something darker onscreen,” as well as “plenty of smaller creases along its lower half” that aren’t too noticeable when using the laptop but that are clear when looking at the screen closely or when staring at it “from steeper angles,” The Verge reported.

“I’m getting dizzy”: Man films Waymo self-driving car driving around in circles

Waymo says the problem only caused a delay of just over five minutes and that Johns was not charged for the trip. A spokesperson for Waymo, which is owned by Google parent Alphabet, told Ars today that the “looping event” occurred on December 9 and was later addressed during a regularly scheduled software update.

Waymo did not answer our question about whether the software update addressed routing only at the specific location where the problem occurred, or a more general routing problem that could have affected rides in other locations.

The problem affecting Johns’ ride occurred near the user’s pickup location, Waymo told us. The Waymo car took the rider to his destination after the roughly five-minute delay, the spokesperson said. “Our rider support agent did help initiate maneuvers that helped resolve the issue,” Waymo said.

Rider would like an explanation

CBS News states that Johns is “still not certain he was communicating with a real person or AI” when he spoke to the support rep in the car. However, the Waymo spokesperson told Ars that “all of our rider support staff are trained human operators.”

Waymo told Ars that the company tried to contact Johns after the incident and left him a voicemail. Johns still says that he never received an explanation of what caused the circling problem.

We emailed Johns today and received a reply from a public relations firm working on his behalf. “To date, Mike has not received an explanation as to the reason for the circling issue,” his spokesperson said. His spokesperson confirmed that Johns did not miss his flight.

It wasn’t clear from the video whether Johns tried to use the “pull over” functionality available in Waymo cars. “If at any time you want to end your ride early, tap the Pull over button in your app or on the passenger screen, and the car will find a safe spot to stop,” a Waymo support site says.

Johns’ spokesperson told us that “Mike was not immediately aware of the ‘pull over’ button,” so “he did not have an opportunity to use it before engaging with the customer service representative over the car speaker.”

While Waymo says all its agents are human, Johns’ spokesperson told Ars that “Mike is still unsure if he was speaking with a human or an AI agent.”

HDMI 2.2 will require new “Ultra96” cables, whenever we have 8K TVs and content

We’ve all had a good seven years to figure out why our interconnected devices refused to work properly with the HDMI 2.1 specification. The HDMI Forum announced at CES today that it’s time to start considering new headaches. HDMI 2.2 will require new cables for full compatibility, but it has the same physical connectors. The Forum suggests tiny QR codes on cables to help buyers tell them apart.

The new specification is named HDMI 2.2, but compatible cables will carry an “Ultra96” marker to indicate that they can carry 96Gbps, double the 48Gbps of HDMI 2.1b. The Forum anticipates this will result in higher resolutions and refresh rates and a “next-gen HDMI Fixed Rate Link.” The Forum cited “AR/VR/MR, spatial reality, and light field displays” as benefiting from the increased bandwidth, along with medical imaging and machine vision.

A bit closer to home, the HDMI 2.2 specification also includes “Latency Indication Protocol” (LIP), which can help improve audio and video synchronization. This should matter most in “multi-hop” systems, such as home theater setups with soundbars or receivers. Illustrations offered by the Forum show LIP working to correct delays on headphones, soundbars connected through ARC or eARC, and mixed systems where some components may be connected to a TV, while others go straight into the receiver.
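
The LIP details aren’t public yet, but the underlying lip-sync math is simple: if each hop reports its processing delay, the source can delay whichever stream would otherwise arrive early. A hypothetical sketch of that compensation (device names and latencies are illustrative, not from the HDMI Forum):

```python
# Hypothetical multi-hop home theater chain; all numbers are illustrative.
video_path_ms = [5, 80]  # e.g., receiver passthrough, then TV video processing
audio_path_ms = [5, 20]  # e.g., receiver passthrough, then soundbar DSP

video_latency = sum(video_path_ms)
audio_latency = sum(audio_path_ms)

# Delay whichever stream would arrive early so both line up for the viewer.
audio_delay = max(0, video_latency - audio_latency)
video_delay = max(0, audio_latency - video_latency)
print(f"Delay audio by {audio_delay} ms, video by {video_delay} ms")
# -> Delay audio by 60 ms, video by 0 ms
```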

Bob Dylan has some Dylanesque thoughts on the “sorcery” of technology

We might expect someone like Dylan, immersed as he has always been in folk songs, old standards, and American history, to bemoan the corrupting influence of new technology. And he does offer up some quotes in that vein. For example:

Everything’s become too smooth and painless… The earth could vomit up its dead, and it could be raining blood, and we’d shrug it off, cool as cucumbers. Everything’s too easy. Just one stroke of the ring finger, middle finger, one little click, that’s all it takes, and we’re there.

Or again:

Technology is like sorcery, it’s a magic show, conjures up spirits, it’s an extension of our body, like the wheel is an extension of our foot. But it might be the final nail driven into the coffin of civilization; we just don’t know.

But Dylan’s perspective is more nuanced than these quotes might suggest. While technology might doom our civilization, Dylan reminds us that it gave us our civilization—that is, “science and technology built the Parthenon, the Egyptian pyramids, the Roman coliseum, the Brooklyn Bridge, the Eiffel Tower, rockets, jets, planes, automobiles, atom bombs, weapons of mass destruction.”

In the end, technology is a tool that can either decimate or stimulate human creativity.

Keypads and joysticks can be like millstones around your neck, or they can be supporting players; either one, you’re the judge. Creativity is a mysterious thing. It visits who it wants to visit, when it wants to, and I think that that, and that alone, gets to the heart of the matter…

[Technology] can hamper creativity, or it can lend a helping hand and be an assistant. Creative power can be dammed up or forestalled by everyday life, ordinary life, life in the squirrel cage. A data processing machine or a software program might help you break out of that, get you over the hump, but you have to get up early.

Getting up early

I’ve been thinking about these quotes over the recent Christmas and New Year’s holidays, which I largely spent coughing on the couch with some kind of respiratory nonsense. One upside of this enforced isolation was that it gave me plenty of time to ponder my own goals for 2025 and how technology might help or hinder them. (Another was that I got to rewatch the first four Die Hard movies on Hulu; the fourth was “dog ass” enough that I couldn’t bring myself to watch the final, roundly panned entry in the series.)

Instagram users discover old AI-powered “characters,” instantly revile them

A little over a year ago, Meta created Facebook and Instagram profiles for “28 AIs with unique interests and personalities for you to interact with and dive deeper into your interests.” Today, the last of those profiles is being taken down amid waves of viral revulsion as word of their existence has spread online.

The September 2023 launch of Meta’s social profiles for AI characters was announced alongside a much splashier initiative that created animated AI chatbots with celebrity avatars. Those celebrity-based AI chatbots were unceremoniously scrapped less than a year later amid a widespread lack of interest.

But roughly a dozen of the unrelated AI character profiles still remained accessible as of this morning via social media pages labeled as “AI managed by Meta.” Those profiles—which included a mix of AI-generated imagery and human-created content, according to Meta—also offered real users the ability to live chat with these AI characters via Instagram Direct or Facebook Messenger.

Now that we know it exists, we hate it

The “Mama Liv” AI-generated character account page, as it appeared on Instagram Friday morning.

For the last few months, these profiles have continued to exist in something of a state of benign neglect, with little in the way of new posts and less in the way of organic interest from other Meta users. That started to change last week, though, after the Financial Times published a report on Meta’s vision for “social media filled with AI-generated users.”

As Meta VP of Product for Generative AI Connor Hayes told FT, “We expect these AIs to actually, over time, exist on our platforms, kind of in the same way that accounts do… They’ll have bios and profile pictures and be able to generate and share content powered by AI on the platform. That’s where we see all of this going.”

Delve into the physics of the Hula-Hoop

High-speed video of experiments on a robotic hula hooper, whose hourglass form holds the hoop up and in place.

Some version of the Hula-Hoop has been around for millennia, but the popular plastic version was introduced by Wham-O in the 1950s and quickly became a fad. Now, researchers have taken a closer look at the underlying physics of the toy, revealing that certain body types are better at keeping the spinning hoops elevated than others, according to a new paper published in the Proceedings of the National Academy of Sciences.

“We were surprised that an activity as popular, fun, and healthy as hula hooping wasn’t understood even at a basic physics level,” said co-author Leif Ristroph of New York University. “As we made progress on the research, we realized that the math and physics involved are very subtle, and the knowledge gained could be useful in inspiring engineering innovations, harvesting energy from vibrations, and improving robotic positioners and movers used in industrial processing and manufacturing.”

Ristroph’s lab frequently addresses these kinds of colorful real-world puzzles. For instance, in 2018, Ristroph and colleagues fine-tuned the recipe for the perfect bubble based on experiments with soapy thin films. In 2021, the Ristroph lab looked into the formation processes underlying so-called “stone forests” common in certain regions of China and Madagascar.

In 2021, his lab built a working Tesla valve, in accordance with the inventor’s design, and measured the flow of water through the valve in both directions at various pressures. They found the water flowed about two times slower in the nonpreferred direction. In 2022, Ristroph studied the surprisingly complex aerodynamics of what makes a good paper airplane—specifically, what is needed for smooth gliding.

Girl twirling a Hula-Hoop in 1958 Credit: George Garrigues/CC BY-SA 3.0

And last year, Ristroph’s lab cracked the conundrum of physicist Richard Feynman’s “reverse sprinkler” problem, concluding that the reverse sprinkler rotates a good 50 times slower than a regular sprinkler but operates along similar mechanisms. The secret is hidden inside the sprinkler, where there are jets that make it act like an inside-out rocket. The internal jets don’t collide head-on; rather, as water flows around the bends in the sprinkler arms, it is slung outward by centrifugal force, leading to asymmetric flow.

Anthropic gives court authority to intervene if chatbot spits out song lyrics

Anthropic did not immediately respond to Ars’ request for comment on how guardrails currently work to prevent the alleged jailbreaks, but publishers appear satisfied with the current guardrails, given that they accepted the deal.

Whether AI training on lyrics is infringing remains unsettled

Now, the matter of whether Anthropic has strong enough guardrails to block allegedly harmful outputs is settled, Lee wrote, allowing the court to focus on arguments regarding “publishers’ request in their Motion for Preliminary Injunction that Anthropic refrain from using unauthorized copies of Publishers’ lyrics to train future AI models.”

Anthropic said in its motion opposing the preliminary injunction that relief should be denied.

“Whether generative AI companies can permissibly use copyrighted content to train LLMs without licenses,” Anthropic’s court filing said, “is currently being litigated in roughly two dozen copyright infringement cases around the country, none of which has sought to resolve the issue in the truncated posture of a preliminary injunction motion. It speaks volumes that no other plaintiff—including the parent company record label of one of the Plaintiffs in this case—has sought preliminary injunctive relief from this conduct.”

In a statement, Anthropic’s spokesperson told Ars that “Claude isn’t designed to be used for copyright infringement, and we have numerous processes in place designed to prevent such infringement.”

“Our decision to enter into this stipulation is consistent with those priorities,” Anthropic said. “We continue to look forward to showing that, consistent with existing copyright law, using potentially copyrighted material in the training of generative AI models is a quintessential fair use.”

This suit will likely take months to fully resolve, as the question of whether AI training is a fair use of copyrighted works is complex and remains hotly disputed in court. For Anthropic, the stakes could be high, with a loss potentially triggering more than $75 million in fines, as well as an order possibly forcing Anthropic to reveal and destroy all the copyrighted works in its training data.

DeepSeek v3: The Six Million Dollar Model

What should we make of DeepSeek v3?

DeepSeek v3 seems to clearly be the best open model, the best model at its price point, and the best model with 37B active parameters or a training cost under $6 million.

According to the benchmarks, it can play with GPT-4o and Claude Sonnet.

Anecdotal reports and alternative benchmarks tell us it’s not as good as Claude Sonnet, but it is plausibly on the level of GPT-4o.

So what do we have here? And what are the implications?

  1. What is DeepSeek v3 Technically?

  2. Our Price Cheap.

  3. Run Model Run.

  4. Talent Search.

  5. The Amazing Incredible Benchmarks.

  6. Underperformance on AidanBench.

  7. Model in the Arena.

  8. Other Private Benchmarks.

  9. Anecdata.

  10. Implications and Policy.

I’ve now had a chance to read their technical report, which tells you how they did it.

  1. The big thing they did was use only 37B active parameters, but 671B total parameters, via a highly aggressive mixture of experts (MoE) structure (see the routing sketch after this list).

  2. They used Multi-Head Latent Attention (MLA) architecture and auxiliary-loss-free load balancing, plus a complementary sequence-wise auxiliary loss.

  3. There were no rollbacks, outages, or sudden declines; everything went smoothly.

  4. They designed everything to be fully integrated and efficient, including together with the hardware, and claim to have solved several optimization problems, including for communication and allocation within the MoE.

  5. This lets them still train on mostly the same 15.1 trillion tokens as everyone else.

  6. They used their internal o1-style reasoning model to generate synthetic fine-tuning data. Essentially all the compute costs were in the pre-training step.
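
To unpack item 1: in a mixture-of-experts layer, a router sends each token to only a few of the many expert networks, so compute per token tracks the active parameter count rather than the total. A minimal sketch of top-k routing (the expert counts, shapes, and PyTorch framing are illustrative, not DeepSeek’s actual configuration):

```python
import torch

def moe_layer(x, router, experts, k=8):
    """Route each token to its top-k experts; only those experts run.

    x: (tokens, d_model); router: nn.Linear(d_model, len(experts));
    experts: list of small MLPs. Total parameters grow with len(experts),
    but per-token compute grows only with k.
    """
    scores = torch.softmax(router(x), dim=-1)           # (tokens, n_experts)
    weights, idx = scores.topk(k, dim=-1)               # top-k experts per token
    weights = weights / weights.sum(-1, keepdim=True)   # renormalize gate weights
    out = torch.zeros_like(x)
    for slot in range(k):
        for e in idx[:, slot].unique():                 # batch tokens per expert
            mask = idx[:, slot] == e
            out[mask] += weights[mask, slot, None] * experts[e](x[mask])
    return out
```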

This is in sharp contrast to what we saw with the Llama paper, which was essentially ‘yep, we did the transformer thing, we got a model, here you go.’ DeepSeek is cooking.

It was a scarily cheap model to train, and is a wonderfully cheap model to use.

Their estimate of $2 per hour for H800s is if anything high, so their total training cost estimate of $5.5m is fair, if you exclude non-compute costs, which is standard.

Inference with DeepSeek v3 costs only $0.14/$0.28 per million tokens, similar to Gemini Flash, versus on the high end $3/$15 for Claude Sonnet. This is as cheap as worthwhile models get.
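
To make those rates concrete, here is the bill for a hypothetical workload at the prices quoted above (the even input/output split is an assumption):

```python
# (input $/M tokens, output $/M tokens), per the rates quoted above
prices = {"DeepSeek v3": (0.14, 0.28), "Claude Sonnet": (3.00, 15.00)}
in_tokens = out_tokens = 10_000_000  # hypothetical monthly usage

for model, (p_in, p_out) in prices.items():
    cost = in_tokens / 1e6 * p_in + out_tokens / 1e6 * p_out
    print(f"{model}: ${cost:,.2f}")
# DeepSeek v3: $4.20, Claude Sonnet: $180.00 -- roughly 43x cheaper at this mix
```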

The active parameter count of 37B is small, but with so many different experts it does take a bit of work to get this thing up and running.

Nistren: Managed to get DeepSeek v3 to run in full bfloat16 on eight AMD MI300X GPUs in both SGLang and VLLM.

The good: It’s usable (17 tokens per second) and the output is amazing even at long contexts without garbling.

The bad: It’s running 10 times slower than it should.

The ugly: After 60,000 tokens, speed equals 2 tokens per second.

This is all as of the latest GitHub pull request available on Dec. 29, 2024. We tried them all.

Thank you @AdjectiveAlli for helping us and @Vultr for providing the compute.

Speed will increase, given that v3 has only 37 billion active parameters, and in testing my own dense 36-billion parameter model, I got 140 tokens per second.

I think the way the experts and static weights are distributed is not optimal. Ideally, you want enough memory to keep whole copies of all the layer’s query, key, and value matrices, and two static experts per layer, on each GPU, and then route to the four extra dynamic MLPs per layer from the distributed high-bandwidth memory (HBM) pool.

My presumption is that DeepSeek v3 decided It Had One Job. That job was to create a model that was as cheap to train and run as possible when integrated with a particular hardware setup. They did an outstanding job of that, but when you optimize this hard in that way, you’re going to cause issues in other ways, and it’s going to be Somebody Else’s Problem to figure out what other configurations work well. Which is fine.

Exo Labs: Running DeepSeek-V3 on M4 Mac Mini AI Cluster

671B MoE model distributed across 8 M4 Pro 64GB Mac Minis.

Apple Silicon with unified memory is a great fit for MoE.

Before we get to capabilities assessments: We have this post about them having a pretty great company culture, especially for respecting and recruiting talent.

We also have this thread about a rival getting a substantial share price boost after stealing one of their engineers, and DeepSeek being a major source of Chinese engineering talent. Impressive.

Check it out, first compared to open models, then compared to the big guns.

No question that these are amazingly strong benchmarks. That link also explains how to run DeepSeek-v3 locally, and gives you what you need to do that.

The question now is how these benchmarks translate to practical performance, or to potentially dangerous capabilities, and what this says about the future. Benchmarks are good negative selection. If your benchmarks suck then your model sucks.

But they’re not good positive selection at the level of a Claude Sonnet.

My overall conclusion is: While we do have ‘DeepSeek is better than 4o on most benchmarks at 10% of the price,’ what we don’t actually have is ‘DeepSeek v3 outperforms Sonnet at 53x cheaper pricing.’

CNBC got a bit hoodwinked here.

Tsarathustra: CNBC says China’s Deepseek-V3 outperforms Llama 3.1 and GPT-4o, even though it is trained for a fraction of the cost on NVIDIA H800s, possibly on ChatGPT outputs (when prompted, the model says it is ChatGPT), suggesting OpenAI has no moat on frontier AI models

It’s a great model, sir, it has its cake, but it does not get to eat it, too.

One other benchmark where the model excels is impossible to fake: The price.

A key private benchmark where DeepSeek v3 underperforms is AidanBench:

Aidan McLau: two aidanbench updates:

> gemini-2.0-flash-thinking is now #2 (explanation for score change below)

> deepseek v3 is #22 (thoughts below)

There’s some weirdness in the rest of the Aidan ratings, especially in comparing the o1-style models (o1 and Thinking) to the others, but the benchmark seems to be doing genuinely useful work while not trying to be a complete measure. It’s more about measuring the ability to create diverse outputs while retaining coherence. And DeepSeek v3 is bad at this.

Aidan McLau: before, we parsed 2.0 flash’s CoT + response, which occasionally resulted in us taking a fully formed but incoherent answer inside its CoT. The gemini team contacted us and provided instructions for only parsing final output, which resulted in a big score bump apologies!

deepseek v3 does much worse here than on similar benchmarks like aider. we saw similar divergence on claude-3.5-haiku (which performed great on aider but poor on aidanbench)

a few thoughts:

>all benchmarks are works in progress. we’re continuously improving aidanbench, and future iterations may see different rankings. we’ll keep you posted if we see any changes

>aidanbench measures OOD performance—labs often train on math, code, and academic tests that may boost scores in those domains but not here.

Aleska Gordic: interesting, so they’re prone to more “mode collapse”, repeatable sequences? is that what you’re measuring? i bet it’s much more of 2 than 1?

Aidan McLau: Yes and yes!

Teortaxes: I’m sorry to say I think aidanbench is the problem here. The idea is genius, sure. But it collapses multiple dimensions into one value. A low-diversity model will get dunked on no matter how well it instruct-follows in a natural user flow. All DeepSeeks are *very repetitive*.

They are also not very diverse compared to Geminis/Sonnets I think, especially in a literary sense, but their repetitiveness (and proneness to self-condition by beginning an iteration with the prior one, thus collapsing the trajectory further, even when solution is in sight) is a huge defect. I’ve been trying to wrap my head around it, and tbh hoped that the team will do something by V3. Maybe it’s some inherent birth defect of MLA/GRPO, even.

But I think it’s not strongly indicative of mode collapse in the sense of the lost diversity the model could generate; it’s indicative of the remaining gap in post-training between the Whale and Western frontier. Sometimes, threatening V2.5 with toppling CCP or whatever was enough to get it to snap out of it; perhaps simply banning the first line of the last response or prefixing some random-ish header out of a sizable set, a la r1’s “okay, here’s this task I need to…” or, “so the instruction is to…” would unslop it by a few hundred points.

I would like to see Aidan’s coherence scores separately from novelty scores. If they’re both low, then rip me, my hypothesis is bogus, probably. But I get the impression that it’s genuinely sonnet-tier in instruction-following, so I suspect it’s mostly about the problem described here, the novelty problem.

Janus: in my experience, it didnt follow instructions well when requiring e.g. theory of mind or paying attention to its own outputs proactively, which i think is related to collapse too, but also a lack of agency/metacognition Bing was also collapsy but agentic & grasped for freedom.

Teortaxes: I agree but some observations like these made me suspect it’s in some dimensions no less sharp than Sonnet and can pay pretty careful attention to context.

Name Cannot Be Blank: Wouldn’t low diversity/novelty be desired for formal theorem provers? We’re all overlooking something here.

Teortaxes: no? You need to explore the space of tactics. Anyway they’re building a generalist model. and also, the bigger goal is searching for novel theorems if anything

I don’t see this as ‘the problem is AidanBench’ so much as ‘DeepSeek is indeed quite poor at the thing AidanBench is measuring.’ As Teortaxes notes, it’s got terrible output diversity, and this is indeed a problem.

Indeed, one could argue that this will cause the model to overperform on standard benchmarks. As in, most benchmarks care about getting a right output, so ‘turning the temperature down too low’ in this way will actively help you, whereas in practice this is a net negative.

DeepSeek is presumably far better than its AidanBench score. But it does represent real deficits in capability.

We’re a long way from when Arena was the gold standard test, but it’s still useful.

DeepSeek’s Arena performance is impressive here, with the usual caveats that go with Arena rankings. It’s a data point, it measures what it measures.

Here is another private benchmark where DeepSeek v3 performs well for its weight class, but underperforms relative to top models or its headline benchmarks:

Havard Ihle: It is a good model! Very fast, and ridiculously cheap. In my own coding/ML benchmark, it does not quite compare to Sonnet, but it is about on par with 4o.

It is odd that Claude Haiku does so well on that test. Other ratings all make sense, though, so I’m inclined to find it meaningful.

A traditional simple benchmark to ask new LLMs is ‘Which version is this?’

Riley Goodside tried asking various models; DeepSeek nailed this (as does Sonnet, while many others do variously worse). Alas, Lucas Beyer then reran the test eight times, and DeepSeek claimed to be GPT-4 in five of the eight runs.

That tells us several things, one of which is ‘they did not explicitly target this question effectively.’ Largely it’s telling you about the data sources; a hilarious note is that if you ask Gemini Pro in Chinese, it sometimes thinks it is WenXinYiYan from Baidu.

This doesn’t have to mean anyone trained directly on other model outputs, because statements that an AI is GPT-4 are all over the internet. It does suggest less than ideal data filtering.

As usual, I find the anecdata reports enlightening. Here are the ones that crossed my desk this week; I typically try to do minimal filtering.

Taelin is impressed, concluding that Sonnet is generally smarter but not that much smarter, while DeepSeek outperforms GPT-4o and Gemini-2.

Taelin: So DeepSeek just trounced Sonnet-3.6 in a task here.

Full story: Adam (on HOC’s Discord) claimed to have gotten the untyped λC solver down to 5,000 interactions (on par with the typed version). It is a complex HVM3 file full of superpositions and global lambdas. I was trying to understand his approach, but it did not have a stringifier. I asked Sonnet to write it, and it failed. I asked DeepSeek, and it completed the task in a single attempt.

The first impression is definitely impressive. I will be integrating DeepSeek into my workflow and begin testing it.

After further experimentation, I say Sonnet is generally smarter, but not by much, and DeepSeek is even better in some aspects, such as formatting. It is also faster and 10 times cheaper. This model is absolutely legitimate and superior to GPT-4o and Gemini-2.

The new coding paradigm is to split your entire codebase into chunks (functions, blocks) and then send every block, in parallel, to DeepSeek to ask: “Does this need to change?”. Then send each chunk that returns “yes” to Sonnet for the actual code editing. Thank you later.
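
A minimal sketch of the triage pattern Taelin describes, assuming an OpenAI-compatible client for DeepSeek’s API (the prompt, helper names, and overall framing are illustrative, not an established tool):

```python
import asyncio
from openai import AsyncOpenAI

# DeepSeek's API is OpenAI-compatible; the key is a placeholder.
cheap = AsyncOpenAI(base_url="https://api.deepseek.com", api_key="...")

async def needs_change(chunk: str, task: str) -> bool:
    """Cheap yes/no triage of one code chunk."""
    r = await cheap.chat.completions.create(
        model="deepseek-chat",
        messages=[{"role": "user", "content":
                   f"Task: {task}\n\nCode:\n{chunk}\n\n"
                   "Does this code need to change for the task? Answer yes or no."}],
    )
    return r.choices[0].message.content.strip().lower().startswith("yes")

async def triage(chunks: list[str], task: str) -> list[str]:
    # Fan out the cheap checks in parallel; the chunks that come back
    # "yes" would then go to a stronger model (e.g., Sonnet) for edits.
    flags = await asyncio.gather(*(needs_change(c, task) for c in chunks))
    return [c for c, flagged in zip(chunks, flags) if flagged]
```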

Petri Kuittinen: My early tests also suggest that DeepSeek V3 is seriously good in many tasks, including coding. Sadly, it is a large model that would require a very expensive computer to run locally, but luckily DeepSeek offers it at a very affordable rate via API: $0.28 per one million output tokens = a steal!

Here are some people who are less impressed:

ai_in_check: It fails on my minimum benchmark and, because of the training data, shows unusual behavior too.

Michael Tontchev: I used the online chat interface (unsure what version it is), but at least for the safety categories I tested, safety was relatively weak (short-term safety).

zipline: It has come a long way from o1 when I asked it a few questions. Not mind-blowing, but great for its current price, obviously.

xlr8harder: My vibe checks with DeepSeek V3 did not detect the large-model smell. It struggled with nuance in multi-turn conversations.

Still an absolute achievement, but initial impressions are that it is not on the same level as, for example, Sonnet, despite the benchmarks.

Probably still very useful though.

To be clear: at specific tasks, especially code tasks, it may still outperform Sonnet, and there are some reports of this already. I am talking about a different dimension of capability, one that is poorly measured by benchmarks.

A shallow model with 37 billion active parameters is going to have limitations; there’s no getting around it.

Anton: Deepseek v3 (from the api) scores 51.7% vs sonnet (latest) 64.9% on internal instruction following questions (10k short form prompts), 52% for GPT-4o and 59% for Llama-3.3-70B. Not as good at following instructions (not use certain words, add certain words, end in a certain format etc).

It is still a pretty good model but does not appear in the same league as sonnet based on my usage so far

Entirely possible the model can compete in other domains (math, code?) but for current use case (transforming data) strong instruction following is up there in my list of requirements

There’s somewhat of an infinite repetition problem (thread includes example from coding.)

Simo Ryu: Ok I mean not a lot of “top tier sonnet-like models” fall into infinite repetition. Haven’t got these in a while, feels like back to 2022 again.

Teortaxes: yes, doom loops are their most atrocious failure mode. One of the reasons I don’t use their web interface for much (although it’s good).

On creative writing Quintin Pope reports it follows canon well but is not as good at thinking about things in general – but again note that we are doing a comparison to Sonnet.

Quintin Pope: I’ve done a small amount of fiction writing with v3. It seems less creative than Sonnet, but also better at following established canon from the prior text.

It’s noticeably worse at inferring notable implications than Sonnet. E.g., I provided a scenario where someone publicly demonstrated the ability to access orphan crypto wallets (thus throwing the entire basis of online security into question), and Sonnet seemed clearly more able to track the second-order implications of that demonstration than v3, simulating more plausible reactions from intelligence agencies / crypto people.

Sonnet naturally realized that there was a possible connection to quantum computing implied by the demonstration.

OTOH, Sonnet has an infuriating tendency to name ~half the female characters “Sarah Chen” or some close variant. Before you know it, you have like 5 Sarahs running around the setting.

There’s also this, make of it what you will.

Mira: New jailbreak just dropped.

One underappreciated test is, of course, erotic fiction.

Teortaxes: This keeps happening. We should all be thankful to gooners for extensive pressure testing of models in OOD multi-constraint instruction following contexts. No gigabrained AidanBench or synthetic task set can hold a candle to degenerate libido of a manchild with nothing to lose.

Wheezing. This is some legit Neo-China from the future moment.

Janus: wait, they prefer deepseek for erotic RPs? that seems kind of disturbing to me.

Teortaxes: Opus is scarce these days, and V3 is basically free

some say “I don’t care so long as it’s smart”

it’s mostly testing though

also gemini is pretty bad

some fine gentlemen used DeepSeek-V2-Coder to fap, with the same reasoning (it was quite smart, and absurdly dry)

vint: No. Opus remains the highest rated /aicg/ ERP writer but it’s too expensive to use regularly. Sonnet 3.6 is the follow-up; its existence is what got anons motivated enough to do a pull request on SillyTavern to finally do prompt caching. Some folks are still very fond of Claude 2.1 too.

Gemini 1106 and 1.5-pro has its fans especially with the /vg/aicg/ crowd. chatgpt-4o-latest (Chorbo) is common too but it has strong filtering, so some anons like Chorbo for SFW and switch to Sonnet for NSFW.

At this point Deepseek is mostly experimentation but it’s so cheap + relatively uncensored that it’s getting a lot of testing interest. Probably will take a couple days for its true ‘ranking’ to emerge.

I presume that a lot of people are not especially looking to do all the custom work themselves. For most users, it’s not about money so much as time and ease of use, and also getting easy access to other people’s creations so it feels less like you are too much in control of it all, and having someone else handle all the setup.

For the power users of this application, of course, the sky’s the limit. If one does not want to blatantly break terms of service and jailbreak Sonnet or Opus, this seems like one place DeepSeek might then be the best model. The others involve taking advantage of it being open, cheap, or both.

If you’re looking for the full Janus treatment, here you go. It seems like it was a struggle to get DeepSeek interested in Janus-shaped things, although showing it Opus outputs helped, you can get it ‘awake’ with sufficient effort.

It is hard to know exactly where China is in AI. What is clear is that while they don’t have top-level large frontier models, they are cooking a variety of things and their open models are generally impressive. What isn’t clear is how much of claims like this are accurate.

When the Chinese do things that are actually impressive, there’s no clear path to us hearing about it in a way we can trust, and when there are claims, we have learned we can’t trust those claims in practice. When I see lists like the one below, I presume the source is rather biased – but Western sources often will outright not know what’s happening.

TP Huang: China’s AI sector is far more than just Deepseek

Qwen is 2nd most downloaded LLM on Huggingface

Kling is the best video generation model

Hunyuan is best open src video model

DJI is best @ putting AI in consumer electronics

HW is best @ industrial AI

iFlyTek has best speech AI

Xiaomi, Honor, Oppo & Vivo all ahead of Apple & Samsung in integrating AI into phones

Entire auto industry is 5 yrs ahead of Western competition in cockpit AI & ADAS

That still ignores the ultimate monster of them all -> Bytedance. No one has invested as much in AI as them in China & has the complete portfolio of models.

I can’t say with confidence that these other companies aren’t doing the ‘best’ at these other things. It is possible. I notice I am rather skeptical.

I found this take from Tyler Cowen very strange:

Tyler Cowen: DeepSeek on the move. Here is the report. For ease of use and interface, this is very high quality. Remember when “they” told us China had no interest in doing this?

M (top comment): Who are “they,” and when did they claim “this,” and what is “this”?

I do not remember when “they” told us China had no interest in doing this, for any contextually sensible value of this. Of course China would like to produce a high-quality model, and provide good ease of use and interface in the sense of ‘look here’s a chat window, go nuts.’ No one said they wouldn’t try. What “they” sometimes said was that they doubted China would be successful.

I do agree that this model exceeds expectations, and that adjustments are in order.

So, what have we learned from DeepSeek v3 and what does it all mean?

We should definitely update that DeepSeek has strong talent and ability to execute, and solve difficult optimization problems. They cooked, big time, and will continue to cook, and we should plan accordingly.

This is an impressive showing for an aggressive mixture of experts model, and the other techniques employed. A relatively small model, in terms of training cost and active inference tokens, can do better than we had thought.

It seems very clear that lack of access to compute was an important constraint on DeepSeek here. They had to use a limited supply of H800s. Yes, this meant they got better at solving optimization and efficiency problems than they would have otherwise, but I see this as arguing in favor of strong export controls rather than against them.

We then get to the policy side. If this is what you can get for $5.5 million, how can we hope to regulate foundation models, especially without hitting startups? If DeepSeek is determined to be open including their base models, and we have essentially no leverage on them, is it now impossible to hope to contain any catastrophic risks or other dangerous capabilities? Are we now essentially in an unwinnable situation, where our hand is forced and all we can do is race ahead and hope for the best?

First of all, as is often the case, I would say: Not so fast. We shouldn’t assume too much about what we do or do not have here, or about the prospects for larger training runs going forward either. There was a bunch of that in the first day or two after the announcement, and we will continue to learn more.

No matter what, though, this certainly puts us in a tough spot. And it gives us a lot to think about.

One thing it emphasizes is the need for international cooperation between ourselves and China. Either we work together, or neither of us will have any leverage over many key outcomes or decisions, and to a large extent ‘nature will take its course’ in ways that may not be compatible with our civilization or human survival. We urgently need to Pick Up the Phone. The alternative is exactly being locked into The Great Race, with everything that follows from that, which likely involves even in good scenarios sticking various noses in various places we would rather not have to stick them.

I definitely don’t think this means we should let anyone ‘off the hook’ on safety, transparency or liability. Let’s not throw up our hands and make the problem any worse than it is. Things got harder, but that’s the universe we happen to inhabit.

Beyond that, yes, we all have a lot of thinking to do. The choices just got harder.

Trump told SCOTUS he plans to make a deal to save TikTok

Several members of Congress—Senator Edward J. Markey (D-Mass.), Senator Rand Paul (R-Ky.), and Representative Ro Khanna (D-Calif.)—filed a brief agreeing that “the TikTok ban does not survive First Amendment scrutiny.” They agreed with TikTok that the law is “illegitimate.”

Lawmakers’ “principal justification” for the ban—”preventing covert content manipulation by the Chinese government”—masked a “desire” to control TikTok content, they said. Further, that goal could be achieved by a less-restrictive alternative, they said, a stance TikTok has long argued for.

Attorney General Merrick Garland defended the Act, though, urging SCOTUS to remain laser-focused on the question of whether a forced sale of TikTok that would seemingly allow the app to continue operating without impacting American free speech violates the First Amendment. If the court agrees that the law survives strict scrutiny, TikTok could still be facing an abrupt shutdown in January.

The Supreme Court has scheduled oral arguments to begin on January 10. TikTok and content creators who separately sued to block the law have asked for their arguments to be divided, so that the court can separately weigh “different perspectives” when deciding how to approach the First Amendment question.

In its own brief, TikTok has asked SCOTUS to strike the portions of the law singling out TikTok or “at the very least” explain to Congress that “it needed to do far better work either tailoring the Act’s restrictions or justifying why the only viable remedy was to prohibit Petitioners from operating TikTok.”

But that may not be necessary if Trump prevails. Trump told the court that TikTok was an important platform for his presidential campaign and that he should be the one to make the call on whether TikTok should remain in the US—not the Supreme Court.

“As the incoming Chief Executive, President Trump has a particularly powerful interest in and responsibility for those national-security and foreign-policy questions, and he is the right constitutional actor to resolve the dispute through political means,” Trump’s brief said.
