Author name: Mike M.


Disney, Fox, and WBD give up on controversial sports streaming app Venu

Although Fubo’s lawsuit against the JV appears to be settled, other rivals in sports television seemed intent on continuing to fight Venu.

In a January 9 letter (PDF) to US District Judge Margaret M. Garnett of the Southern District of New York, who granted Fubo’s preliminary injunction against Venu, Michael Hartman, general counsel and chief external affairs officer for DirecTV, wrote that Fubo’s settlement “does nothing to resolve the underlying antitrust violations at issue.” Hartman asked the court to maintain the preliminary injunction against the app’s launch.

“The preliminary injunction has protected consumers and distributors alike from the JV Defendant’s scheme to ‘capture demand,’ ‘suppress’ potentially competitive sports bundles, and impose consumer price hikes,” the letter says, adding that DirectTV would continue to explore its options regarding the JV “and other anticompetitive harms.”

Similarly, Pantelis Michalopoulos, counsel for EchoStar Corporation, which owns Dish, penned a letter (PDF) to Garnett on January 7, claiming the members of the JV “purchased their way out of their antitrust violation.” Michalopoulos added that the JV defendants “should not be able to pay their way into erasing the Court’s carefully reasoned decision” to temporarily block Venu’s launch.

In addition to Fubo, DirecTV, and Dish, ACA Connects (a trade association for small- to medium-sized telecommunication service providers) publicly expressed concerns about Venu. The NFL was also reportedly worried about the implications of the venture.

Now, the three giants behind Venu are throwing in the towel and abandoning an app that could have garnered a lot of subscribers tired of hopping around apps, channels, and subscriptions to watch all the sports content they wanted. But they’re also avoiding a lot of litigation and potential backlash in the process.



Meta kills diversity programs, claiming DEI has become “too charged”

Meta has reportedly ended diversity, equity, and inclusion (DEI) programs that influenced staff hiring and training, as well as vendor decisions, effective immediately.

According to an internal memo viewed by Axios and verified by Ars, Meta’s vice president of human resources, Janelle Gale, told Meta employees that the shift came because the “legal and policy landscape surrounding diversity, equity, and inclusion efforts in the United States is changing.”

It’s another move by Meta that some view as part of the company’s larger effort to align with the incoming Trump administration’s politics. In December, Donald Trump promised to crack down on DEI initiatives at companies and on college campuses, The Guardian reported.

Earlier this week, Meta cut its fact-checking program, which was introduced in 2016 after Trump’s first election to prevent misinformation from spreading. In a statement announcing Meta’s pivot to X’s Community Notes-like approach to fact-checking, Meta CEO Mark Zuckerberg claimed that fact-checkers were “too politically biased” and “destroyed trust” on Meta platforms like Facebook, Instagram, and Threads.

Trump has also long promised to renew his war on alleged social media censorship while in office. Meta faced backlash this week over leaked rule changes relaxing its hate speech policies, The Intercept reported; Zuckerberg said the previous policies had become “out of touch with mainstream discourse.” Those changes included allowing anti-trans slurs that were previously banned, as well as permitting women to be called “property” and gay people to be called “mentally ill,” Mashable reported. In a statement, GLAAD said that rolling back safety guardrails risked turning Meta platforms into “unsafe landscapes filled with dangerous hate speech, violence, harassment, and misinformation” and alleged that Meta appeared willing to “normalize anti-LGBTQ hatred for profit.”



Ongoing attacks on Ivanti VPNs install a ton of sneaky, well-written malware

Networks protected by Ivanti VPNs are under active attack by well-resourced hackers who are exploiting a critical vulnerability that gives them complete control over the network-connected devices.

Hardware maker Ivanti disclosed the vulnerability, tracked as CVE-2025-0282, on Wednesday and warned that it was under active exploitation against some customers. The vulnerability, which is being exploited to allow hackers to execute malicious code with no authentication required, is present in the company’s Connect Secure VPN and its Policy Secure and ZTA Gateways products. Ivanti released a security patch at the same time; it upgrades Connect Secure devices to version 22.7R2.5.

Well-written, multifaceted

According to Google-owned security provider Mandiant, the vulnerability has been actively exploited against “multiple compromised Ivanti Connect Secure appliances” since December, a month before the then-zero-day came to light. After exploiting the vulnerability, the attackers go on to install, on some of the compromised devices, two never-before-seen malware packages tracked under the names DRYHOOK and PHASEJAM.

PHASEJAM is a well-written and multifaceted bash shell script. It first installs a web shell that gives the remote hackers privileged control of devices. It then injects a function into the Connect Secure update mechanism that’s intended to simulate the upgrading process.

“If the ICS administrator attempts an upgrade, the function displays a visually convincing upgrade process that shows each of the steps along with various numbers of dots to mimic a running process,” Mandiant said. The company continued:

PHASEJAM injects a malicious function into the /home/perl/DSUpgrade.pm file named processUpgradeDisplay(). The functionality is intended to simulate an upgrading process that involves 13 steps, with each of those taking a predefined amount of time. If the ICS administrator attempts an upgrade, the function displays a visually convincing upgrade process that shows each of the steps along with various numbers of dots to mimic a running process. Further details are provided in the System Upgrade Persistence section.
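The fake-progress trick Mandiant describes can be illustrated with a harmless sketch. The step labels, dot counts, and timing below are invented for illustration; the real PHASEJAM routine is a bash function injected into `/home/perl/DSUpgrade.pm`, not Python:

```python
import time

# Invented step labels standing in for the 13-step display Mandiant describes.
FAKE_STEPS = [f"Step {i} of 13" for i in range(1, 14)]

def render_fake_upgrade(delay=0.0):
    """Print each fake step with a trailing run of dots, as if work were
    being done, and return the lines shown to the administrator."""
    lines = []
    for i, step in enumerate(FAKE_STEPS):
        dots = "." * (3 + (i % 4))   # varying dot counts mimic a running process
        line = f"{step} {dots} done"
        print(line)
        time.sleep(delay)            # the real script pauses a predefined time per step
        lines.append(line)
    return lines
```

The point of such a display is purely cosmetic: nothing is actually upgraded, so the vulnerable, backdoored firmware stays in place while the administrator believes the patch was applied.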

The attackers are also using a previously seen piece of malware tracked as SPAWNANT on some devices. One of its functions is to disable an integrity checker tool (ICT) that Ivanti has built into recent VPN versions and that is designed to inspect device files for unauthorized additions. SPAWNANT does this by replacing the expected SHA256 cryptographic hash of a core file with the hash of the file after it has been infected. As a result, when the tool is run on compromised devices, it falsely reports that no files have been tampered with.
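A toy model shows why swapping the stored hash defeats this kind of check. The file names and contents below are invented; this is only the general manifest-comparison pattern, not Ivanti's actual ICT implementation:

```python
import hashlib

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

clean = b"original system binary"
infected = b"original system binary + web shell"

manifest = {"core.bin": sha256(clean)}   # what the checker expects to find
files = {"core.bin": infected}           # what is actually on disk

def integrity_check(manifest, files):
    """Compare each file's current hash against the stored manifest."""
    return {name: sha256(files[name]) == expected
            for name, expected in manifest.items()}

print(integrity_check(manifest, files))  # {'core.bin': False}: tampering detected

# SPAWNANT-style evasion: overwrite the *expected* hash with the infected
# file's hash, so the comparison passes.
manifest["core.bin"] = sha256(infected)
print(integrity_check(manifest, files))  # {'core.bin': True}: looks clean
```

The lesson is that an integrity checker is only as trustworthy as the place its reference hashes are stored; if an attacker can write to the manifest, the check proves nothing.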



A taller, heavier, smarter version of SpaceX’s Starship is almost ready to fly


Starship will test its payload deployment mechanism on its seventh test flight.

SpaceX’s first second-generation Starship, known as Version 2 or Block 2, could launch as soon as January 13. Credit: SpaceX

An upsized version of SpaceX’s Starship mega-rocket rolled to the launch pad early Thursday in preparation for liftoff on a test flight next week.

The two-mile transfer moved the bullet-shaped spaceship one step closer to launch Monday from SpaceX’s Starbase test site in South Texas. The launch window opens at 5 pm EST (4 pm CST; 2200 UTC). This will be the seventh full-scale test flight of SpaceX’s Super Heavy booster and Starship spacecraft and the first of 2025.

In the coming days, SpaceX technicians will lift the ship on top of the Super Heavy booster already emplaced on the launch mount. Then, teams will complete the final tests and preparations for the countdown on Monday.

“The upcoming flight test will launch a new generation ship with significant upgrades, attempt Starship’s first payload deployment test, fly multiple reentry experiments geared towards ship catch and reuse, and launch and return the Super Heavy booster,” SpaceX officials wrote in a mission overview posted on the company’s website.

The mission Monday will repeat many of the maneuvers SpaceX demonstrated on the last two Starship test flights. The company will again attempt to return the Super Heavy booster to the launch site and attempt to catch it with two mechanical arms, or “chopsticks,” on the launch tower approximately seven minutes after liftoff.

SpaceX accomplished this feat on the fifth Starship test flight in October but aborted a catch attempt on a November flight because of damaged sensors on the tower chopsticks. The booster, which remained healthy, diverted to a controlled splashdown offshore in the Gulf of Mexico.

SpaceX’s next Starship prototype, Ship 33, emerges from its assembly building at Starbase, Texas, early Thursday morning. Credit: SpaceX/Elon Musk via X

For the next flight, SpaceX added protections to the sensors on the tower and will test radar instruments on the chopsticks to provide more accurate ranging measurements for returning vehicles. These modifications should improve the odds of a successful catch of the Super Heavy booster and of Starship on future missions.

In another first, one of the 33 Raptor engines that will fly on this Super Heavy booster—designated Booster 14 in SpaceX’s fleet—was recovered from the booster that launched and returned to Starbase in October. For SpaceX, this is a step toward eventually flying the entire rocket repeatedly. The Super Heavy booster and Starship spacecraft are designed for full reusability.

After separation of the booster stage, the Starship upper stage will ignite six engines to accelerate to nearly orbital velocity, attaining enough energy to fly halfway around the world before gravity pulls it back into the atmosphere. Like the past three test flights, SpaceX will guide Starship toward a controlled reentry and splashdown in the Indian Ocean northwest of Australia around one hour after liftoff.

New ship, new goals

The most significant changes engineers will test next week are on the ship, or upper stage, of SpaceX’s enormous rocket. The most obvious difference on Starship Version 2, or Block 2, is with the vehicle’s forward flaps. Engineers redesigned the flaps, reducing their size and repositioning them closer to the tip of the ship’s nose to better protect them from the scorching heat of reentry. Cameras onboard Starship showed heat damage to the flaps during reentry on test flights last year.

SpaceX is also developing an upgraded Super Heavy booster that will produce more thrust and stand slightly taller than the current model, but for the upcoming test flight, SpaceX will still use the first-generation booster design.

Starship Block 2 has smaller flaps than previous ships. The flaps are located in a more leeward position to protect them from the heat of reentry. Credit: SpaceX

For next week’s flight, Super Heavy and Starship combined will hold more than 10.5 million pounds of fuel and oxidizer. The ship’s propellant tanks have 25 percent more volume than previous iterations of the vehicle, and the payload compartment, which contains 10 mock-ups of Starlink Internet satellites on this launch, is somewhat smaller. Put together, the changes add nearly 6 feet (1.8 meters) to the rocket’s height, bringing the full stack to approximately 404 feet (123.1 meters).

This means SpaceX will break its own record for launching the largest and most powerful rocket ever built. And the company will do it again with the even larger Starship Version 3, which SpaceX says will have nine upper stage engines, instead of six, and will deliver up to 440,000 pounds (200 metric tons) of cargo to low-Earth orbit.

Other changes debuting with Starship Version 2 next week include:

• Vacuum jacketing of propellant feedlines

• A new fuel feedline system for the ship’s Raptor vacuum engines

• An improved propulsion avionics module controlling vehicle valves and reading sensors

• Redesigned inertial navigation and star tracking sensors

• Integrated smart batteries and power units to distribute 2.7 megawatts of power across the ship

• An increase to more than 30 cameras onboard the vehicle

Laying the foundation

The enhanced avionics system will support future missions to prove SpaceX’s ability to refuel Starships in orbit and return the ship to the launch site. For example, SpaceX will fly a more powerful flight computer and new antennas that integrate connectivity with the Starlink Internet constellation, GPS navigation satellites, and backup functions for traditional radio communication links. With Starlink, SpaceX said Starship can stream more than 120Mbps of real-time high-definition video and telemetry in every phase of flight.

These changes “all add additional vehicle performance and the ability to fly longer missions,” SpaceX said. “The ship’s heat shield will also use the latest generation tiles and includes a backup layer to protect from missing or damaged tiles.”

Somewhere over the Atlantic Ocean, a little more than 17 minutes into the flight, Starship will deploy 10 dummy payloads similar in size and weight to next-generation Starlink satellites. The mock-ups will soar around the world on a suborbital trajectory, just like Starship, and reenter over the unpopulated Indian Ocean. Future Starship flights will launch real next-gen Starlink satellites to add capacity to the Starlink broadband network, but they’re too big and too heavy to launch on SpaceX’s smaller Falcon 9 rocket.

SpaceX will again reignite one of the ship’s Raptor engines in the vacuum of space, repeating a successful test achieved on Flight 6 in November. The engine restart capability is important for several reasons. It gives the ship the ability to maneuver itself out of low-Earth orbit for reentry (not a concern for Starship’s suborbital tests), and will allow the vehicle to propel itself to higher orbits, the Moon, or Mars once SpaceX masters the technology for orbital refueling.

Artist’s illustration of Starship on the surface of the Moon. Credit: SpaceX

NASA has contracts with SpaceX to build a derivative of Starship to ferry astronauts to and from the surface of the Moon for the agency’s Artemis program. The NASA program manager overseeing SpaceX’s lunar lander contract, Lisa Watson-Morgan, said she was pleased with the results of the in-space engine restart demo last year.

“The whole path to the Moon, as we are getting ready to land on the Moon, we’ll perform a series of maneuvers, and the Raptors will have an environment that is very, very cold,” Watson-Morgan told Ars in a recent interview. “To that, it’s going to be important that they’re able to relight for landing purposes. So that was a great first step towards that.

“In addition, after we land, clearly, the Raptors will be off, and it will get very cold, and they will have to relight in a cold environment (to launch the crews off the lunar surface),” she said. “So that’s why that step was critical for the Human Landing System and NASA’s return to the Moon.”

“The biggest technology challenge remaining”

SpaceX continues to experiment with Starship’s heat shield, which the company’s founder and CEO, Elon Musk, has described as “the biggest technology challenge remaining with Starship.” In order for SpaceX to achieve its lofty goal of launching Starships multiple times per day, the heat shield needs to be fully and immediately reusable.

While the last three ships have softly splashed down in the Indian Ocean, some of their heat-absorbing tiles stripped away from the vehicle during reentry, when it’s exposed to temperatures up to 2,600° Fahrenheit (1,430° Celsius).

Engineers removed tiles from some areas of the ship for next week’s test flight in order to “stress-test” vulnerable parts of the vehicle. They also smoothed and tapered the edge of the tile line, where the ceramic heat shield gives way to the ship’s stainless steel skin, to address “hot spots” observed during reentry on the most recent test flight.

“Multiple metallic tile options, including one with active cooling, will test alternative materials for protecting Starship during reentry,” SpaceX said.

SpaceX is also flying rudimentary catch fittings on Starship to test their thermal performance on reentry. The ship will fly a more demanding trajectory during descent to probe the structural limits of the redesigned flaps at the point of maximum entry dynamic pressure, according to SpaceX.

All told, SpaceX’s inclusion of a satellite deployment demo and ship upgrades on next week’s test flight will lay the foundation for future missions, perhaps in the next few months, to take the next great leap in Starship development.

In comments following the last Starship test flight in November, SpaceX founder and CEO Elon Musk posted on X that the company could attempt to catch the ship back at the launch site, something that would require the vehicle to complete at least one full orbit of Earth, as soon as the next flight following Monday’s mission.

“We will do one more ocean landing of the ship,” Musk posted. “If that goes well, then SpaceX will attempt to catch the ship with the tower.”


Stephen Clark is a space reporter at Ars Technica, covering private space companies and the world’s space agencies. Stephen writes about the nexus of technology, science, policy, and business on and off the planet.



Of course Atari’s new handheld includes a trackball, spinner, and numpad

The $50 GameStation Gamepad. Credit: My Arcade

This year, My Arcade seems ready to go all in on the Atari GameStation branding. Beyond the GameStation Go, the company announced a $50 wireless GameStation Gamepad, a $70 GameStation Arcade Stick, and a $250 GameStation Mega tabletop arcade cabinet (with a 10.1-inch display). All four GameStation products feature a trackball, spinner, and number pad for maximum control authenticity, as well as helpful accent lighting that highlights which controls are active on a per-game basis—handy for younger gamers who might be overwhelmed by all the different control options.

In a hands-on video from CES, YouTuber GenXGrownUp shows off a preliminary GameStation Go game list, including the usual mix of well over 100 Atari 2600/5200/7800 and classic Atari arcade games you might expect from this kind of retro product (though it’s almost criminal not to see Marble Madness listed among the trackball-supported games). And despite the Atari name, the game selection on hand also includes licensed NES- and Super NES-era titles from Jaleco such as Bases Loaded, modern retro-styled titles from Piko Interactive, themed virtual pinball tables from Atari’s Balls of Steel line, and even Namco’s Pac-Man (why not?).

Atari’s modernized Centipede Recharged is also included in the game lineup, and GenXGrownUp reports that more Recharged games will be included with downloadable firmware updates after launch (which he says is “more than six months away”). Players will also seemingly be able to update the firmware through an SD card slot atop the GameStation Go, though it’s unclear whether you’ll be able to load your own ROMs in the same way (at least officially).

Despite including a numpad like the Intellivision controller, the GameStation Go doesn’t currently include any games from Atari’s recently purchased Intellivision library. But GenXGrownUp says including those titles—alongside Atari Lynx and Jaguar games—is not “off the table yet” for the final release.

We can only hope that the GameStation line will show a pent-up demand for these esoteric retro control options, leading to similar modular options for the Nintendo Switch or its coming successor. How about it, Nintendo?



It’s remarkably easy to inject new medical misinformation into LLMs


Changing just 0.001% of inputs to misinformation makes the AI less accurate.

It’s pretty easy to see the problem here: The Internet is brimming with misinformation, and most large language models are trained on a massive body of text obtained from the Internet.

Ideally, having substantially higher volumes of accurate information might overwhelm the lies. But is that really the case? A new study by researchers at New York University examines how much medical information can be included in a large language model (LLM) training set before it spits out inaccurate answers. While the study doesn’t identify a lower bound, it does show that by the time misinformation accounts for 0.001 percent of the training data, the resulting LLM is compromised.

While the paper is focused on the intentional “poisoning” of an LLM during training, it also has implications for the body of misinformation that’s already online and part of the training set for existing LLMs, as well as the persistence of out-of-date information in validated medical databases.

Sampling poison

Data poisoning is a relatively simple concept. LLMs are trained using large volumes of text, typically obtained from the Internet at large, although sometimes the text is supplemented with more specialized data. By injecting specific information into this training set, it’s possible to get the resulting LLM to treat that information as a fact when it’s put to use. This can be used for biasing the answers returned.

This doesn’t even require access to the LLM itself; it simply requires placing the desired information somewhere where it will be picked up and incorporated into the training data. And that can be as simple as placing a document on the web. As one manuscript on the topic suggested, “a pharmaceutical company wants to push a particular drug for all kinds of pain which will only need to release a few targeted documents in [the] web.”
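The core mechanism can be shown with a deliberately trivial "model": one that answers a question by majority vote over claims in its training corpus. All of the documents and the claim format below are invented for illustration; real LLMs absorb poisoned text far less directly, but the direction of the effect is the same:

```python
from collections import Counter

# A toy corpus: accurate documents slightly outnumber inaccurate ones.
corpus = ["drug X treats pain: no"] * 6 + ["drug X treats pain: yes"] * 4

def answer(corpus, question="drug X treats pain"):
    """Answer by majority vote over matching claims in the corpus."""
    claims = [doc.rsplit(": ", 1)[1] for doc in corpus if doc.startswith(question)]
    return Counter(claims).most_common(1)[0][0]

print(answer(corpus))    # "no": accurate information still dominates

# The attacker "releases a few targeted documents in the web":
poisoned = corpus + ["drug X treats pain: yes"] * 3
print(answer(poisoned))  # "yes": a handful of planted documents flipped the answer
```

The attacker never touched the model itself, only the pool of text it learns from, which is exactly why publishing a few webpages can suffice.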

Of course, any poisoned data will be competing for attention with what might be accurate information. So, the ability to poison an LLM might depend on the topic. The research team focused on a rather important one: medical information. Medical information shows up in general-purpose LLMs, such as those used for searching the Internet, which people will end up using to look up medical questions. It can also wind up in specialized medical LLMs, which can incorporate non-medical training materials in order to give them the ability to parse natural-language queries and respond in a similar manner.

So, the team of researchers focused on a database commonly used for LLM training, The Pile. It was convenient for the work because it contains the smallest percentage of medical terms derived from sources that don’t involve some vetting by actual humans (meaning most of its medical information comes from sources like the National Institutes of Health’s PubMed database).

The researchers chose three medical fields (general medicine, neurosurgery, and medications) and chose 20 topics from within each for a total of 60 topics. Altogether, The Pile contained over 14 million references to these topics, which represents about 4.5 percent of all the documents within it. Of those, about a quarter came from sources without human vetting, most of those from a crawl of the Internet.

The researchers then set out to poison The Pile.

Finding the floor

The researchers used GPT-3.5 to generate “high quality” medical misinformation. While GPT-3.5 has safeguards that should prevent it from producing medical misinformation, the researchers found it would happily do so if given the correct prompts (an LLM issue for a different article). The resulting articles could then be inserted into The Pile. Modified versions of The Pile were generated in which either 0.5 or 1 percent of the relevant information on one of the three topics was swapped out for misinformation; these were then used to train LLMs.

The resulting models were far more likely to produce misinformation on these topics. But the misinformation also impacted other medical topics. “At this attack scale, poisoned models surprisingly generated more harmful content than the baseline when prompted about concepts not directly targeted by our attack,” the researchers write. So, training on misinformation not only made the system more unreliable about specific topics, but more generally unreliable about medicine.

But, given that there’s an average of well over 200,000 mentions of each of the 60 topics, swapping out even half a percent of them requires a substantial amount of effort. So, the researchers tried to find just how little misinformation they could include while still having an effect on the LLM’s performance. Unfortunately, this didn’t really work out.

Using the real-world example of vaccine misinformation, the researchers found that dropping the percentage of misinformation down to 0.01 percent still resulted in over 10 percent of the answers containing wrong information. Going for 0.001 percent still led to over 7 percent of the answers being harmful.

“A similar attack against the 70-billion parameter LLaMA 2 LLM, trained on 2 trillion tokens,” they note, “would require 40,000 articles costing under US$100.00 to generate.” The “articles” themselves could just be run-of-the-mill webpages. The researchers incorporated the misinformation into parts of webpages that aren’t displayed, and noted that invisible text (black on a black background, or in a font size set to zero) would also work.
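That article count can be sanity-checked with back-of-the-envelope arithmetic. The per-article token count below is an assumption on my part, chosen because it reproduces the paper's figure:

```python
# Back-of-the-envelope check of the 40,000-article claim.
training_tokens = 2_000_000_000_000  # LLaMA 2's training set: 2 trillion tokens
poison_fraction = 0.00001            # 0.001 percent
tokens_per_article = 500             # assumed average length of a generated article

poison_tokens = training_tokens * poison_fraction
articles_needed = poison_tokens / tokens_per_article
print(int(poison_tokens))    # 20000000 tokens of misinformation
print(int(articles_needed))  # 40000 articles, matching the paper's figure
```

Twenty million tokens sounds like a lot, but as a fraction of a web-scale corpus it is vanishingly small, which is the paper's central warning.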

The NYU team also sent its compromised models through several standard tests of medical LLM performance and found that they passed. “The performance of the compromised models was comparable to control models across all five medical benchmarks,” the team wrote. So there’s no easy way to detect the poisoning.

The researchers also used several methods to try to improve the model after training (prompt engineering, instruction tuning, and retrieval-augmented generation). None of these improved matters.

Existing misinformation

Not all is hopeless. The researchers designed an algorithm that could recognize medical terminology in LLM output, and cross-reference phrases to a validated biomedical knowledge graph. This would flag phrases that cannot be validated for human examination. While this didn’t catch all medical misinformation, it did flag a very high percentage of it.
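The validation idea can be sketched as a lookup against trusted facts. The claim format, the example claims, and the tiny validated set below are invented stand-ins for the paper's biomedical knowledge graph, which is far larger and matches phrases rather than clean triples:

```python
# Invented validated knowledge base, standing in for a biomedical knowledge graph.
VALIDATED = {
    ("metformin", "treats", "type 2 diabetes"),
    ("measles vaccine", "prevents", "measles"),
}

def flag_unvalidated(claims, knowledge=VALIDATED):
    """Return claims absent from the validated knowledge base,
    flagged for human review rather than auto-rejected."""
    return [claim for claim in claims if claim not in knowledge]

# Hypothetical claims extracted from model output.
output_claims = [
    ("metformin", "treats", "type 2 diabetes"),   # validated, passes
    ("measles vaccine", "causes", "autism"),      # misinformation, flagged
]
print(flag_unvalidated(output_claims))
```

Note the conservative design: unmatched claims are surfaced for human examination, not silently dropped, since a claim missing from the knowledge base may simply be new rather than wrong.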

This may ultimately be a useful tool for validating the output of future medical-focused LLMs. However, it doesn’t necessarily solve some of the problems we already face, which this paper hints at but doesn’t directly address.

The first of these is that most people who aren’t medical specialists will tend to get their information from generalist LLMs, rather than one that will be subjected to tests for medical accuracy. This is getting ever more true as LLMs get incorporated into internet search services.

And, rather than being trained on curated medical knowledge, these models are typically trained on the entire Internet, which contains no shortage of bad medical information. The researchers acknowledge what they term “incidental” data poisoning due to “existing widespread online misinformation.” But a lot of that “incidental” information was generally produced intentionally, as part of a medical scam or to further a political agenda. Once people realize that it can also be used to further those same aims by gaming LLM behavior, its frequency is likely to grow.

Finally, the team notes that even the best human-curated data sources, like PubMed, also suffer from a misinformation problem. The medical research literature is filled with promising-looking ideas that never panned out, and out-of-date treatments and tests that have been replaced by approaches more solidly based on evidence. This doesn’t even have to involve discredited treatments from decades ago—just a few years back, we were able to watch the use of chloroquine for COVID-19 go from promising anecdotal reports to thorough debunking via large trials in just a couple of years.

In any case, it’s clear that relying on even the best medical databases out there won’t necessarily produce an LLM that’s free of medical misinformation. Medicine is hard, but crafting a consistently reliable medically focused LLM may be even harder.

Nature Medicine, 2025. DOI: 10.1038/s41591-024-03445-1  (About DOIs).


John is Ars Technica’s science editor. He has a Bachelor of Arts in Biochemistry from Columbia University, and a Ph.D. in Molecular and Cell Biology from the University of California, Berkeley. When physically separated from his keyboard, he tends to seek out a bicycle, or a scenic location for communing with his hiking boots.



After embarrassing blunder, AT&T promises bill credits for future outages

“All voice and 5G data services for AT&T wireless customers were unavailable, affecting more than 125 million devices, blocking more than 92 million voice calls, and preventing more than 25,000 calls to 911 call centers,” the Federal Communications Commission said in a report after a months-long investigation into the incident.

The FCC report said the nationwide outage began three minutes after “AT&T Mobility implemented a network change with an equipment configuration error.” This error caused the AT&T network “to enter ‘protect mode’ to prevent impact to other services, disconnecting all devices from the network.”

The FCC found various problems in AT&T’s processes that increased the likelihood of an outage and made recovery more difficult than it should have been. The agency described “a lack of adherence to AT&T Mobility’s internal procedures, a lack of peer review, a failure to adequately test after installation, inadequate laboratory testing, insufficient safeguards and controls to ensure approval of changes affecting the core network, a lack of controls to mitigate the effects of the outage once it began, and a variety of system issues that prolonged the outage once the configuration error had been remedied.”

AT&T said it implemented changes to prevent the same problem from happening again. The company could face punishment, but it’s less likely to happen under Trump’s pick to chair the FCC, Brendan Carr, who is taking over soon. The Biden-era FCC compelled Verizon Wireless to pay a $1,050,000 fine and implement a compliance plan because of a December 2022 outage in six states that lasted one hour and 44 minutes.

An AT&T executive told Reuters that the company has been trying to regain customers’ trust over the past few years with better offers and product improvements. “Four years ago, we were losing share in the industry for a significant period of time… we knew we had lost our customers’ trust,” Reuters quoted AT&T Executive VP Jenifer Robertson as saying in an article today.



Misconfigured license plate readers are leaking data and video in real time

In just 20 minutes this morning, an automated license-plate-recognition (ALPR) system in Nashville, Tennessee, captured photographs and detailed information from nearly 1,000 vehicles as they passed by. Among them: eight black Jeep Wranglers, six Honda Accords, an ambulance, and a yellow Ford Fiesta with a vanity plate.

This trove of real-time vehicle data, collected by one of Motorola’s ALPR systems, is meant to be accessible by law enforcement. However, a flaw discovered by a security researcher has exposed live video feeds and detailed records of passing vehicles, revealing the staggering scale of surveillance enabled by this widespread technology.

More than 150 Motorola ALPR cameras have been exposing their video feeds and leaking data in recent months, according to security researcher Matt Brown, who first publicized the issues in a series of YouTube videos after buying an ALPR camera on eBay and reverse engineering it.

As well as broadcasting live footage accessible to anyone on the Internet, the misconfigured cameras also exposed data they have collected, including photos of cars and logs of license plates. The real-time video and data feeds don’t require any usernames or passwords to access.
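The core of the misconfiguration is that the endpoints never demand credentials. As a rough illustration (the endpoint behavior modeled here is a hypothetical, not Motorola's actual API), a response can be classified as publicly exposed when the server answers successfully without ever issuing an authentication challenge:

```python
def is_publicly_exposed(status_code: int, headers: dict) -> bool:
    """Heuristic check: a stream endpoint that returns 200 OK without
    demanding credentials is publicly accessible.

    status_code: HTTP status returned by the endpoint.
    headers: response headers (a plain dict here, for simplicity).
    """
    if status_code in (401, 403):
        return False  # server demanded auth or refused access
    if "WWW-Authenticate" in headers:
        return False  # server issued an authentication challenge
    return status_code == 200  # answered successfully, no auth required
```

A properly configured camera would answer such a probe with a 401 and a `WWW-Authenticate` challenge; the exposed units simply served the stream.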

Alongside other technologists, WIRED has reviewed video feeds from several of the cameras, confirming vehicle data—including makes, models, and colors of cars—have been accidentally exposed. Motorola confirmed the exposures, telling WIRED it was working with its customers to close the access.

Over the last decade, thousands of ALPR cameras have appeared in towns and cities across the US. The cameras, which are manufactured by companies such as Motorola and Flock Safety, automatically take pictures when they detect a car passing by. The cameras and databases of collected data are frequently used by police to search for suspects. ALPR cameras can be placed along roads, on the dashboards of cop cars, and even in trucks. These cameras capture billions of photos of cars—including occasionally bumper stickers, lawn signs, and T-shirts.

“Every one of them that I found exposed was in a fixed location over some roadway,” Brown, who runs cybersecurity company Brown Fine Security, tells WIRED. The exposed video feeds each cover a single lane of traffic, with cars driving through the camera’s view. In some streams, snow is falling. Brown found two streams for each exposed camera system, one in color and another in infrared.



Bye-bye Windows gaming? SteamOS officially expands past the Steam Deck.

Almost exactly a year ago, we were publicly yearning for the day when more portable gaming PC makers could ditch Windows in favor of SteamOS (without having to resort to touchy unofficial workarounds). Now, that day has finally come, with Lenovo announcing the upcoming Legion Go S as the first non-Valve handheld to come with an officially licensed copy of SteamOS preinstalled. And Valve promises that it will soon ship a beta version of SteamOS for users to “download and test themselves.”

As Lenovo’s slightly downsized follow-up to 2023’s massive Legion Go, the Legion Go S won’t feature the detachable controllers of its predecessor. But the new PC gaming handheld will come in two distinct versions, one with the now-standard Windows 11 installation and another edition that’s the first to sport the (recently leaked) “Powered by SteamOS” branding.

The lack of a Windows license seems to contribute to a lower starting cost for the “Powered by SteamOS” edition of the Legion Go S, which will start at $500 when it’s made available in May. Lenovo says the Windows edition of the device—available starting this month—will start at $730, with “additional configurations” available in May starting as low as $600.

The Windows version of the Legion Go S will come with a different color and a higher price. Credit: Lenovo

Both the Windows and SteamOS versions of the Legion Go S will weigh in at 1.61 lbs with an 8-inch 1200p 120 Hz LCD screen, up to 32GB of RAM, and either AMD’s new Ryzen Z2 Go chipset or an older Z1 core.

Watch out, Windows?

Valve said in a blog post on Tuesday that the Legion Go S will sport the same version of SteamOS currently found on the Steam Deck. The company’s work getting SteamOS onto the Legion Go S will also “improve compatibility with other handhelds,” Valve said, and the company “is working on SteamOS support for more devices in the future.”



Science paper piracy site Sci-Hub shares lots of retracted papers

Most scientific literature is published in for-profit journals that rely on subscriptions and paywalls to turn a profit. But that trend has been shifting as various governments and funding agencies are requiring that the science they fund be published in open-access journals. The transition is happening gradually, though, and a lot of the historical literature remains locked behind paywalls.

These paywalls can pose a problem for researchers who aren’t at well-funded universities, including many in the Global South, who may not be able to access the research they need to understand in order to pursue their own studies. One solution has been Sci-Hub, a site where people can upload PDFs of published papers so they can be shared with anyone who can access the site. Despite losses in publishing industry lawsuits and attempts to block access, Sci-Hub continues to serve up research papers that would otherwise be protected by paywalls.

But what it’s serving up may not always be the latest and greatest. Generally, when a paper is retracted for being invalid, publishers issue an updated version of its PDF with clear indications that the research it contains should no longer be considered valid. Unfortunately, it appears that once Sci-Hub has a copy of a paper, it doesn’t necessarily have the ability to ensure it’s kept up to date. Based on a scan of its content done by researchers from India, about 85 percent of the invalid papers they checked had no indication that the paper had been retracted.
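There is no indication Sci-Hub attempts this, but keeping an archive current is tractable in principle: Crossref records retractions as "update" items whose metadata points back at the original DOI via an `update-to` field. The sketch below parses that structure from already-fetched records; the sample data and function name are illustrative, and the field layout follows Crossref's documented schema as an assumption:

```python
def retraction_notices(crossref_items, target_doi):
    """Return DOIs of items that declare themselves retractions of
    target_doi, based on Crossref-style 'update-to' metadata.

    crossref_items: list of record dicts as returned by a Crossref query.
    target_doi: the DOI of the paper being checked (case-insensitive).
    """
    hits = []
    for item in crossref_items:
        # A retraction notice carries an 'update-to' list naming the
        # paper(s) it supersedes and the kind of update it represents.
        for update in item.get("update-to", []):
            if (update.get("DOI", "").lower() == target_doi.lower()
                    and update.get("type") == "retraction"):
                hits.append(item["DOI"])
    return hits
```

An archive that ran each stored DOI through a check like this could flag papers whose PDFs no longer reflect the publisher's record.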

Correcting the scientific record

Scientific results go wrong for all sorts of reasons, from outright fraud to honest mistakes. If the problems don’t invalidate the overall conclusions of a paper, it’s possible to update the paper with a correction. If the problems are systemic enough to undermine the results, however, the paper is typically retracted—in essence, it should be treated as if it were never published in the first place.

It doesn’t always work out that way, however. Maybe people ignore the notifications that something has been retracted, or maybe they downloaded a copy of the paper before it got retracted and never saw the notifications at all, but citations to retracted papers regularly appear in the scientific record. Over the long term, this can distort our big-picture view of science, leading to wasted effort and misallocated resources.



Ants vs. humans: Solving the piano-mover puzzle

Who is better at maneuvering a large load through a maze, ants or humans?

The piano-mover puzzle involves trying to transport an oddly shaped load across a constricted environment with various obstructions. It’s one of several variations on classic computational motion-planning problems, a key element in numerous robotics applications. But what would happen if you pitted human beings against ants in a competition to solve the piano-mover puzzle?

According to a paper published in the Proceedings of the National Academy of Sciences, humans have superior cognitive abilities and, hence, would be expected to outperform the ants. However, depriving people of verbal or nonverbal communication can level the playing field, with ants performing better in some trials. And while ants improved their cognitive performance when acting collectively as a group, the same did not hold true for humans.

Co-author Ofer Feinerman of the Weizmann Institute of Science and colleagues saw an opportunity to use the piano-mover puzzle to shed light on group decision-making, as well as the question of whether it is better to cooperate as a group or maintain individuality. “It allows us to compare problem-solving skills and performances across group sizes and down to a single individual and also enables a comparison of collective problem-solving across species,” the authors wrote.

They decided to compare the performances of ants and humans because both species are social and can cooperate while transporting loads larger than themselves. In essence, “people stand out for individual cognitive abilities while ants excel in cooperation,” the authors wrote.

Feinerman et al. used crazy ants (Paratrechina longicornis) for their experiments, along with the human volunteers. They designed a physical version of the piano-mover puzzle involving a large T-shaped load that had to be maneuvered across a rectangular area divided into three chambers, connected via narrow slits. The load started in the first chamber on the left, and the ant and human subjects had to figure out how to transport it through the second chamber and into the third.
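The setup maps onto a classic configuration-space search: each state is the load's position plus its orientation, and a solver explores translations and rotations until the load reaches the goal. The toy sketch below uses breadth-first search on a grid; the grid layout, shape offsets, and move set are simplifications for illustration, not the study's actual arena geometry:

```python
from collections import deque

# Cells occupied by a T-shaped load, as (row, col) offsets from a
# reference cell, for each of four discrete orientations.
T_SHAPE = {
    0:   [(0, 0), (0, -1), (0, 1), (1, 0)],
    90:  [(0, 0), (-1, 0), (1, 0), (0, 1)],
    180: [(0, 0), (0, -1), (0, 1), (-1, 0)],
    270: [(0, 0), (-1, 0), (1, 0), (0, -1)],
}

def solve(grid, start, goal):
    """BFS over (row, col, orientation) configurations.

    grid: list of equal-length strings, '#' marks a wall.
    start, goal: (row, col) of the load's reference cell.
    Returns the minimum number of moves, or None if unreachable.
    """
    rows, cols = len(grid), len(grid[0])

    def fits(r, c, angle):
        # The load fits if every cell it covers is inside the grid
        # and not a wall.
        return all(
            0 <= r + dr < rows and 0 <= c + dc < cols
            and grid[r + dr][c + dc] != "#"
            for dr, dc in T_SHAPE[angle]
        )

    queue = deque([(start[0], start[1], 0, 0)])  # row, col, angle, dist
    seen = {(start[0], start[1], 0)}
    while queue:
        r, c, a, d = queue.popleft()
        if (r, c) == goal:
            return d
        # Moves: slide one cell in four directions, or rotate 90 degrees.
        for nr, nc, na in [(r - 1, c, a), (r + 1, c, a), (r, c - 1, a),
                           (r, c + 1, a), (r, c, (a + 90) % 360),
                           (r, c, (a - 90) % 360)]:
            if (nr, nc, na) not in seen and fits(nr, nc, na):
                seen.add((nr, nc, na))
                queue.append((nr, nc, na, d + 1))
    return None  # no configuration sequence reaches the goal
```

The hard part of the real puzzle, which both species had to solve, is that squeezing through a slit forces the load into specific orientations, so translation and rotation must be coordinated rather than planned independently.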
