Author name: Kris Guyer


Elon Musk’s X faces big EU fines as paid checkmarks are ruled deceptive

Blue checkmarks —

Paid “verification” deceives X users and violates Digital Services Act, EU says.

Elon Musk's X account profile displayed on a phone screen

Getty Images | NurPhoto

Elon Musk’s overhaul of the Twitter verification system deceives users and violates the Digital Services Act, the European Commission said today in an announcement of preliminary findings that could lead to a big financial penalty.

The social media platform now called X “designs and operates its interface for the ‘verified accounts’ with the ‘Blue checkmark’ in a way that does not correspond to industry practice and deceives users,” the EU regulator said. “Since anyone can subscribe to obtain such a ‘verified’ status, it negatively affects users’ ability to make free and informed decisions about the authenticity of the accounts and the content they interact with. There is evidence of motivated malicious actors abusing the ‘verified account’ to deceive users.”

Blue checkmarks “used to mean trustworthy sources of information,” Commissioner for Internal Market Thierry Breton said. The EC said it “informed X of its preliminary view that it is in breach of the Digital Services Act (DSA) in areas linked to dark patterns, advertising transparency and data access for researchers.”

X will have an opportunity to respond in writing. If the preliminary finding is upheld, the EC said it would adopt a non-compliance decision that “could entail fines of up to 6 percent of the total worldwide annual turnover of the provider, and order the provider to take measures to address the breach.”

A non-compliance decision may also “trigger an enhanced supervision period to ensure compliance with the measures the provider intends to take to remedy the breach,” and “periodic penalty payments to compel a platform to comply.” X is allowed to “exercise its rights of defense by examining the documents in the Commission’s investigation file and by replying in writing to the Commission’s preliminary findings,” the announcement said.

We contacted X today and will update this article if the company provides a response to the EU findings.

Advertising and data access charges

As for the second alleged violation, the EC said that “X does not comply with the required transparency on advertising, as it does not provide a searchable and reliable advertisement repository, but instead put in place design features and access barriers that make the repository unfit for its transparency purpose towards users. In particular, the design does not allow for the required supervision and research into emerging risks brought about by the distribution of advertising online.”

Thirdly, the commission said it found that “X fails to provide access to its public data to researchers in line with the conditions set out in the DSA. In particular, X prohibits eligible researchers from independently accessing its public data, such as by scraping, as stated in its terms of service. In addition, X’s process to grant eligible researchers access to its application programming interface (API) appears to dissuade researchers from carrying out their research projects or leave them with no other choice than to pay disproportionately high fees.”

In December 2023, the EC announced that Musk’s X platform was subject to the first formal investigation into possible DSA violations. X said at the time that it “remains committed to complying with the Digital Services Act and is cooperating with the regulatory process. It is important that this process remains free of political influence and follows the law.”

With today’s announcement, X is the first company to face preliminary findings of DSA non-compliance.

“The DSA has transparency at its very core, and we are determined to ensure that all platforms, including X, comply with EU legislation,” said EC competition official Margrethe Vestager.


Nearly all AT&T subscribers’ call records stolen in Snowflake cloud hack

AT&T data breach —

Six months of call and text records taken from AT&T workspace on cloud platform.

AT&T logo displayed on a smartphone with a stock exchange index graph in the background.

Getty Images | SOPA Images

AT&T today said a breach on a third-party cloud platform exposed the call and text records of nearly all its cellular customers. The leaked data is said to include phone numbers that AT&T subscribers communicated with, but not names.

An AT&T spokesperson confirmed to Ars that the data was exposed in the recently reported attack on “AI data cloud” provider Snowflake, which also affected Ticketmaster and many other companies. As previously reported, Snowflake was compromised by a group that obtained login credentials through information-stealing malware.

“In April, AT&T learned that customer data was illegally downloaded from our workspace on a third-party cloud platform,” AT&T announced today. AT&T said it is working with law enforcement and “understands that at least one person has been apprehended.”

AT&T said it does not believe the stolen call data has been made publicly available. “The call and text records identify the phone numbers with which an AT&T number interacted during this period, including AT&T landline (home phone) customers. It also included counts of those calls or texts and total call durations for specific days or months,” AT&T said.

Records of “nearly all” AT&T customers

The data does not include the content of calls or text messages, AT&T said.

“Based on our investigation, the compromised data includes files containing AT&T records of calls and texts of nearly all of AT&T’s cellular customers, customers of mobile virtual network operators (MVNOs) using AT&T’s wireless network, as well as AT&T’s landline customers who interacted with those cellular numbers between May 1, 2022 – October 31, 2022. The compromised data also includes records from January 2, 2023, for a very small number of customers,” AT&T said.

The carrier said the breach does not include Social Security numbers, dates of birth, other personally identifiable information, or the time stamps for calls and texts. “While the data does not include customer names, there are often ways, using publicly available online tools, to find the name associated with a specific telephone number,” an AT&T filing with the Securities and Exchange Commission said.

AT&T’s SEC filing said the “records identify the telephone numbers with which an AT&T or MVNO wireless number interacted during these periods, including telephone numbers of AT&T wireline customers and customers of other carriers, counts of those interactions, and aggregate call duration for a day or month. For a subset of records, one or more cell site identification number(s) are also included.”

AT&T said it has “clos[ed] off the point of unlawful access” and is notifying current and former customers of the breach. AT&T’s current and former customers can obtain the data that was compromised, and details on how to make those data requests are available on this page.

FBI and FCC comment

The Federal Bureau of Investigation said AT&T and law enforcement agreed to delay public reporting of the incident when the investigation began in April. The FBI provided this statement to Ars:

Shortly after identifying a potential breach to customer data and before making its materiality decision, AT&T contacted the FBI to report the incident. In assessing the nature of the breach, all parties discussed a potential delay to public reporting under Item 1.05(c) of the SEC Rule, due to potential risks to national security and/or public safety. AT&T, FBI, and DOJ worked collaboratively through the first and second delay process, all while sharing key threat intelligence to bolster FBI investigative equities and to assist AT&T’s incident response work.

The FBI declined to provide any information on the person who was apprehended. The Federal Communications Commission said it has “an ongoing investigation into the AT&T breach and we’re coordinating with our law enforcement partners.”

An AT&T spokesperson told Ars that the Snowflake breach is unrelated to another recent leak involving the data of 73 million current and former subscribers.


Giant salamander species found in what was thought to be an icy ecosystem

Feeding time —

Found after its kind were thought extinct, and where it was thought to be too cold.

A black background with a brown fossil at the center, consisting of the head and a portion of the vertebral column.

C. Marsicano

Gaiasia jennyae, a newly discovered freshwater apex predator with a body length reaching 4.5 meters, lurked in the swamps and lakes around 280 million years ago. Its wide, flattened head had powerful jaws full of huge fangs, ready to capture any prey unlucky enough to swim past.

The problem is, to the best of our knowledge, it shouldn’t have been that large, should have been extinct tens of millions of years before the time it apparently lived, and shouldn’t have been found in northern Namibia. “Gaiasia is the first really good look we have at an entirely different ecosystem we didn’t expect to find,” says Jason Pardo, a postdoctoral fellow at the Field Museum of Natural History in Chicago. Pardo is co-author of a study on the Gaiasia jennyae discovery, recently published in Nature.

Common ancestry

“Tetrapods were the animals that crawled out of the water around 380 million years ago, maybe a little earlier,” Pardo explains. These ancient creatures, also known as stem tetrapods, were the common ancestors of modern reptiles, amphibians, mammals, and birds. “Those animals lived up to what we call the end of the Carboniferous, about 370–300 million years ago. Few made it through, and they lasted longer, but they mostly went extinct around 370 million years ago,” he adds.

This is why the discovery of Gaiasia jennyae in the 280 million-year-old rocks of Namibia was so surprising. Not only was it not extinct when the rocks it was found in were laid down, but it was dominating its ecosystem as an apex predator. By today’s standards, it was like stumbling upon a secluded island hosting animals that should have been dead for 70 million years, like a living, breathing T. rex.

“The skull of gaiasia we have found is about 67 centimeters long. We also have a front end of her upper body. We know she was at minimum 2.5 meters long, probably 3.5, 4.5 meters—big head and a long, salamander-like body,” says Pardo. He told Ars that gaiasia was a suction feeder: she opened her jaws under water, which created a vacuum that sucked her prey right in. But the large, interlocked fangs reveal that a powerful bite was also one of her weapons, probably used to hunt bigger animals. “We suspect gaiasia fed on bony fish, freshwater sharks, and maybe even other, smaller gaiasia,” says Pardo, suggesting it was a rather slow, ambush-based predator.

But considering where it was found, the fact that it had enough prey to ambush is perhaps even more of a shocker than the animal itself.

Location, location, location

“Continents were organized differently 270–280 million years ago,” says Pardo. Back then, one megacontinent called Pangea had already broken into two supercontinents. The northern supercontinent called Laurasia included parts of modern North America, Russia, and China. The southern supercontinent, the home of gaiasia, was called Gondwana, which consisted of today’s India, Africa, South America, Australia, and Antarctica. And Gondwana back then was pretty cold.

“Some researchers hypothesize that the entire continent was covered in glacial ice, much like we saw in North America and Europe during the ice ages 10,000 years ago,” says Pardo. “Others claim that it was more patchy—there were those patches where ice was not present,” he adds. Still, 280 million years ago, northern Namibia was around 60 degrees south latitude—roughly where the northernmost reaches of Antarctica are today.

“Historically, we thought tetrapods [of that time] were living much like modern crocodiles. They were cold-blooded, and if you are cold-blooded the only way to get large and maintain activity would be to be in a very hot environment. We believed such animals couldn’t live in colder environments. Gaiasia shows that it is absolutely not the case,” Pardo claims. And that upends much of what we thought we knew about life on Earth in gaiasia’s time.


Arm tweaks AMD’s FSR to bring battery-saving GPU upscaling to phones and tablets

situation: there are 14 competing standards —

Arm “Accuracy Super Resolution” is optimized for power use and integrated GPUs.


An Arm sample image meant to show off its new “Accuracy Super Resolution” upscaling tech.

Arm

Some of the best Arm processors come from companies like Apple and Qualcomm, which license Arm’s processor instruction set but create their own custom or semi-custom CPU designs. But Arm continues to plug away on its own CPU and GPU architectures and related technologies, and the company has announced that it’s getting into the crowded field of graphics upscaling technology.

Arm’s Accuracy Super Resolution (ASR) is a temporal upscaler that is based on AMD’s open source FidelityFX Super Resolution 2, which Arm says allows developers to “benefit from the familiar API and configuration options.” (This AMD presentation from GDC 2023 gets into some of the differences between different kinds of upscalers.)

AMD’s FSR and Nvidia’s DLSS on gaming PCs are mostly sold as a way to boost graphical fidelity—increasing frame rates beyond 60 fps or rendering “4K” images on graphics cards that are too slow to do those things natively, for example. But since Arm devices are still (mostly, for now) phones and tablets, Arm is leaning into the potential power savings that are possible with lower GPU use. A less-busy GPU also runs cooler, reducing the likelihood of thermal throttling; Arm mentions reduced throttling as a benefit of ASR, though it doesn’t say how much of ASR’s performance advantage over FSR is attributable to reduced throttling.
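The power savings come largely from pixel-count arithmetic: rendering internally at a lower resolution and upscaling to the display’s native resolution means the GPU shades far fewer pixels per frame. A quick back-of-envelope sketch (the specific resolutions here are illustrative assumptions, not figures from Arm):

```python
# Pixel-count arithmetic behind upscaling's GPU savings. The internal
# resolution chosen here (1440p upscaled to 4K) is an illustrative
# assumption, not a figure quoted by Arm.

def pixels(width, height):
    """Total pixels the GPU must shade per frame at a given resolution."""
    return width * height

native_4k = pixels(3840, 2160)       # shade every 4K pixel natively
internal_1440p = pixels(2560, 1440)  # render at 1440p, upscale to 4K

ratio = internal_1440p / native_4k
print(f"Pixels shaded vs. native 4K: {ratio:.0%}")  # 44%
```

Less shading work per frame means a cooler, less busy GPU—which is the thermal-throttling benefit Arm is claiming—though the upscaler’s own runtime cost eats into those savings, which is why Arm says it tuned ASR to be cheap to run.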

“Using [ASR] rendered high-quality results at a stable, low temperature,” writes Arm Director for Ecosystem Strategy Peter Hodges. “Rendering at a native resolution inevitably led to undesirable thermal throttling, which in games can ruin the user experience and shorten engagement.”

Why not just use FSR2 without modification? Arm claims that the ASR upscaling tech has been tuned to reduce GPU usage and to run well on devices without a ton of memory bandwidth—think low-power mobile GPUs with integrated graphics rather than desktop-class graphics cards. ASR’s GPU use is as little as one-third of FSR2’s at the same target resolutions and scaling factors. Arm also claims that ASR delivers roughly 20 to 40 percent better frame rates than FSR2 on Arm devices, depending on the settings you’re using.

  • Arm also says that reduced GPU usage when using ASR can lead to lower heat and improved battery life.

    Arm

  • Arm says that ASR runs faster and uses less power than FSR on the same mobile hardware.

    Arm

Arm says it used “a commercial mobile device that features an Arm Immortalis-G720 GPU” for its performance testing and that it worked with MediaTek to corroborate its power consumption numbers “using a Dimensity 9300 handset.”

When the ASR spec is released, it will be up to OS makers and game developers to implement it. Apple will likely stick with its own MetalFX upscaling technology—also derived from AMD’s FSR, for what that’s worth. Microsoft is pushing “Automatic Super Resolution” on Arm devices while also attempting to develop a vendor-agnostic upscaling API in “DirectSR.” Qualcomm announced Snapdragon Game Super Resolution a little over a year ago.

Arm’s upscaler has the benefit of being hardware-agnostic and open source (Arm says it “want[s] to share [ASR] with the developer community under an MIT open-source license”), so other upscalers can benefit from its improvements. Qualcomm’s upscaler, meanwhile, is a simpler spatial upscaler a la AMD’s first-generation FSR algorithm, so Arm’s temporal upscaler could end up producing superior image quality on the same GPUs.

We’re undeniably getting into that one xkcd comic about the proliferation of standards territory here, but it’s at least interesting to see different companies using graphics upscaling technology to solve problems other than “make games look nicer.”

Listing image by Arm


Republicans angry that ISPs receiving US grants must offer low-cost plans

Illustration of ones and zeroes overlaid on a US map.

Getty Images | Matt Anderson Photography

Republican lawmakers are fighting a Biden administration attempt to bring cheap broadband service to low-income people, claiming it is an illegal form of rate regulation. GOP leaders of the House Energy and Commerce Committee announced an investigation into the National Telecommunications and Information Administration (NTIA), which is administering the $42.45 billion Broadband Equity, Access, and Deployment (BEAD) program that was approved by Congress in November 2021.

“States have reported that the NTIA is directing them to set rates and conditioning approval of initial proposals on doing so. This undoubtedly constitutes rate regulation by the NTIA,” states a letter to the NTIA from Committee Chair Cathy McMorris Rodgers (R-Wash.), Subcommittee on Communications and Technology Chair Bob Latta (R-Ohio), and Subcommittee on Oversight and Investigations Chair Morgan Griffith (R-Va.).

As evidence, the letter points to a statement by Virginia that described feedback received from the NTIA. The federal agency told Virginia that “the low-cost option must be established in the Initial proposal as an exact price or formula.”

The Republicans said anecdotal evidence suggests “the NTIA may be evaluating initial proposals counter to Congressional intent and in violation of the law.” They asked the agency for all communications about the grants between NTIA officials and state broadband offices.

The US law that ordered NTIA to distribute the money requires that Internet providers receiving federal funds offer at least one “low-cost broadband service option for eligible subscribers.” But the law also says the NTIA may not “regulate the rates charged for broadband service.”

We’re following the law, agency says

An NTIA spokesperson told Ars that the agency is working to implement the law’s requirement that grant recipients offer an affordable service tier to qualifying low-income households. “We’ve received the letter and will respond through the appropriate channels. NTIA is working to implement BEAD in a manner that is faithful to the statute,” the agency said.

NTIA Administrator Alan Davidson tried to deflect Republican criticism of the low-cost requirements at a hearing in May. He said that requiring a low-cost option, as the law demands, is not the same as regulating broadband rates.

“The statute requires that there be a low-cost service option,” Davidson told Latta at the hearing, according to Broadband Breakfast. “We do not believe the states are regulating rates here. We believe that this is a condition to get a federal grant. Nobody’s requiring a service provider to follow these rates, people do not have to participate in the program.”

The NTIA needs to evaluate specific proposals to determine whether plans are low-cost, he said. “You have to be able to understand what is affordable,” Davidson was quoted as saying. “Every state has to submit a low-cost option that we can understand is affordable. When states do that, we will approve their plans.”


Congress apparently feels a need for “reaffirmation” of SLS rocket


Stuart Smalley is here to help with daily affirmations of SLS.

Aurich Lawson | SNL

There is a curious section in the new congressional reauthorization bill for NASA that concerns the agency’s large Space Launch System rocket.

The section is titled “Reaffirmation of the Space Launch System,” and in it Congress asserts its commitment to a flight rate of twice per year for the rocket. The reauthorization legislation, which cleared a House committee on Wednesday, also said NASA should identify other customers for the rocket.

“The Administrator shall assess the demand for the Space Launch System by entities other than NASA and shall break out such demand according to the relevant Federal agency or nongovernment sector,” the legislation states.

Congress directs NASA to report back, within 180 days of the legislation passing, on several topics. First, the legislators want an update on NASA’s progress toward achieving a flight rate of twice per year for the SLS rocket, and the Artemis mission by which this capability will be in place.

Additionally, Congress is asking for NASA to study demand for the SLS rocket and estimate “cost and schedule savings for reduced transit times” for deep space missions due to the “unique capabilities” of the rocket. The space agency also must identify any “barriers or challenges” that could impede use of the rocket by other entities other than NASA, and estimate the cost of overcoming those barriers.

Is someone afraid?

There is a fair bit to unpack here, but the inclusion of this section—there is no “reaffirmation” of the Orion spacecraft, for example—suggests that either the legacy space companies building the SLS rocket, local legislators, or both feel the need to protect the SLS rocket. As one source on Capitol Hill familiar with the legislation told Ars, “It’s a sign that somebody’s afraid.”

Congress created the SLS rocket 14 years ago with the NASA Authorization Act of 2010. The large rocket kept a river of contracts flowing to large aerospace companies, including Boeing and Northrop Grumman, which had been operating the Space Shuttle. Congress then lavished tens of billions of dollars on the contractors over the years for development, often authorizing more money than NASA said it needed. Congressional support was unwavering, at least in part because the SLS program boasts that it has jobs in every state.

Under the original law, the SLS rocket was supposed to achieve “full operational capability” by the end of 2016. The first launch of the SLS vehicle did not take place until late 2022, six years later. It was entirely successful. However, for a variety of reasons, the rocket will not fly again until September 2025 at the earliest.


Airbag problems force massive recalls at Alfa Romeo, BMW, Fiat, and Jeep

blow me up —

Takata airbags and problematic sensors lead to recall across four car brands.

A red illuminated airbag warning symbol in a car

Getty Images

Both BMW and Stellantis are recalling hundreds of thousands of vehicles in the US this month due to airbag problems. For BMW, the problem, which potentially affects 394,029 cars, is a continuation of the Takata airbag recall, the largest automotive recall in history. Stellantis has slightly fewer potentially affected cars, with 322,000 subject to recall, but for a different problem caused by a suspect sensor in the seat belt buckle.

BMW

While the BMW recall will be sent to almost 400,000 owners, the company suspects only 1 percent of that population will have a problem that needs remedying. That’s because it wants dealers to check any cars where the owner has replaced the factory-fitted steering wheel with a Sport or M-Sport version equipped with a PSDI-5 inflator.

These inflators lack a desiccant or drying agent that would otherwise prevent the ammonium nitrate airbag propellant from taking on moisture, degrading the airbag’s performance to the point where it could overinflate and shower the interior with metal fragments. At least 24 people have been killed by defective Takata airbags in the US, which led to 42 million cars being recalled to fix the problem.

BMW’s recall affects the model-years 2006–11 323i, 325i, 330i, 330Xi, 335i, 335Xi; the model-years 2006–12 325Xi, 328i, 328Xi; and the model-years 2009–11 335d. Should inspection find a replacement wheel with a Takata inflator, it will be replaced with a new airbag module, BMW says.

Stellantis

The Stellantis recall appears to affect cars produced in Italy: the model-years 2017–24 Alfa Romeo Giulia, model-years 2018–24 Alfa Romeo Stelvio, model-year 2024 Fiat 500E, model-years 2019–23 Fiat 500X, and model-years 2019–23 Jeep Renegade.

Here, the problem is not an airbag inflator but the Hall effect sensor, supplied by ZF, on the seat belt buckle—or, more specifically, the wiring that connects that sensor to the car’s internal network. Suspect connectors were used in different models at different times, some as early as 2016 and some as late as this June. In cars with faulty Hall effect sensor wiring, the airbag may not trigger during a crash.

Stellantis says that dealers will directly wire the sensor to the wiring harness with a solder tube in affected cars.


Users must prove Amazon ripped them off to revive Buy Box rigging suit

Better come with receipts —

Users want Amazon held accountable for hiding cheaper items with faster delivery.


A court has dismissed a proposed class-action lawsuit alleging that Amazon’s Buy Box was rigged to rip off customers seeking the best deals on the platform.

The suit followed 2022 antitrust probes in the European Union and United Kingdom that found that Amazon’s Buy Box had hidden cheaper items with faster delivery times to favor Fulfilled By Amazon (FBA) sellers since at least 2016.

As a result, Amazon had to change its Buy Box practices and earn back the trust of customers and sellers, the company said in a 2022 blog. Among changes, Amazon agreed to treat all sellers equally when featuring offers in the Buy Box and to promote a second competing offer when a comparable deal is available at either a lower price or with a faster delivery time.

Those steps apparently didn’t satisfy users who sued: Jeffrey Taylor and Robert Selway. They asked courts to find a “reasonable inference of injury” since they were Amazon customers for years while the price rigging occurred. They claimed that “but for Amazon’s deceptive conduct concerning the Buy Box algorithm, Plaintiffs and members of the Class would have purchased the lower priced offers from non-FBA sellers with equivalent or better delivery.”

But this week, US District Judge Marsha Pechman in Seattle told the users suing that it wasn’t enough to show evidence of Amazon’s proven misconduct. To state a claim under Washington’s Consumer Protection Act (CPA), they needed to provide receipts from transactions showing that Amazon charged them higher prices while cheaper items were available. Instead, their complaint seemingly contradicted their claim: the only Buy Box example it included was a screenshot of a hand soap that, Pechman said, other sellers offered at prices significantly higher than Amazon’s featured offer.

“Plaintiffs have not adequately shown that they made any specific transaction with Amazon, let alone one from the Buy Box,” Pechman wrote in her order. And they “do not allege any specific purchases in which they were deceived via the Buy Box, let alone provide receipts.”

This doesn’t necessarily end the fight to hold Amazon accountable, though. The judge granted leave for users to amend their complaint and either provide “information regarding specific orders (i.e., receipts)” or “make allegations regarding discrete transactions with Amazon.”

Now, the Amazon users have 30 days to track down receipts or otherwise show evidence of specific transactions where they were injured, Pechman wrote.

“Without a showing of a specific transaction, Plaintiffs cannot possibly allege that they themselves were overcharged for any particular purchase—which is the injury in dispute,” Pechman wrote.

It will likely be challenging for the Amazon users to establish that they paid higher prices for items purchased on the platform years ago, and Pechman acknowledged as much in her order.

“The Court recognizes that Plaintiffs may be unable to ultimately prove that they overpaid for specific purchases,” Pechman wrote, but the CPA requires more than a “mere possibility of injury.”

Ars could not immediately reach plaintiffs’ lawyers for comment. Amazon declined to comment.


Why every quantum computer will need a powerful classical computer


A single logical qubit is built from a large collection of hardware qubits.

One of the more striking things about quantum computing is that the field, despite not having proven itself especially useful, has already spawned a collection of startups that are focused on building something other than qubits. It might be easy to dismiss this as opportunism—trying to cash in on the hype surrounding quantum computing. But it can be useful to look at the things these startups are targeting, because they can be an indication of hard problems in quantum computing that haven’t yet been solved by any one of the big companies involved in that space—companies like Amazon, Google, IBM, or Intel.

In the case of a UK-based company called Riverlane, the unsolved piece that is being addressed is the huge amount of classical computations that are going to be necessary to make the quantum hardware work. Specifically, it’s targeting the huge amount of data processing that will be needed for a key part of quantum error correction: recognizing when an error has occurred.

Error detection vs. the data

All qubits are fragile, tending to lose their state during operations, or simply over time. No matter what the technology—cold atoms, superconducting transmons, whatever—these error rates put a hard limit on the amount of computation that can be done before an error is inevitable. That rules out running almost any useful computation directly on existing hardware qubits.

The generally accepted solution to this is to work with what are called logical qubits. These involve linking multiple hardware qubits together and spreading the quantum information among them. Additional hardware qubits are linked in so that they can be measured to monitor errors affecting the data, allowing them to be corrected. It can take dozens of hardware qubits to make a single logical qubit, meaning even the largest existing systems can only support about 50 robust logical qubits.
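As a rough illustration of that overhead, consider the surface code, one commonly discussed error-correction scheme (the article doesn’t say which code Riverlane targets, so the formula below is an assumption for illustration):

```python
# Physical-qubit overhead of a logical qubit, assuming a distance-d
# surface code: d*d data qubits plus d*d - 1 measurement (ancilla)
# qubits. The choice of code is an assumption for illustration.

def surface_code_qubits(d):
    """Hardware qubits per logical qubit at code distance d."""
    return 2 * d * d - 1

print(surface_code_qubits(5))        # 49 -> "dozens" per logical qubit
print(100 * surface_code_qubits(5))  # 4900 for ~100 logical qubits
```

Higher code distances suppress errors better but cost qubits quadratically, which is why the ratio of hardware qubits to logical qubits climbs so quickly.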

Riverlane’s founder and CEO, Steve Brierley, told Ars that error correction doesn’t only stress the qubit hardware; it stresses the classical portion of the system as well. Each of the measurements of the qubits used for monitoring the system needs to be processed to detect and interpret any errors. We’ll need roughly 100 logical qubits to do some of the simplest interesting calculations, meaning monitoring thousands of hardware qubits. Doing more sophisticated calculations may mean thousands of logical qubits.

That error-correction data (termed syndrome data in the field) needs to be read between each operation, which makes for a lot of data. “At scale, we’re talking a hundred terabytes per second,” said Brierley. “At a million physical qubits, we’ll be processing about a hundred terabytes per second, which is Netflix global streaming.”

It also has to be processed in real time; otherwise, computations will get held up waiting for error correction to happen. For transmon-based qubits, syndrome data is generated roughly every microsecond, so real time means completing the processing of the data—possibly terabytes of it—at a frequency of around a megahertz. Riverlane was founded to provide hardware that’s capable of handling it.
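
Brierley’s figures can be sanity-checked with simple arithmetic. The bytes-per-measurement number below is an assumption chosen to match the quoted total, not a figure from Riverlane:

```python
# Back-of-envelope check of the syndrome-data bandwidth at scale.
physical_qubits = 1_000_000
rounds_per_second = 1_000_000  # one syndrome round per microsecond

measurements_per_second = physical_qubits * rounds_per_second  # 1e12

# Assumed raw readout payload per measurement (hypothetical figure)
bytes_per_measurement = 100

bandwidth = measurements_per_second * bytes_per_measurement
print(bandwidth / 1e12)  # terabytes per second → 100.0
```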

Handling the data

The system the company has developed is described in a paper that it has posted on the arXiv. It’s designed to handle syndrome data after other hardware has already converted the analog signals into digital form. This allows Riverlane’s hardware to sit outside any low-temperature hardware that’s needed for some forms of physical qubits.

That data is run through an algorithm the paper terms a “Collision Clustering decoder,” which handles the error detection. To demonstrate its effectiveness, the company implemented it on a typical Xilinx field-programmable gate array, where it occupies only about 5 percent of the chip but can handle a logical qubit built from nearly 900 hardware qubits (simulated, in this case).

The company also demonstrated a custom chip that handled an even larger logical qubit, while only occupying a tiny fraction of a square millimeter and consuming just 8 milliwatts of power.

Both of these versions are highly specialized; they simply feed the error information to other parts of the system to act on. It is a narrowly focused solution, but also quite a flexible one: it works with various error-correction codes, and, critically, it integrates with systems designed to control qubits based on very different physics, including cold atoms, trapped ions, and transmons.

“I think early on it was a bit of a puzzle,” Brierley said. “You’ve got all these different types of physics; how are we going to do this?” It turned out not to be a major challenge. “One of our engineers was in Oxford working with the superconducting qubits, and in the afternoon he was working with the ion trap qubits. He came back to Cambridge and he was all excited. He was like, ‘They’re using the same control electronics.'” It turns out that, regardless of the physics involved in controlling the qubits, everybody had borrowed the same hardware from a different field (Brierley said it was a Xilinx radio-frequency system-on-a-chip built for 5G base station prototyping). That makes it relatively easy to integrate Riverlane’s custom hardware with a variety of systems.


new-blast-radius-attack-breaks-30-year-old-protocol-used-in-networks-everywhere

New Blast-RADIUS attack breaks 30-year-old protocol used in networks everywhere

AUTHENTICATION PROTOCOL SHATTERED —

Ubiquitous RADIUS scheme uses homegrown authentication based on MD5. Yup, you heard right.

New Blast-RADIUS attack breaks 30-year-old protocol used in networks everywhere

Getty Images

One of the most widely used network protocols is vulnerable to a newly discovered attack that can allow adversaries to gain control over a range of environments, including industrial controllers, telecommunications services, ISPs, and all manner of enterprise networks.

Short for Remote Authentication Dial-In User Service, RADIUS harkens back to the days of dial-in Internet and network access through public switched telephone networks. It has remained the de facto standard for lightweight authentication ever since and is supported in virtually all switches, routers, access points, and VPN concentrators shipped in the past two decades. Despite its early origins, RADIUS remains an essential staple for managing client-server interactions for:

  • VPN access
  • DSL and fiber-to-the-home connections offered by ISPs
  • Wi-Fi and 802.1X authentication
  • 2G and 3G cellular roaming
  • 5G Data Network Name authentication
  • Mobile data offloading
  • Authentication over private APNs for connecting mobile devices to enterprise networks
  • Authentication to critical infrastructure management devices
  • Eduroam and OpenRoaming Wi-Fi

RADIUS provides seamless interaction between clients—typically routers, switches, or other appliances providing network access—and a central RADIUS server, which acts as the gatekeeper for user authentication and access policies. The purpose of RADIUS is to provide centralized authentication, authorization, and accounting management for remote logins.

The protocol was developed in 1991 by a company known as Livingston Enterprises. In 1997 the Internet Engineering Task Force made it an official standard, which was updated three years later. Although there is a draft proposal for sending RADIUS traffic inside of a TLS-encrypted session that’s supported by some vendors, many devices using the protocol only send packets in clear text through UDP (User Datagram Protocol).

XKCD

A more detailed illustration of RADIUS using Password Authentication Protocol over UDP.


Goldberg et al.

Roll-your-own authentication with MD5? For real?

Since 1994, RADIUS has relied on an improvised, home-grown use of the MD5 hash function. First created in 1991 and adopted by the IETF in 1992, MD5 was at the time a popular hash function for creating what are known as “message digests” that map an arbitrary input like a number, text, or binary file to a fixed-length 16-byte output.
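
That home-grown construction is visible in how RADIUS hides the User-Password attribute (RFC 2865): the password is XORed against an MD5-derived keystream rather than protected by any vetted cipher. The sketch below shows the scheme, with the roundtrip included purely for illustration:

```python
import hashlib

def hide_password(password: bytes, secret: bytes, request_auth: bytes) -> bytes:
    """RFC 2865 User-Password hiding: pad to a 16-byte multiple, then XOR
    each block with MD5(secret + previous block) — an ad hoc MD5 stream
    cipher, not a standard cryptographic construction."""
    p = password + b"\x00" * (-len(password) % 16)
    out, prev = b"", request_auth
    for i in range(0, len(p), 16):
        mask = hashlib.md5(secret + prev).digest()
        block = bytes(a ^ b for a, b in zip(p[i:i + 16], mask))
        out += block
        prev = block  # chaining: the next mask depends on this ciphertext
    return out

def unhide_password(hidden: bytes, secret: bytes, request_auth: bytes) -> bytes:
    # The server reverses the XOR using the same MD5 chain
    out, prev = b"", request_auth
    for i in range(0, len(hidden), 16):
        mask = hashlib.md5(secret + prev).digest()
        out += bytes(a ^ b for a, b in zip(hidden[i:i + 16], mask))
        prev = hidden[i:i + 16]
    return out.rstrip(b"\x00")  # strip the zero padding
```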

For a cryptographic hash function, it should be computationally impossible for an attacker to find two inputs that map to the same output. Unfortunately, MD5 proved to be based on a weak design: Within a few years, there were signs that the function might be more susceptible than originally thought to attacker-induced collisions, a fatal flaw that allows the attacker to generate two distinct inputs that produce identical outputs. These suspicions were formally verified in a paper published in 2004 by researchers Xiaoyun Wang and Hongbo Yu and further refined in a research paper published three years later.

The latter paper—published in 2007 by researchers Marc Stevens, Arjen Lenstra, and Benne de Weger—described what’s known as a chosen-prefix collision, a type of collision that results from two messages chosen by an attacker that, when combined with two additional messages, create the same hash. That is, the adversary freely chooses two distinct input prefixes 𝑃 and 𝑃′ of arbitrary content that, when combined with carefully corresponding suffixes 𝑆 and 𝑆′ that resemble random gibberish, generate the same hash. In mathematical notation, such a chosen-prefix collision would be written as 𝐻(𝑃‖𝑆)=𝐻(𝑃′‖𝑆′). This type of collision attack is much more powerful because it allows the attacker the freedom to create highly customized forgeries.
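
In code, the attacker’s goal is to find suffixes satisfying that equality for prefixes of their choosing. The prefixes and suffixes below are placeholders; actually computing colliding suffixes requires a tool like the hashclash software discussed below:

```python
import hashlib

def H(msg: bytes) -> bytes:
    return hashlib.md5(msg).digest()  # 16-byte MD5 digest

# Attacker-chosen prefixes (placeholder values for illustration)
P, P2 = b"CN=legit.example", b"CN=evil.example"

# A chosen-prefix collision means computing suffixes S, S2 such that
# H(P + S) == H(P2 + S2). For arbitrary suffixes the digests differ:
S, S2 = b"\x00" * 8, b"\x01" * 8
print(H(P + S) == H(P2 + S2))  # → False; the collision search makes it True
```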

To illustrate the practicality and devastating consequences of the attack, Stevens, Lenstra, and de Weger used it to create two cryptographic X.509 certificates that generated the same MD5 signature but different public keys and different Distinguished Name fields. Such a collision could induce a certificate authority intending to sign a certificate for one domain to unknowingly sign a certificate for an entirely different, malicious domain.

In 2008, a team of researchers that included Stevens, Lenstra, and de Weger demonstrated how a chosen-prefix attack on MD5 allowed them to create a rogue certificate authority that could generate TLS certificates that would be trusted by all major browsers. A key ingredient for the attack is software named hashclash, developed by the researchers. Hashclash has since been made publicly available.

Despite the undisputed demise of MD5, the function remained in widespread use for years. Deprecation of MD5 didn’t start in earnest until 2012 after malware known as Flame, reportedly created jointly by the governments of Israel and the US, was found to have used a chosen-prefix attack to spoof MD5-based code signing by Microsoft’s Windows update mechanism. Flame used the collision-enabled spoofing to hijack the update mechanism so the malware could spread from device to device inside an infected network.

More than 12 years after Flame’s devastating damage was discovered and two decades after collision susceptibility was confirmed, MD5 has felled yet another widely deployed technology that resisted common wisdom urging a move away from the hashing scheme—the RADIUS protocol, which is supported in hardware or software provided by at least 86 distinct vendors. The result is “Blast RADIUS,” a complex attack that allows an attacker with an active adversary-in-the-middle position to gain administrator access to devices that use RADIUS to authenticate themselves to a server.

“Surprisingly, in the two decades since Wang et al. demonstrated an MD5 hash collision in 2004, RADIUS has not been updated to remove MD5,” the research team behind Blast RADIUS wrote in a paper published Tuesday and titled RADIUS/UDP Considered Harmful. “In fact, RADIUS appears to have received notably little security analysis given its ubiquity in modern networks.”

The paper’s publication is being coordinated with security bulletins from at least 90 vendors whose wares are vulnerable. Many of the bulletins are accompanied by patches implementing short-term fixes, while a working group of engineers across the industry drafts longer-term solutions. Anyone who uses hardware or software that incorporates RADIUS should read the technical details provided later in this post and check with the manufacturer for security guidance.


samsung’s-abandoned-nx-cameras-can-be-brought-online-with-a-$20-lte-stick

Samsung’s abandoned NX cameras can be brought online with a $20 LTE stick

Samsung: The Next Big Thing is Here (And Gone) —

All it took was a reverse-engineered camera firmware and a custom API rewrite.

Samsung camera display next to a 4G LTE modem stick

Under-powered Samsung camera, meet over-powered 4G LTE dongle. Now work together to move pictures over the air.

Georg Lukas

Back in 2010—after the first iPhone, but before its camera was any good—a mirrorless, lens-swapping camera that could upload photos immediately to social media or photo storage sites was a novel proposition. That’s what Samsung’s NX cameras promised.

Unsurprisingly, Samsung didn’t keep that promise much longer once sales numbers cratered and it dropped its camera business. It tried out the quirky idea of jamming together Android phones and NX cameras in 2013, providing a more direct means of sending shots and clips to Instagram or YouTube. But it shut down its Social Network Services (SNS) entirely in 2021, leaving NX owners with the choice of manually transferring their photos or ditching their cameras (presuming they had not already moved on).

Some people, wonderfully, refuse to give up. People like Georg Lukas, who reverse-engineered Samsung’s SNS API to bring back a version of direct picture posting to Wi-Fi-enabled NX models, and even expand it. It was not easy, but at least the hardware is cheap. By reflashing the surprisingly capable board on a USB 4G dongle, Lukas is able to create a Wi-Fi hotspot with LTE uplink and run his modified version of Samsung’s (woefully insecure) service natively on the stick.

What is involved should you have such a camera? Here’s the shorter version of Lukas’ impressive redux:

  • Installing Debian on the LTE dongle’s board
  • Creating a Wi-Fi hotspot on the stick using NetworkManager
  • Setting up Lukas’ own upload server, written in Python with Flask
  • Configuring the web server now running on that dongle
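
The heart of the setup is the upload server. As a rough sketch of what such an endpoint does—this is a stdlib-only illustration, not Lukas’ actual Flask code, and the /upload route and filename are hypothetical:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class UploadHandler(BaseHTTPRequestHandler):
    """Accepts POSTed image bytes from the camera and stores them,
    standing in for the re-implemented Samsung SNS endpoint."""

    def do_POST(self):
        if self.path != "/upload":  # hypothetical route name
            self.send_error(404)
            return
        length = int(self.headers.get("Content-Length", 0))
        data = self.rfile.read(length)
        with open("photo.jpg", "wb") as f:  # hypothetical fixed filename
            f.write(data)
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"OK")

# To serve on the dongle's Wi-Fi hotspot interface:
#   HTTPServer(("0.0.0.0", 8080), UploadHandler).serve_forever()
```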

The details of how Lukas reverse-engineered the firmware from a Samsung WB850F are posted on his blog. It is one of those Internet blog posts in which somebody describes something incredibly arcane, requiring a dozen kinds of knowledge backed by experience, with the casualness with which one might explain how to plant seeds in soil.

The hardest part of the whole experiment might be obtaining the 4G LTE stick itself. The Hackaday blog has detailed this stick (and also tipped us to this camera rebirth project), a purpose-built device that can be turned back into a general-purpose single-board computer, on the level of a Pi Zero 2 W, should you apply a new bootloader and install Linux on it. You can find it on Alibaba for very cheap—or seemingly find it, because some versions of what looks like the same stick come with a far more limited CPU. You’re looking for a stick with the MSM8916 inside, sometimes listed as a “QualComm 8916.”

Lukas’ new version posts images to Mastodon, as demonstrated in his proof-of-life post. It could likely be extended to more of today’s social or backup services, should he or anybody else have the time and deep love for what are now kinda cruddy cameras. Here’s hoping today’s connected devices have similarly dedicated hackers in the future.


fcc-to-block-phone-company-over-robocalls-pushing-scam-“tax-relief-program”

FCC to block phone company over robocalls pushing scam “Tax Relief Program”

Tax debt scam —

Veriwave Telco “identified one client as the source of all of the calls.”

A smartphone on a wooden table displaying an incoming call from an unknown phone number.

Getty Images | Diy13

The Federal Communications Commission said it is preparing to block a phone company that carried illegal robocalls pushing fake programs that promised to wipe out consumers’ tax debt. Veriwave Telco “has not complied with FCC call blocking rules for providers suspected of carrying illegal traffic” and now has two weeks to contest an order that would require all downstream voice providers to block all of the telco’s call traffic, the FCC announced yesterday.

Robocalls sent in the months before tax filing season “purported to provide information about a ‘National Tax Relief Program’ and, in some instances, also discussed a ‘Tax Dismissal Program,'” the FCC order said. “The [Enforcement] Bureau has found no evidence of the existence of either program. Many of the messages further appealed to recipients with the offer to ‘rapidly clear’ their tax debt.”

Call recipients who listened to the prerecorded message and chose to speak to an operator were then asked to provide private information. Nearly 16 million calls were sent, though it’s unclear how many went through Veriwave.

Veriwave is an “originating provider” that distributes call traffic to other phone companies before calls are delivered to landline and cellphone users. The Industry Traceback Group (ITG), which is run by the USTelecom trade association and coordinates with the FCC, conducted tracebacks on about two dozen calls and determined that Veriwave was the originating provider.

“The ITG notified Veriwave of these calls and provided the Company with supporting data identifying each call,” the FCC said in a previous order. “Veriwave did not contest it had originated the calls and identified one client as the source of all of the calls. Veriwave did not offer evidence of consent for the calls or contest the unlawful nature of the calls. Nor did Veriwave contest that any exceptions to the rules applied.”

No reply

The robocalls began, “I’ve been tasked to personally contact you and make sure that you have been provided the information about the new National Tax Relief Program. This relevant information is extremely important with helping those that owe back taxes to rapidly clear their debt.” The calls then listed eligibility requirements for the nonexistent program and instructed recipients to press 1 to speak to a person.

“If the recipient connected to a live operator, the live operator reportedly asked for personal information, including date of birth and Social Security number,” the FCC said.

The FCC said it reached out to Veriwave “about its robocall mitigation efforts, but the email was returned as undeliverable.” The FCC then sent a formal notice to the company but received no response.

The FCC on April 4 notified all US-based voice providers that they were permitted—but not required—to block calls from Veriwave. Under the FCC’s blocking procedures, yesterday’s order triggered a 14-day period in which Veriwave can respond and “demonstrate compliance” with the rules. After that, all phone companies “immediately downstream from Veriwave will then be required to block and cease accepting all traffic received directly from Veriwave beginning 30 days after release of the Final Determination Order.”

The FCC said the ITG conducted tracebacks of 23 illegal robocalls between November 30, 2023, and January 29, 2024, but the actual number of illegal robocalls is apparently much higher. “YouMail, a software app company, estimates that approximately 15.8 million calls of this nature were transmitted in the three months immediately preceding the start of the 2024 tax filing season,” the FCC said. “The Industry Traceback Group and the FCC traced a number of these calls to Veriwave as the originating provider.”

FCC records show that Veriwave, based in Delaware, testified under penalty of perjury in November 2023 that it completed implementation of the STIR/SHAKEN technology that inhibits robocalls by authenticating Caller ID information.
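
STIR/SHAKEN works by having the originating provider attach a signed “PASSporT” token vouching for the calling number, which downstream carriers can verify. Real PASSporTs are JWTs signed with ES256 using carrier certificates; the sketch below substitutes HMAC purely to stay self-contained and is not a conformant implementation:

```python
import base64, hashlib, hmac, json

def b64url(data: bytes) -> str:
    # JWT-style base64url without padding
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_passport(orig_tn: str, dest_tn: str, key: bytes) -> str:
    """Build a PASSporT-shaped token binding the calling and called
    numbers. Real tokens use ES256 and an x5u certificate URL; HMAC
    here is only an illustrative stand-in."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "passport"}).encode())
    payload = b64url(json.dumps(
        {"orig": {"tn": orig_tn}, "dest": {"tn": [dest_tn]}}).encode())
    signing_input = f"{header}.{payload}".encode()
    sig = b64url(hmac.new(key, signing_input, hashlib.sha256).digest())
    return f"{header}.{payload}.{sig}"
```

A carrier that had actually deployed this kind of signing could not silently originate millions of spoofed calls without the signatures tracing back to it.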
