Author name: Kris Guyer

Elon Musk’s X tests letting users request Community Notes on bad posts

Continuing to evolve the fact-checking service that launched as Twitter’s Birdwatch, X has announced that Community Notes can now be requested to clarify problematic posts spreading on Elon Musk’s platform.

X’s Community Notes account confirmed late Thursday that, due to “popular demand,” X had launched a pilot test on the web-based version of the platform. The test is active now and the same functionality will be “coming soon” to Android and iOS, the Community Notes account said.

Through the current web-based pilot, eligible users can click the “•••” menu on any X post on the web and request fact-checking from one of Community Notes’ top contributors, X explained. If X receives five or more requests within 24 hours of the post going live, the post becomes eligible for a Community Note.

Only X users with verified phone numbers will be eligible to request Community Notes, X said, and to start, users will be limited to five requests a day.

“The limit may increase if requests successfully result in helpful notes, or may decrease if requests are on posts that people don’t agree need a note,” X’s website said. “This helps prevent spam and keep note writers focused on posts that could use helpful notes.”
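
Taken at face value, the mechanics X describes (five requests within 24 hours of a post going live, plus a per-user allowance that grows or shrinks with outcomes) boil down to simple counting. Here is a minimal sketch of those published rules; the class and method names are hypothetical, not anything from X's codebase:

```python
from collections import defaultdict
from datetime import datetime, timedelta

REQUEST_THRESHOLD = 5                 # requests that trigger contributor alerts
REQUEST_WINDOW = timedelta(hours=24)  # counted from when the post went live

class NoteRequestTracker:
    """Hypothetical model of the request rules X describes."""

    def __init__(self):
        self.post_created = {}                   # post_id -> publish time
        self.request_count = defaultdict(int)    # post_id -> total requests
        self.allowance = defaultdict(lambda: 5)  # user_id -> requests left today

    def request_note(self, user_id, post_id, now):
        """Record one request; return True once the post qualifies for alerts."""
        if self.allowance[user_id] <= 0:
            return False  # daily allowance exhausted
        if now - self.post_created[post_id] > REQUEST_WINDOW:
            return False  # past the 24-hour window
        self.allowance[user_id] -= 1
        self.request_count[post_id] += 1
        return self.request_count[post_id] >= REQUEST_THRESHOLD

    def adjust_allowance(self, user_id, led_to_helpful_note):
        """Raise or lower the daily cap based on outcomes, as X describes."""
        if led_to_helpful_note:
            self.allowance[user_id] += 1
        else:
            self.allowance[user_id] = max(1, self.allowance[user_id] - 1)

# The fifth request inside the window flips the post to "alert contributors."
tracker = NoteRequestTracker()
tracker.post_created["post1"] = datetime(2024, 7, 19, 12, 0)
for user in ["a", "b", "c", "d", "e"]:
    triggered = tracker.request_note(user, "post1", datetime(2024, 7, 19, 13, 0))
print(triggered)  # True
```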

Once X receives five or more requests for a Community Note within a single day, top contributors with diverse views will be alerted to respond. On X, top contributors are constantly changing, as their notes are voted as either helpful or not. If at least 4 percent of their notes are rated “helpful,” X explained on its site, and the impact of their notes meets X standards, they can be eligible to receive alerts.

“A contributor’s Top Writer status can always change as their notes are rated by others,” X’s website said.

Ultimately, X considers notes helpful if they “contain accurate, high-quality information” and “help inform people’s understanding of the subject matter in posts,” X said on another part of its site. To gauge the former, X said that the platform partners with “professional reviewers” from the Associated Press and Reuters. X also continually monitors whether notes marked helpful by top writers match what general X users marked as helpful.

“We don’t expect all notes to be perceived as helpful by all people all the time,” X’s website said. “Instead, the goal is to ensure that on average notes that earn the status of Helpful are likely to be seen as helpful by a wide range of people from different points of view, and not only be seen as helpful by people from one viewpoint.”

X will also be allowing half of the top contributors to request notes during the pilot phase, which X said will help the platform evaluate “whether it is beneficial for Community Notes contributors to have both the ability to write notes and request notes.”

According to X, the criteria for requesting a note have intentionally been designed to be simple during the pilot stage, but X expects “these criteria to evolve, with the goal that requests are frequently found valuable to contributors, and not noisy.”

It’s hard to tell from the outside looking in how helpful Community Notes are to X users. The most recent Community Notes survey data that X points to is from 2022 when the platform was still called Twitter and the fact-checking service was still called Birdwatch.

That data showed that “on average,” users were “20–40 percent less likely to agree with the substance of a potentially misleading Tweet than someone who sees the Tweet alone.” And based on Twitter’s “internal data” at that time, the platform also estimated that “people on Twitter who see notes are, on average, 15–35 percent less likely to Like or Retweet a Tweet than someone who sees the Tweet alone.”

Coal-filled trains are likely sending people to the hospital

Training for black lung —

Coal-filled trains trail a cloud of particulates shaken free from their cargo.

Although US coal consumption has fallen dramatically since 2005, the country still consumes millions of tons a year, and exports tons more—much of it transported by train. Now, new research shows that these trains can affect the health of people living near where they pass.

The study found that residents living near railroad tracks likely have higher premature mortality rates due to air pollutants released during the passage of uncovered coal trains. The analysis of the San Francisco Bay Area cities of Oakland, Richmond, and Berkeley shows that increases in air pollutants such as small particulate matter (PM 2.5) are also associated with increases in asthma-related episodes and hospital admissions.

“This has never been studied in the world. There’s been a couple studies trying to measure just the air pollution, usually in rural areas, but this was the first to both measure air pollution and trains in an urban setting,” said Bart Ostro, author of the study and an epidemiologist at the University of California, Davis.

Persistent coal pollution

Trains carry nearly 70 percent of coal shipments in the United States, leaving a trail of pollution in their wake. And coal exports will have a similar impact during transit. Ostro explained that when uncovered coal trains travel, the coal particles disperse around the railroad tracks. Levels of PM 2.5 “[spread] almost a mile away,” he added.

As a result, the mere passage of coal trains could affect the health of surrounding communities. Ostro was particularly concerned about how these pollutants could harm vulnerable populations living near the coal export terminal in Richmond. Previous census data had already shown that those in Richmond who live around the rail line have mortality rates 10 to 50 percent higher than the county average. Communities in Oakland could be at risk, too, since discussions are underway to build a new coal export terminal in the region.

But before researchers could study the health effects of these air pollutants, they first had to understand how much was spread by passing trains. This was a challenge in itself because coal trains aren’t scheduled like regular passenger trains.

To ensure that researchers could measure all trains and pollutants, Ostro and his team developed a monitoring system with three main components: a weather station to provide meteorological parameters, an air quality sensor to track air pollution levels, and an AI-trained camera to recognize coal trains. The trained cameras were critical to the entire project, identifying different types of trains: full coal trains, empty coal trains, freight trains, and passenger trains.

With the system in place, Ostro’s team measured pollution levels and was able to attribute them directly to coal trains. Their results, published last year, showed that coal trains and terminal operations added a significant amount of PM 2.5 pollution to urban areas, more than other freight or passenger trains. Passing coal trains added an average of 8 μg/m³ to ambient pollution. This is 2 to 3 micrograms more than freight trains contribute. Even empty coal cars contribute to increased pollution levels due to traces of coal dust.
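
For a sense of what that attribution step involves, the core calculation can be approximated as an event study: compare PM 2.5 readings during each camera-detected train passage against a baseline window just before it. Below is a rough sketch assuming tidy inputs, with hypothetical file and column names; the study's actual statistical model is more involved:

```python
import pandas as pd

# Illustrative inputs (hypothetical files):
# pm25_readings.csv: time, pm25          -- sensor time series in μg/m³
# train_events.csv:  start, end, kind    -- camera-detected passages by train type
pm = pd.read_csv("pm25_readings.csv", parse_dates=["time"])
trains = pd.read_csv("train_events.csv", parse_dates=["start", "end"])

def passage_delta(event, pad=pd.Timedelta("15min")):
    """Mean PM2.5 during a passage minus the baseline just before it."""
    during = pm.loc[pm.time.between(event.start, event.end), "pm25"]
    before = pm.loc[pm.time.between(event.start - pad, event.start), "pm25"]
    return during.mean() - before.mean()

trains["delta_pm25"] = trains.apply(passage_delta, axis=1)

# Average added PM2.5 by train type; the study reports roughly 8 μg/m³ for
# loaded coal trains, about 2-3 μg/m³ more than other freight.
print(trains.groupby("kind")["delta_pm25"].mean())
```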

Particulate problems

This year, in a follow-up study, researchers combined these findings with US Census data and health studies to understand how this increase might affect local communities. They estimated that more than 260,000 people would be exposed to some increase in annual PM 2.5, and that such exposure was associated with significant mortality and morbidity.

Health effects were quantified for three different scenarios based on different wind conditions. In the worst-case scenario, where there’s an increase of about 2 μg/m³ near the railway line, modeling suggests that premature mortality would increase by 1.3 percent. Hospital admissions for conditions such as chronic lung disease, pneumonia, and cardiovascular disease would also increase by 4.7 percent, 6.2 percent, and 2.2 percent, respectively. Although these are relatively small numbers in a small population, Ostro points out that they could be extrapolated to larger populations in other countries.
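
Those percentages come out of standard concentration-response math. As a rough worked example of the mechanics, here is the common log-linear form; the relative-risk value below is a placeholder in the range epidemiologists use for PM 2.5, not the study's own coefficient:

```python
import math

def attributable_fraction(delta_pm25, rr_per_10ug):
    """Share of baseline cases attributable to a PM2.5 increase,
    using the standard log-linear concentration-response form."""
    beta = math.log(rr_per_10ug) / 10  # risk coefficient per μg/m³
    rr = math.exp(beta * delta_pm25)   # relative risk at this exposure
    return (rr - 1) / rr               # attributable fraction

# Worst-case scenario from the study: roughly 2 μg/m³ added near the rail line.
af = attributable_fraction(2.0, rr_per_10ug=1.08)   # 1.08 is a placeholder RR
print(f"{af:.1%} of baseline deaths attributable")  # ~1.5% at these assumptions
```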

“The way I see it, this is a microcosm of what could be happening globally,” he added. While coal use—and the transportation of that coal—is declining in the US and the European Union, the same isn’t happening everywhere. In countries like China and India, for example, coal use is increasing, and populations living near the railroads that transport that coal could be at risk.

“These findings have major implications beyond San Francisco and the US,” said Michel Abramson from Monash University in Australia, who wasn’t involved in the study. The researcher thinks Ostro’s assessment “fills an important gap” by looking at the health effects of transporting coal in uncovered rail cars but doesn’t think there are any solutions to mitigate the problem other than stopping the use of coal.

“Covering the coal cars might not solve the problem, because it could increase the risk of fires,” he added. “Ultimately the world needs to phase out the mining, transport, and combustion of coal, not only to reduce the risks of climate change, but also to improve the health of the population.”

Environmental Research, 2024.  DOI: 10.1016/j.envres.2024.118787

Bárbara Pinho is a science journalist specializing in climate, health, and agriculture, based in Porto, Portugal. Learn more about her work at barbarapinho.com or follow her on X (formerly Twitter) @BarbPinho

Google, its cat fully escaped from bag, shows off the Pixel 9 Pro weeks early

Google Pixel 9 Series —

Upcoming phone is teased with an AI breakup letter to “the same old thing.”

Image caption: You can have confirmation of one of our upcoming four phones, but you have to hear us talk about AI again. Deal? (Credit: Google)

After every one of its house-brand phones, and even its new wall charger, has been meticulously photographed, sized, and rated for battery capacity, what should Google do to keep the anticipation up for the Pixel 9 series’ August 13 debut?

Lean into it, it seems, and Google is doing so with an eye toward further promoting its Gemini-based AI aims. In a video post on X (formerly Twitter), Google describes a “phone built for the Gemini era,” one that can, through the power of Gemini, “even let your old phone down easy” with a breakup letter. The camera pans out, and the shape of the Pixel 9 Pro appears and turns around to show off the now-standard Pixel camera bar across the upper back.

There’s also a disclaimer to this tongue-in-cheek request for a send-off to a phone that is “just the same old thing”: “Screen simulated. Limitations apply. Check responses for accuracy.”

Over at the Google Store, you can see a static image of the Pixel 9 Pro and sign up for alerts about its availability. The image confirms that the photos taken by Taiwanese regulatory authority NCC were legitimate, right down to the coloring on the back of the Pixel 9 Pro and the camera and flash placement.

Those NCC photos confirmed that Google intends to launch four different phone-ish devices at its August 13 “Made by Google” event. The Pixel 9 and Pixel 9 Pro are both roughly 6.1-inch devices, but the Pro will likely offer more robust Gemini AI integration due to increased RAM and other spec bumps. The Pixel 9 Pro XL should have similarly AI-ready specs, just in a larger size. And the Pixel 9 Pro Fold is an iteration on Google’s first Pixel Fold model, with seemingly taller dimensions and a daringly smaller battery.

Aventon, a major e-bike maker, tries its hand with a hardtail

Image caption: Aventon’s Ramblas hardtail mountain bike. (Credit: John Timmer)

Full suspension mountain bikes are complicated beasts, with sections of the frame that pivot and a shock absorber to moderate that pivot. These parts help limit the bumps that reach your body and keep your rear tire in contact with the trail across all sorts of terrain and obstacles. The complexity and additional parts, however, boost the costs of full suspension bikes considerably, a situation that only gets worse when you electrify things.

As a result, some of the electric mountain bikes we’ve looked at are either very expensive or make a few too many compromises to bring the price down. Even aiming for middle-of-the-road compromise hardware puts costs in the area of $5,000.

But there’s one easy way to lower the price considerably: lose the full suspension. The electric “hardtails” from major manufacturers typically cost considerably less than a full suspension bike with similar components. And because the engineering demands are considerably lower than in a full suspension bike, it’s easier for some of the smaller e-bike companies to put together a solid offering.

So over the course of the spring and into the summer, I’ve been testing two hardtail mountain bikes that were recently introduced by e-bike specialists. First up is the Aventon Ramblas.

The hardware

Aventon is one of the larger dedicated e-bike makers and offers a wide range of bikes at competitive prices. Most of them fall into a sort of generic “commuter” category; the Ramblas is the company’s first offering made for a specific audience (though it, too, is categorized as a commuter option on the company’s website). It’s also the first bike the company is offering above the $2,000 price point. At $2,899, it’s actually more expensive than one of the electric hardtail models being cleared out by Trek, a company that does not have a reputation for affordability.

What do you get for that price? Solid low/mid-range components from SRAM, including its NX Eagle drivetrain. There’s a dropper seatpost, a front suspension from RockShox, and Maxxis tires. The fork is coil-based, so it doesn’t offer much in the way of adjustment—what you start the ride with is pretty much what you’ll spend the entire ride experiencing, unlike many alternatives that let you firm up the ride for pavement. (It has a rebound adjustment at the bottom of the fork, but the effects are subtle.) Aventon doesn’t say who makes the rims, and there are no external markings on them to indicate the manufacturer.

Image caption: A mid-motor combined with a huge range of gearing ratios makes for a winning combination. (Credit: John Timmer)

Overall, it’s about what you’d expect from an entry-level offering. I don’t have any concerns about the durability of the components, and their performance was mostly fine. The one thing that did concern me was the plastic cover over the battery, which didn’t fit against the frame snugly and was only held in place by relatively weak contacts at each end. It’s enough to handle some water splashed off the front wheel, but I wouldn’t trust it to protect the battery while fording anything significant.

Saddle and pedals are matters of personal taste, and many people will argue they’re irrelevant because any serious cyclist will want to replace them anyway. But that’s far less likely to be true on the budget end of the scale, so I did most of my riding on what came with the bike. The pedals, while lacking the threatening-looking screws of serious mountain bike offerings, worked out fine when paired with a sticky set of mountain bike shoes, though I felt I had a bit more confidence going over bumps on a ride where I swapped in my clipless pedals.

The saddle, however, was a problem, in part because the frame was a bit too small for my relatively long legs. The saddle has a relatively slick surface that, when combined with my road biking shorts, meant I tended to slide toward the back of the seat over time. A better-fitting frame might have solved this issue (the large version was supposedly rated up to my height, but I clearly should have gone for the XL).

Image caption: The RockShox forks don’t offer much in the way of adjustments, but they work reliably. (Credit: John Timmer)

Speaking of the frame, Aventon has detailed measurements of the geometry available if those make sense to you. But my experience was that the bike was fairly compact in the seat-to-handlebar dimension, leaving me feeling that I was leaning over the handlebars a bit more than I do on other bikes. It wasn’t uncomfortable; it just felt different.

Researchers track individual neurons as they respond to words

Pondering phrasing —

When processing language, individual neurons respond to words with similar meanings.

Image caption: Human neuron, digital light microscope. (Credit: BSIP/Universal Images Group via Getty Images)

“Language is a huge field, and we are novices in this. We know a lot about how different areas of the brain are involved in linguistic tasks, but the details are not very clear,” says Mohsen Jamali, a computational neuroscience researcher at Harvard Medical School who led a recent study into the mechanism of human language comprehension.

“What was unique in our work was that we were looking at single neurons. There is a lot of studies like that on animals—studies in electrophysiology, but they are very limited in humans. We had a unique opportunity to access neurons in humans,” Jamali adds.

Probing the brain

Jamali’s experiment involved playing recorded sets of words to patients who, for clinical reasons, had implants that monitored the activity of neurons located in their left prefrontal cortex—the area that’s largely responsible for processing language. “We had data from two types of electrodes: the old-fashioned tungsten microarrays that can pick the activity of a few neurons; and the Neuropixel probes which are the latest development in electrophysiology,” Jamali says. The Neuropixels were first inserted in human patients in 2022 and could record the activity of over a hundred neurons.

“So we were in the operation room and asked the patient to participate. We had a mixture of sentences and words, including gibberish sounds that weren’t actual words but sounded like words. We also had a short story about Elvis,” Jamali explains. He said the goal was to figure out if there was some structure to the neuronal response to language. Gibberish words were used as a control to see if the neurons responded to them in a different way.

“The electrodes we used in the study registered voltage—it was a continuous signal at 30 kHz sampling rate—and the critical part was to dissociate how many neurons we had in each recording channel. We used statistical analysis to separate individual neurons in the signal,” Jamali says. Then, his team synchronized the neuronal activity signals with the recordings played to the patients down to a millisecond and started analyzing the data they gathered.
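
The "statistical analysis" step Jamali mentions is conventionally called spike sorting. A toy version of the idea, detecting threshold crossings in the 30 kHz voltage trace and then clustering the spike waveforms so that each cluster stands in for one putative neuron, might look like this (real pipelines are considerably more elaborate):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

FS = 30_000  # samples per second, matching the study's recordings

def sort_spikes(voltage, n_units=3, half=30):
    """Toy spike sorting: threshold detection, then waveform clustering."""
    # 1. Detect spikes as large negative deflections (4x a robust noise estimate).
    noise = np.median(np.abs(voltage)) / 0.6745
    crossings = np.flatnonzero(voltage < -4 * noise)
    # Keep the first sample of each event; skip samples inside the same spike.
    keep = np.insert(np.diff(crossings) > half, 0, True)
    peaks = crossings[keep]
    peaks = peaks[(peaks > half) & (peaks < len(voltage) - half)]

    # 2. Cut a short waveform snippet around each spike (2 ms at 30 kHz).
    waveforms = np.stack([voltage[p - half:p + half] for p in peaks])

    # 3. Project to a few principal components and cluster; each cluster
    #    stands in for one putative neuron.
    features = PCA(n_components=3).fit_transform(waveforms)
    labels = KMeans(n_clusters=n_units, n_init=10).fit_predict(features)
    return peaks / FS, labels  # spike times in seconds, unit labels
```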

Putting words in drawers

“First, we translated words in our sets to vectors,” Jamali says. Specifically, his team used Word2Vec, a technique used in computer science to find relationships between words contained in a large corpus of text. What Word2Vec can do is tell whether certain words have something in common—if they are synonyms, for example. “Each word was represented by a vector in a 300-dimensional space. Then we just looked at the distance between those vectors, and if the distance was close, we concluded the words belonged in the same category,” Jamali explains.

Then the team used these vectors to identify words that clustered together, which suggested they had something in common (something they later confirmed by examining which words were in a cluster together). They then determined whether specific neurons responded differently to different clusters of words. It turned out they did.
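
Here is roughly what that pipeline looks like with off-the-shelf tools: pretrained 300-dimensional Word2Vec vectors plus k-means clustering. The word list and cluster count below are illustrative only; the study's corpus, distance measure, and preprocessing differ:

```python
import gensim.downloader as api
from sklearn.cluster import KMeans

# Pretrained 300-dimensional vectors, the dimensionality the study describes.
w2v = api.load("word2vec-google-news-300")

words = ["dog", "cat", "horse", "happy", "angry", "sad",
         "run", "jump", "swim", "rain", "snow", "wind"]
vectors = [w2v[w] for w in words]

# Words whose vectors sit close together tend to land in the same cluster.
labels = KMeans(n_clusters=4, n_init=10).fit_predict(vectors)
for cluster in sorted(set(labels)):
    print(cluster, [w for w, l in zip(words, labels) if l == cluster])
# Should roughly recover animals / feelings / activities / weather groups.
```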

“We ended up with nine clusters. We looked at which words were in those clusters and labeled them,” Jamali says. It turned out that each cluster corresponded to a neat semantic domain. Specialized neurons responded to words referring to animals, while other groups responded to words referring to feelings, activities, names, weather, and so on. “Most of the neurons we registered had one preferred domain. Some had more, like two or three,” Jamali explained.

The mechanics of comprehension

The team also tested if the neurons were triggered by the mere sound of a word or by its meaning. “Apart from the gibberish words, another control we used in the study was homophones,” Jamali says. The idea was to test if the neurons responded differently to the word “sun” and the word “son,” for example.

It turned out that the response changed based on context. When the sentence made it clear the word referred to a star, the sound activated the neurons tuned to weather phenomena. When it was clear that the same sound referred to a person, it activated the neurons tuned to family members. “We also presented the same words at random without any context and found that it didn’t elicit as strong a response as when the context was available,” Jamali claims.

But language processing in our brains must involve more than just different semantic categories being processed by different groups of neurons.

“There are many unanswered questions in linguistic processing. One of them is how much a structure matters, the syntax. Is it represented by a distributed network, or can we find a subset of neurons that encode structure rather than meaning?” Jamali asked. Another thing his team wants to study is what the neural processing looks like during speech production, in addition to comprehension. “How are those two processes related in terms of brain areas and the way the information is processed,” Jamali adds.

The last thing—and according to Jamali the most challenging thing—is using the Neuropixel probes to see how information is processed across different layers of the brain. “The Neuropixel probe travels through the depths of the cortex, and we can look at the neurons along the electrode and say like, ‘OK, the information from this layer, which is responsible for semantics, goes to this layer, which is responsible for something else.’ We want to learn how much information is processed by each layer. This should be challenging, but it would be interesting to see how different areas of the brain are involved at the same time when presented with linguistic stimuli,” Jamali concludes.

Nature, 2024.  DOI: 10.1038/s41586-024-07643-2

Craig Wright’s claim of inventing bitcoin may get him arrested for perjury

Not the real Satoshi —

UK judge refers Wright to prosecutors, suggests arrest warrant and extradition.

Image caption: Dr. Craig Wright arrives at the Rolls Building, part of the Royal Courts of Justice, on February 6, 2024, in London, England.

A British judge is referring self-proclaimed bitcoin inventor Craig Wright to the Crown Prosecution Service (CPS) to consider criminal charges of perjury and forgery. The judge said the CPS can decide whether Wright should be arrested, and he granted two injunctions that prohibit Wright from re-litigating his claim to be bitcoin inventor Satoshi Nakamoto.

“I have no doubt that I should refer the relevant papers in this case to the CPS for consideration of whether a prosecution should be commenced against Dr. Wright for his wholescale perjury and forgery of documents and/or whether a warrant for his arrest should be issued and/or whether his extradition should be sought from wherever he now is. All those matters are to be decided by the CPS,” Justice James Mellor of England’s High Court of Justice wrote in a ruling issued today.

If Wright actually believes he is Nakamoto, “he is deluding himself,” Mellor wrote.

Mellor previously found that Wright “lied repeatedly and extensively” and forged documents “on a grand scale” in a case related to Wright’s claim that he is Nakamoto. The case began when Wright was sued by the nonprofit Crypto Open Patent Alliance (COPA), which said its goal was to disprove Wright’s bitcoin-inventing claim and stop him from claiming intellectual property rights to the system.

Wright’s location unknown

Wright’s location is unknown, today’s ruling said. “The evidence shows that Dr. Wright has left his previous residence in Wimbledon, appears to have left the UK, has been said to be traveling and was last established to be in the time zone of UTC +7,” Mellor wrote.

COPA asked Mellor “to dispense with personal service of the final Order on Dr. Wright” because his whereabouts are a mystery. COPA told the court that “Dr. Wright may either be deliberately evading service or at least is peripatetic and is very difficult to locate.” Mellor wrote that COPA’s view “seems to me to be fully justified and warrants the order which COPA seeks as to service of my final Order on Dr. Wright at his solicitors.”

After the events of the trial, Mellor’s decision to refer Wright for a perjury prosecution was apparently an easy one. “As COPA submitted, if what happened in this case does not warrant referral to the CPS, it is difficult to envisage a case which would… In advancing his false claim to be Satoshi through multiple legal actions, Dr. Wright committed ‘a most serious abuse’ of the process of the courts of the UK, Norway and the USA,” Mellor wrote.

Anti-lawsuit injunction

Mellor also approved COPA’s request for injunctions that prohibit Wright from bringing certain kinds of lawsuits based on his bitcoin-inventing claim. As the Associated Press reported, the approved injunctions are intended to prevent Wright “from threatening to sue or filing lawsuits aimed at developers.”

The COPA requests approved by Mellor were for “an anti-suit injunction preventing Dr. Wright or the other Claimants in the related claims from pursuing further proceedings in this or other jurisdictions to re-litigate his claim to be Satoshi,” and “a related order preventing him from threatening such proceedings.”

Mellor declined to issue additional orders preventing Wright from asserting legal rights as Nakamoto, preventing re-publication of Wright’s fraudulent claims, and requiring him to delete previously published statements. The judge said there was some overlap between the injunction requests that were approved and those that were not. Moreover, Wright would have difficulty convincing anyone that he invented bitcoin without violating the two approved injunctions.

Although there is a slight risk that “certain people may start to change their minds or begin to believe that Dr. Wright is Satoshi… I am inclined to the view that the effect would be small. Right-thinking people are likely to regard those assertions as hot air or empty rhetoric, even faintly ridiculous,” Mellor wrote.

Similarly, an order to delete statements “would be disproportionate” and is unnecessary because “anyone with an interest in Bitcoin will have been aware of the COPA Trial and know of the outcome,” Mellor wrote. However, the judge decided that COPA can make the requests again if it turns out to be necessary.

“I accept that my assessment may turn out to be off the mark. Furthermore, the evidence shows that whilst Dr. Wright has modified his public statements following the outcome of the COPA Trial, that may well turn out to be temporary. Dr. Wright is perfectly capable, once the dust has settled, of ramping up his public pronouncements again,” Mellor wrote.

Because of that possibility, Mellor said COPA has “permission to apply, for a period of 2 years, for any further injunctive relief they consider they can establish to be required to protect the interests of the corporate entities they represent as well as the individuals in the Bitcoin community who have suffered due to Dr. Wright’s false claim to be Satoshi.”

Google’s $500M effort to wreck Microsoft EU cloud deal failed, report says

Google tried to derail a Microsoft antitrust settlement over anticompetitive software licensing in the European Union by offering a $500 million alternative deal to the group of cloud providers behind the EU complaint, Bloomberg reported.

According to Bloomberg, Google’s offer to the Cloud Infrastructure Services Providers in Europe (CISPE) required that the group maintain its EU antitrust complaint. It came “just days” before CISPE settled with Microsoft, and it was apparently not compelling enough to stop CISPE from inking a deal with the software giant that TechCrunch noted forced CISPE to accept several compromises.

Bloomberg uncovered Google’s attempted counteroffer after reviewing confidential documents and speaking to “people familiar with the matter.” Apparently, Google sought to sway CISPE with a package worth nearly $500 million for more than five years of software licenses and about $15 million in cash.

But CISPE did not take the bait, announcing last week that an agreement was reached with Microsoft, seemingly frustrating Google.

CISPE initially raised its complaint in 2022, alleging that Microsoft was “irreparably damaging the European cloud ecosystem and depriving European customers of choice in their cloud deployments” by spiking costs to run Microsoft’s software on rival cloud services. In February, CISPE said that “any remedies and resolution must apply across the sector and to be accessible to all cloud customers in Europe.” It also promised that “any agreements will be made public.”

But the settlement reached last week excluded major rivals, including Amazon, which is a CISPE member, and Google, which is not. And despite CISPE’s promise, the terms of the deal were not published, apart from a CISPE blog roughly outlining central features that it claimed resolved the group’s concerns over Microsoft’s allegedly anticompetitive behaviors.

What is clear is that CISPE agreed to drop its complaint by taking the deal, but no one knows exactly how much Microsoft paid in a “lump sum” to cover CISPE’s legal fees for three years, TechCrunch noted. However, “two people with direct knowledge of the matter” told Reuters that Microsoft offered about $22 million.

Google has been trying to catch up with Microsoft and Amazon in the cloud market and has recently begun gaining ground. Last year, Google’s cloud operation broke even for the first time, and the company earned a surprising $900 million in profits in the first quarter of 2024, which bested analysts’ projections by more than $200 million, Bloomberg reported. For Google, the global cloud market has become a key growth area, Bloomberg noted, as potential growth opportunities in search advertising slow. Seemingly increasing regulatory pressure on Microsoft while taking a chunk of its business in the EU was supposed to be one of Google’s next big moves.

A CISPE spokesperson, Ben Maynard, told Ars that its “members were presented with alternative options to accepting the Microsoft deal,” while not disclosing the terms of the other options. “However, the members voted by a significant majority to accept the Microsoft offer, which, in their view, presented the best opportunity for the European cloud sector,” Maynard told Ars.

Neither Microsoft nor Google has commented directly on the reported counteroffer. A Google spokesperson told Bloomberg that Google “has long supported the principles of fair software licensing and that the firm was having discussions about joining CISPE, to fight anticompetitive licensing practices.” A person familiar with the matter told Ars that Google did not necessarily make the counteroffer contingent on dropping the EU complaint, but had long been exploring joining CISPE and would only do so if CISPE upheld its mission to defend fair licensing deals. Microsoft reiterated a past statement from its president, Brad Smith, confirming that Microsoft was “pleased” to resolve CISPE’s antitrust complaint.

For CISPE, the resolution may not have been perfect, but it “will enable European cloud providers to offer Microsoft applications and services on their local cloud infrastructures, meeting the demand for sovereign cloud solutions.” In 2022, CISPE Secretary-General Francisco Mingorance told Ars that although CISPE had been clear that it intended to force Microsoft to make changes allowing all cloud rivals to compete, “a key reason behind filing the complaint was to support” two smaller cloud service providers, Aruba and OVH.

Elon Musk’s X faces big EU fines as paid checkmarks are ruled deceptive

Blue checkmarks —

Paid “verification” deceives X users and violates Digital Services Act, EU says.

Image caption: Elon Musk’s X account profile displayed on a phone screen. (Credit: Getty Images | NurPhoto)

Elon Musk’s overhaul of the Twitter verification system deceives users and violates the Digital Services Act, the European Commission said today in an announcement of preliminary findings that could lead to a big financial penalty.

The social media platform now called X “designs and operates its interface for the ‘verified accounts’ with the ‘Blue checkmark’ in a way that does not correspond to industry practice and deceives users,” the EU regulator said. “Since anyone can subscribe to obtain such a ‘verified’ status, it negatively affects users’ ability to make free and informed decisions about the authenticity of the accounts and the content they interact with. There is evidence of motivated malicious actors abusing the ‘verified account’ to deceive users.”

Blue checkmarks “used to mean trustworthy sources of information,” Commissioner for Internal Market Thierry Breton said. The EC said it “informed X of its preliminary view that it is in breach of the Digital Services Act (DSA) in areas linked to dark patterns, advertising transparency and data access for researchers.”

X will have an opportunity to respond in writing. If the preliminary finding is upheld, the EC said it would adopt a non-compliance decision that “could entail fines of up to 6 percent of the total worldwide annual turnover of the provider, and order the provider to take measures to address the breach.”

A non-compliance decision may also “trigger an enhanced supervision period to ensure compliance with the measures the provider intends to take to remedy the breach,” and “periodic penalty payments to compel a platform to comply.” X is allowed to “exercise its rights of defense by examining the documents in the Commission’s investigation file and by replying in writing to the Commission’s preliminary findings,” the announcement said.

We contacted X today and will update this article if the company provides a response to the EU findings.

Advertising and data access charges

As for the second alleged violation, the EC said that “X does not comply with the required transparency on advertising, as it does not provide a searchable and reliable advertisement repository, but instead put in place design features and access barriers that make the repository unfit for its transparency purpose towards users. In particular, the design does not allow for the required supervision and research into emerging risks brought about by the distribution of advertising online.”

Thirdly, the commission said it found that “X fails to provide access to its public data to researchers in line with the conditions set out in the DSA. In particular, X prohibits eligible researchers from independently accessing its public data, such as by scraping, as stated in its terms of service. In addition, X’s process to grant eligible researchers access to its application programming interface (API) appears to dissuade researchers from carrying out their research projects or leave them with no other choice than to pay disproportionately high fees.”

In December 2023, the EC announced that Musk’s X platform was subject to the first formal investigation into possible DSA violations. X said at the time that it “remains committed to complying with the Digital Services Act and is cooperating with the regulatory process. It is important that this process remains free of political influence and follows the law.”

With today’s announcement, X is the first company to face preliminary findings of DSA non-compliance.

“The DSA has transparency at its very core, and we are determined to ensure that all platforms, including X, comply with EU legislation,” said EC competition official Margrethe Vestager.

Nearly all AT&T subscribers’ call records stolen in Snowflake cloud hack

AT&T data breach —

Six months of call and text records taken from AT&T workspace on cloud platform.

Image caption: AT&T logo displayed on a smartphone with a stock exchange index graph in the background. (Credit: Getty Images | SOPA Images)

AT&T today said a breach on a third-party cloud platform exposed the call and text records of nearly all its cellular customers. The leaked data is said to include phone numbers that AT&T subscribers communicated with, but not names.

An AT&T spokesperson confirmed to Ars that the data was exposed in the recently reported attack on “AI data cloud” provider Snowflake, which also affected Ticketmaster and many other companies. As previously reported, Snowflake was compromised by a group that obtained login credentials through information-stealing malware.

“In April, AT&T learned that customer data was illegally downloaded from our workspace on a third-party cloud platform,” AT&T announced today. AT&T said it is working with law enforcement and “understands that at least one person has been apprehended.”

AT&T said it does not believe the stolen call data has been made publicly available. “The call and text records identify the phone numbers with which an AT&T number interacted during this period, including AT&T landline (home phone) customers. It also included counts of those calls or texts and total call durations for specific days or months,” AT&T said.

Records of “nearly all” AT&T customers

The data does not include the content of calls or text messages, AT&T said.

“Based on our investigation, the compromised data includes files containing AT&T records of calls and texts of nearly all of AT&T’s cellular customers, customers of mobile virtual network operators (MVNOs) using AT&T’s wireless network, as well as AT&T’s landline customers who interacted with those cellular numbers between May 1, 2022 – October 31, 2022. The compromised data also includes records from January 2, 2023, for a very small number of customers,” AT&T said.

The carrier said the breach does not include Social Security numbers, dates of birth, other personally identifiable information, or the time stamps for calls and texts. “While the data does not include customer names, there are often ways, using publicly available online tools, to find the name associated with a specific telephone number,” an AT&T filing with the Securities and Exchange Commission said.

AT&T’s SEC filing said the “records identify the telephone numbers with which an AT&T or MVNO wireless number interacted during these periods, including telephone numbers of AT&T wireline customers and customers of other carriers, counts of those interactions, and aggregate call duration for a day or month. For a subset of records, one or more cell site identification number(s) are also included.”

AT&T said it has “clos[ed] off the point of unlawful access” and is notifying current and former customers of the breach. AT&T’s current and former customers can obtain the data that was compromised, and details on how to make those data requests are available on AT&T’s website.

FBI and FCC comment

The Federal Bureau of Investigation said AT&T and law enforcement agreed to delay public reporting of the incident when the investigation began in April. The FBI provided this statement to Ars:

Shortly after identifying a potential breach to customer data and before making its materiality decision, AT&T contacted the FBI to report the incident. In assessing the nature of the breach, all parties discussed a potential delay to public reporting under Item 1.05(c) of the SEC Rule, due to potential risks to national security and/or public safety. AT&T, FBI, and DOJ worked collaboratively through the first and second delay process, all while sharing key threat intelligence to bolster FBI investigative equities and to assist AT&T’s incident response work.

The FBI declined to provide any information on the person who was apprehended. The Federal Communications Commission said it has “an ongoing investigation into the AT&T breach and we’re coordinating with our law enforcement partners.”

An AT&T spokesperson told Ars that the Snowflake breach is unrelated to another recent leak involving the data of 73 million current and former subscribers.

Giant salamander species found in what was thought to be an icy ecosystem

Feeding time —

Found after its kind were thought extinct, and where it was thought to be too cold.

Image caption: The fossil’s head and a portion of the vertebral column. (Credit: C. Marsicano)

Gaiasia jennyae, a newly discovered freshwater apex predator with a body length reaching 4.5 meters, lurked in the swamps and lakes around 280 million years ago. Its wide, flattened head had powerful jaws full of huge fangs, ready to capture any prey unlucky enough to swim past.

The problem is, to the best of our knowledge, it shouldn’t have been that large, should have been extinct tens of millions of years before the time it apparently lived, and shouldn’t have been found in northern Namibia. “Gaiasia is the first really good look we have at an entirely different ecosystem we didn’t expect to find,” says Jason Pardo, a postdoctoral fellow at Field Museum of Natural History in Chicago. Pardo is co-author of a study on the Gaiasia jennyae discovery recently published in Nature.

Common ancestry

“Tetrapods were the animals that crawled out of the water around 380 million years ago, maybe a little earlier,” Pardo explains. These ancient creatures, also known as stem tetrapods, were the common ancestors of modern reptiles, amphibians, mammals, and birds. “Those animals lived up to what we call the end of Carboniferous, about 370–300 million years ago. Few made it through, and they lasted longer, but they mostly went extinct around 370 million years ago,” he adds.

This is why the discovery of Gaiasia jennyae in the 280 million-year-old rocks of Namibia was so surprising. Not only wasn’t it extinct when the rocks it was found in were laid down, but it was dominating its ecosystem as an apex predator. By today’s standards, it was like stumbling upon a secluded island hosting animals that should have been dead for 70 million years, like a living, breathing T. rex.

“The skull of gaiasia we have found is about 67 centimeters long. We also have a front end of her upper body. We know she was at minimum 2.5 meters long, probably 3.5, 4.5 meters—big head and a long, salamander-like body,” says Pardo. He told Ars that gaiasia was a suction feeder: she opened her jaws under water, which created a vacuum that sucked her prey right in. But the large, interlocked fangs reveal that a powerful bite was also one of her weapons, probably used to hunt bigger animals. “We suspect gaiasia fed on bony fish, freshwater sharks, and maybe even other, smaller gaiasia,” says Pardo, suggesting it was a rather slow, ambush-based predator.

But considering where it was found, the fact that it had enough prey to ambush is perhaps even more of a shocker than the animal itself.

Location, location, location

“Continents were organized differently 270–280 million years ago,” says Pardo. Back then, one megacontinent called Pangea had already broken into two supercontinents. The northern supercontinent called Laurasia included parts of modern North America, Russia, and China. The southern supercontinent, the home of gaiasia, was called Gondwana, which consisted of today’s India, Africa, South America, Australia, and Antarctica. And Gondwana back then was pretty cold.

“Some researchers hypothesize that the entire continent was covered in glacial ice, much like we saw in North America and Europe during the ice ages 10,000 years ago,” says Pardo. “Others claim that it was more patchy—there were those patches where ice was not present,” he adds. Still, 280 million years ago, northern Namibia sat at around 60 degrees south latitude—roughly where the northernmost reaches of Antarctica are today.

“Historically, we thought tetrapods [of that time] were living much like modern crocodiles. They were cold-blooded, and if you are cold-blooded the only way to get large and maintain activity would be to be in a very hot environment. We believed such animals couldn’t live in colder environments. Gaiasia shows that it is absolutely not the case,” Pardo claims. And this overturns much of what we thought we knew about life on Earth in gaiasia’s time.

Arm tweaks AMD’s FSR to bring battery-saving GPU upscaling to phones and tablets

situation: there are 14 competing standards —

Arm “Accuracy Super Resolution” is optimized for power use and integrated GPUs.

Image caption: An Arm sample image meant to show off its new “Accuracy Super Resolution” upscaling tech. (Credit: Arm)

Some of the best Arm processors come from companies like Apple and Qualcomm, which license Arm’s processor instruction set but create their own custom or semi-custom CPU designs. But Arm continues to plug away on its own CPU and GPU architectures and related technologies, and the company has announced that it’s getting into the crowded field of graphics upscaling technology.

Arm’s Accuracy Super Resolution (ASR) is a temporal upscaler that is based on AMD’s open source FidelityFX Super Resolution 2, which Arm says allows developers to “benefit from the familiar API and configuration options.” (This AMD presentation from GDC 2023 gets into some of the differences between different kinds of upscalers.)

AMD’s FSR and Nvidia’s DLSS on gaming PCs are mostly sold as a way to boost graphical fidelity—increasing frame rates beyond 60 fps or rendering “4K” images on graphics cards that are too slow to do those things natively, for example. But since Arm devices are still (mostly, for now) phones and tablets, Arm is leaning into the potential power savings that are possible with lower GPU use. A less-busy GPU also runs cooler, reducing the likelihood of thermal throttling; Arm mentions reduced throttling as a benefit of ASR, though it doesn’t say how much of ASR’s performance advantage over FSR is attributable to reduced throttling.

“Using [ASR] rendered high-quality results at a stable, low temperature,” writes Arm Director for Ecosystem Strategy Peter Hodges. “Rendering at a native resolution inevitably led to undesirable thermal throttling, which in games can ruin the user experience and shorten engagement.”

Why not just use FSR2 without modification? Arm claims that the ASR upscaling tech has been tuned to reduce GPU usage and to run well on devices without a ton of memory bandwidth—think low-power mobile GPUs with integrated graphics rather than desktop-class graphics cards. ASR’s GPU use is as little as one-third of FSR2’s at the same target resolutions and scaling factors. Arm also claims that ASR delivers roughly 20 to 40 percent better frame rates than FSR2 on Arm devices, depending on the settings you’re using.
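
Most of that saving is ultimately about shading fewer pixels. A quick back-of-the-envelope shows the scale of the win (pure pixel counting; it ignores the upscaler's own per-frame cost, memory traffic, and thermal effects):

```python
# Rough pixel-count arithmetic for render upscaling (illustrative only).
target = (2400, 1080)  # a common phone display resolution
scale = 0.5            # render at half resolution per axis, then upscale

render = (int(target[0] * scale), int(target[1] * scale))
shaded = render[0] * render[1]
native = target[0] * target[1]

print(f"native pixels:   {native:,}")   # 2,592,000
print(f"rendered pixels: {shaded:,}")   # 648,000
print(f"shading work:    {shaded / native:.0%} of native")  # 25%
```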

  • Arm also says that reduced GPU usage when using ASR can lead to lower heat and improved battery life. (Credit: Arm)

  • Arm says that ASR runs faster and uses less power than FSR on the same mobile hardware. (Credit: Arm)

Arm says it used “a commercial mobile device that features an Arm Immortalis-G720 GPU” for its performance testing and that it worked with MediaTek to corroborate its power consumption numbers “using a Dimensity 9300 handset.”

When the ASR spec is released, it will be up to OS makers and game developers to implement it. Apple will likely stick with its own MetalFX upscaling technology—also derived from AMD’s FSR, for what that’s worth. Microsoft is pushing “Automatic Super Resolution” on Arm devices while also attempting to develop a vendor-agnostic upscaling API in “DirectSR.” Qualcomm announced Snapdragon Game Super Resolution a little over a year ago.

Arm’s upscaler has the benefit of being hardware-agnostic and open source (Arm says it “want[s] to share [ASR] with the developer community under an MIT open-source license”), so other upscalers can benefit from its improvements. Qualcomm’s upscaler, meanwhile, is a simpler spatial upscaler a la AMD’s first-generation FSR algorithm, so Arm’s could end up producing superior image quality on the same GPUs.

We’re undeniably getting into that one xkcd comic about the proliferation of standards territory here, but it’s at least interesting to see different companies using graphics upscaling technology to solve problems other than “make games look nicer.”

Listing image by Arm

Republicans angry that ISPs receiving US grants must offer low-cost plans

Image caption: Illustration of ones and zeroes overlaid on a US map. (Credit: Getty Images | Matt Anderson Photography)

Republican lawmakers are fighting a Biden administration attempt to bring cheap broadband service to low-income people, claiming it is an illegal form of rate regulation. GOP leaders of the House Energy and Commerce Committee announced an investigation into the National Telecommunications and Information Administration (NTIA), which is administering the $42.45 billion Broadband Equity, Access, and Deployment (BEAD) program that was approved by Congress in November 2021.

“States have reported that the NTIA is directing them to set rates and conditioning approval of initial proposals on doing so. This undoubtedly constitutes rate regulation by the NTIA,” states a letter to the NTIA from Committee Chair Cathy McMorris Rodgers (R-Wash.), Subcommittee on Communications and Technology Chair Bob Latta (R-Ohio), and Subcommittee on Oversight and Investigations Chair Morgan Griffith (R-Va.).

As evidence, the letter points to a statement by Virginia that described feedback received from the NTIA. The federal agency told Virginia that “the low-cost option must be established in the Initial proposal as an exact price or formula.”

The Republicans said anecdotal evidence suggests “the NTIA may be evaluating initial proposals counter to Congressional intent and in violation of the law.” They asked the agency for all communications about the grants between NTIA officials and state broadband offices.

The US law that ordered NTIA to distribute the money requires that Internet providers receiving federal funds offer at least one “low-cost broadband service option for eligible subscribers.” But the law also says the NTIA may not “regulate the rates charged for broadband service.”

We’re following the law, agency says

An NTIA spokesperson told Ars that the agency is working to implement the law’s requirement that grant recipients offer an affordable service tier to qualifying low-income households. “We’ve received the letter and will respond through the appropriate channels. NTIA is working to implement BEAD in a manner that is faithful to the statute,” the agency said.

NTIA Administrator Alan Davidson tried to deflect Republican criticism of the low-cost requirements at a hearing in May. He said that requiring a low-cost option, as the law demands, is not the same as regulating broadband rates.

“The statute requires that there be a low-cost service option,” Davidson told Latta at the hearing, according to Broadband Breakfast. “We do not believe the states are regulating rates here. We believe that this is a condition to get a federal grant. Nobody’s requiring a service provider to follow these rates, people do not have to participate in the program.”

The NTIA needs to evaluate specific proposals to determine whether plans are low-cost, he said. “You have to be able to understand what is affordable,” Davidson was quoted as saying. “Every state has to submit a low-cost option that we can understand is affordable. When states do that, we will approve their plans.”
