Author name: Kris Guyer

Senate panel votes 20–0 for holding CEO of “health care terrorists” in contempt

Not above the law —

After he rejected a subpoena, contempt charges against de la Torre go before the Senate.

Ralph de la Torre, founder and chief executive officer of Steward Health Care System LLC, speaks during a summit in New York on Tuesday, Oct. 25, 2016.

A Senate committee on Thursday voted overwhelmingly to hold the wealthy CEO of a failed hospital chain in civil and criminal contempt for rejecting a rare subpoena from the lawmakers.

In July, the Senate Committee on Health, Education, Labor, and Pensions (HELP) subpoenaed Steward Health Care CEO Ralph de la Torre to testify before the lawmakers on the deterioration and eventual bankruptcy of the system, which included more than 30 hospitals across eight states. The resulting dire conditions in the hospitals, described as providing “third-world medicine,” allegedly led to the deaths of at least 15 patients and imperiled more than 2,000 others.

The committee, chaired by Senator Bernie Sanders (I-Vt.), highlighted that amid the system’s collapse, de la Torre was paid at least $250 million, bought a $40 million yacht, and owned a $15 million luxury fishing boat. Meanwhile, Steward executives jetted around on two private jets collectively worth $95 million.

De la Torre initially agreed to appear at the September 12 hearing but backed out the week beforehand. He claimed, through his lawyers, that a federal order stemming from Steward’s bankruptcy case prohibited him from discussing the hospital system’s situation amid reorganization and settlement efforts. The HELP committee rejected that explanation, but de la Torre was nevertheless a no-show at the hearing.

In a 20–0 bipartisan vote Thursday, the HELP committee held de la Torre in civil and criminal contempt, with only Sen. Rand Paul (R-Ky.) abstaining. It is the first time in modern history the committee has issued civil and criminal contempt resolutions. The charges will now go before the full Senate for a vote.

If upheld by the full Senate, the civil enforcement will direct the Senate’s legal counsel to bring a federal civil suit against de la Torre in order to force him to comply with the subpoena and testify before the HELP Committee. The criminal contempt charge would refer the case to the US Attorney for the District of Columbia to criminally prosecute de la Torre for failing to comply with the subpoena. If the trial proceeds and de la Torre is convicted, the tarnished CEO could face a fine of up to $100,000 and a prison sentence of up to 12 months.

On Wednesday, the day before the committee voted on the contempt charges, a lawyer for de la Torre blasted the senators and claimed that testifying at the hearing would have violated his Fifth Amendment rights, according to the Boston Globe.

In a statement Thursday, Sanders slammed de la Torre, saying that his wealth and expensive lawyers did not make him above the law. “If you defy a Congressional subpoena, you will be held accountable no matter who you are or how well-connected you may be,” he said.

How to stop LinkedIn from training AI on your data

Better to beg for forgiveness than ask for permission? —

LinkedIn limits opt-outs to future training, warns AI models may spout personal data.

LinkedIn admitted Wednesday that it has been training its own AI on many users’ data without seeking consent. Now there’s no way for users to opt out of training that has already occurred, as LinkedIn limits opt-out to only future AI training.

In a blog detailing updates coming on November 20, LinkedIn general counsel Blake Lawit confirmed that LinkedIn’s user agreement and privacy policy will be changed to better explain how users’ personal data powers AI on the platform.

Under the new privacy policy, LinkedIn now informs users that “we may use your personal data… [to] develop and train artificial intelligence (AI) models, develop, provide, and personalize our Services, and gain insights with the help of AI, automated systems, and inferences, so that our Services can be more relevant and useful to you and others.”

An FAQ explained that the personal data could be collected any time a user interacts with generative AI or other AI features, as well as when a user composes a post, changes their preferences, provides feedback to LinkedIn, or uses the platform for any amount of time.

That data is then stored until the user deletes the AI-generated content. LinkedIn recommends that users who want to delete, or request the deletion of, data collected about past LinkedIn activities use its data access tool.

LinkedIn’s AI models powering generative AI features “may be trained by LinkedIn or another provider,” such as Microsoft, which provides some AI models through its Azure OpenAI service, the FAQ said.

A potentially major privacy risk for users, LinkedIn’s FAQ noted, is that users who “provide personal data as an input to a generative AI powered feature” could end up seeing their “personal data being provided as an output.”

LinkedIn claims that it “seeks to minimize personal data in the data sets used to train the models,” relying on “privacy enhancing technologies to redact or remove personal data from the training dataset.”

While Lawit’s blog avoids clarifying whether data already collected can be removed from AI training data sets, the FAQ affirmed that users who were automatically opted in to sharing personal data for AI training can only opt out of the invasive data collection “going forward.”

Opting out “does not affect training that has already taken place,” the FAQ said.

A LinkedIn spokesperson told Ars that it “benefits all members” to be opted in to AI training “by default.”

“People can choose to opt out, but they come to LinkedIn to be found for jobs and networking and generative AI is part of how we are helping professionals with that change,” LinkedIn’s spokesperson said.

By allowing opt-outs of future AI training, LinkedIn’s spokesperson additionally claimed that the platform is giving “people using LinkedIn even more choice and control when it comes to how we use data to train our generative AI technology.”

How to opt out of AI training on LinkedIn

Users can opt out of AI training by navigating to the “Data privacy” section in their account settings, then turning off the option allowing collection of “data for generative AI improvement” that LinkedIn otherwise automatically turns on for most users.

The only exception is for users in the European Economic Area or Switzerland, who are protected by stricter privacy laws that require platforms either to obtain consent before collecting personal data or to justify the collection as a legitimate interest. Those users will not see an option to opt out, because they were never opted in, LinkedIn repeatedly confirmed.

Additionally, users can “object to the use of their personal data for training” generative AI models not used to generate LinkedIn content—such as models used for personalization or content moderation purposes, The Verge noted—by submitting the LinkedIn Data Processing Objection Form.

Last year, LinkedIn shared AI principles, promising to take “meaningful steps to reduce the potential risks of AI.”

One risk that the updated user agreement specified is that using LinkedIn’s generative features to help populate a profile or generate suggestions when writing a post could generate content that “might be inaccurate, incomplete, delayed, misleading or not suitable for your purposes.”

Users are advised that they are responsible for avoiding sharing misleading information or otherwise spreading AI-generated content that may violate LinkedIn’s community guidelines. And users are additionally warned to be cautious when relying on any information shared on the platform.

“Like all content and other information on our Services, regardless of whether it’s labeled as created by ‘AI,’ be sure to carefully review before relying on it,” LinkedIn’s user agreement says.

Back in 2023, LinkedIn claimed that it would always “seek to explain in clear and simple ways how our use of AI impacts people,” because users’ “understanding of AI starts with transparency.”

If legislation like the European Union’s AI Act and the GDPR, with its strong privacy protections, were enacted elsewhere, unsuspecting users would face fewer shocks. That would put all companies and their users on equal footing when it comes to training AI models and would result in fewer nasty surprises and angry customers.

Ever wonder how crooks get the credentials to unlock stolen phones?

BUSTED —

iServer provided a simple service for phishing credentials to unlock phones.

A coalition of law-enforcement agencies said it shut down a service that facilitated the unlocking of more than 1.2 million stolen or lost mobile phones so they could be used by someone other than their rightful owner.

The service was part of iServer, a phishing-as-a-service platform that has been operating since 2018. The Argentina-based iServer sold access to a platform that offered a host of phishing-related services through email, texts, and voice calls. One of the specialized services offered was designed to help people in possession of large numbers of stolen or lost mobile devices to obtain the credentials needed to bypass protections such as the lost mode for iPhones, which prevents a lost or stolen device from being used without entering its passcode.

iServer's phishing-as-a-service model.

Group-IB

Catering to low-skilled thieves

An international operation coordinated by Europol’s European Cybercrime Center said it arrested the Argentinian national who was behind iServer and identified more than 2,000 “unlockers” who had enrolled in the phishing platform over the years. Investigators ultimately found that the criminal network had been used to unlock more than 1.2 million mobile phones. Officials said they also identified 483,000 phone owners who had received messages phishing for credentials for their lost or stolen devices.

According to Group-IB, the security firm that discovered the phone-unlocking racket and reported it to authorities, iServer provided a web interface that allowed low-skilled unlockers to phish the rightful device owners for the device passcodes, user credentials from cloud-based mobile platforms, and other personal information.

Group-IB wrote:

During its investigations into iServer’s criminal activities, Group-IB specialists also uncovered the structure and roles of criminal syndicates operating with the platform: the platform’s owner/developer sells access to “unlockers,” who in their turn provide phone unlocking services to other criminals with locked stolen devices. The phishing attacks are specifically designed to gather data that grants access to physical mobile devices, enabling criminals to acquire users’ credentials and local device passwords to unlock devices or unlink them from their owners. iServer automates the creation and delivery of phishing pages that imitate popular cloud-based mobile platforms, featuring several unique implementations that enhance its effectiveness as a cybercrime tool.

Unlockers obtain the necessary information for unlocking the mobile phones, such as IMEI, language, owner details, and contact information, often accessed through lost mode or via cloud-based mobile platforms. They utilize phishing domains provided by iServer or create their own to set up a phishing attack. After selecting an attack scenario, iServer creates a phishing page and sends an SMS with a malicious link to the victim.

An example phishing message sent.

When successful, iServer customers would receive the credentials through the web interface. The customers could then unlock a phone to disable the lost mode so the device could be used by someone new.

Ultimately, criminals received the stolen and validated credentials through the iServer web interface, enabling them to unlock a phone, turn off “Lost mode” and untie it from the owner’s account.

To better camouflage the ruse, iServer often disguised phishing pages as belonging to cloud-based services.

Phishing message asking for passcode.

Phishing message masquerades as a cloud-based service with a map once passcode is entered.

Besides the arrest, authorities also seized the iserver.com domain.

The iServer site as it appeared before the takedown.

The iServer website after the takedown.

The takedown and arrests occurred from September 10–17 in Spain, Argentina, Chile, Colombia, Ecuador, and Peru. Authorities in those countries began investigating the phishing service in 2022.

Due to AI fakes, the “deep doubt” era is here

Given the flood of photorealistic AI-generated images washing over social media networks like X and Facebook these days, we’re seemingly entering a new age of media skepticism: the era of what I’m calling “deep doubt.” While questioning the authenticity of digital content stretches back decades—and analog media long before that—easy access to tools that generate convincing fake content has led to a new wave of liars using AI-generated scenes to deny real documentary evidence. Along the way, people’s existing skepticism toward online content from strangers may be reaching new heights.

Deep doubt is skepticism of real media that stems from the existence of generative AI. This manifests as broad public skepticism toward the veracity of media artifacts, which in turn leads to a notable consequence: People can now more credibly claim that real events did not happen and suggest that documentary evidence was fabricated using AI tools.

The concept behind “deep doubt” isn’t new, but its real-world impact is becoming increasingly apparent. Since the term “deepfake” first surfaced in 2017, we’ve seen a rapid evolution in AI-generated media capabilities. This has led to recent examples of deep doubt in action, such as conspiracy theorists claiming that President Joe Biden has been replaced by an AI-powered hologram and former President Donald Trump’s baseless accusation in August that Vice President Kamala Harris used AI to fake crowd sizes at her rallies. And on Friday, Trump cried “AI” again at a photo that shows him with E. Jean Carroll, a writer who successfully sued him for sexual assault, and that contradicts his claim of never having met her.

Legal scholars Danielle K. Citron and Robert Chesney foresaw this trend years ago, coining the term “liar’s dividend” in 2019 to describe the consequence of deep doubt: deepfakes being weaponized by liars to discredit authentic evidence. But whereas deep doubt was once a hypothetical academic concept, it is now our reality.

The rise of deepfakes, the persistence of doubt

Doubt has been a political weapon since ancient times. This modern AI-fueled manifestation is just the latest evolution of a tactic where the seeds of uncertainty are sown to manipulate public opinion, undermine opponents, and hide the truth. AI is the newest refuge of liars.

Over the past decade, the rise of deep-learning technology has made it increasingly easy for people to craft false or modified pictures, audio, text, or video that appear to be non-synthesized organic media. Deepfakes were named after a Reddit user going by the name “deepfakes,” who shared AI-faked pornography on the service, swapping out the face of a performer with the face of someone else who wasn’t part of the original recording.

In the 20th century, one could argue that a certain part of our trust in media produced by others was a result of how expensive and time-consuming it was, and the skill it required, to produce documentary images and films. Even texts required a great deal of time and skill. As the deep doubt phenomenon grows, it will erode this 20th-century media sensibility. But it will also affect our political discourse, legal systems, and even our shared understanding of historical events that rely on that media to function—we rely on others to get information about the world. From photorealistic images to pitch-perfect voice clones, our perception of what we consider “truth” in media will need recalibration.

In April, a panel of federal judges highlighted the potential for AI-generated deepfakes to not only introduce fake evidence but also cast doubt on genuine evidence in court trials. The concern emerged during a meeting of the US Judicial Conference’s Advisory Committee on Evidence Rules, where the judges discussed the challenges of authenticating digital evidence in an era of increasingly sophisticated AI technology. Ultimately, the judges decided to postpone making any AI-related rule changes, but their meeting shows that the subject is already being considered by American judges.

Homeopathic company refuses to recall life-threatening nasal spray, FDA says

Dangerous —

Consumers should stop using SnoreStop, FDA says.

The maker of a homeopathic nasal spray with a history of contamination is refusing to recall its product after the Food and Drug Administration once again found evidence of dangerous microbial contamination.

In a warning Thursday, the FDA advised consumers to immediately stop using SnoreStop nasal spray—made by Green Pharmaceuticals—because it may contain microbes that, when sprayed directly into nasal cavities, can cause life-threatening infections. The FDA highlighted the risk to people with compromised immune systems and to children, since SnoreStop is marketed to kids as young as age 5.

According to the regulator, an FDA inspection in April uncovered laboratory test results showing that a batch of SnoreStop contained “significant microbial contamination.” But, instead of discarding the batch, FDA inspectors found evidence that Green Pharmaceuticals had repackaged some of the contaminated lot and distributed it as single spray bottles or as part of a starter kit.

In response, Green Pharmaceuticals destroyed the remainder of the tainted lot and stopped selling the nasal spray on its website. (It is still selling its SnoreStop throat spray, chewable tablets, and pet products, which include a nasal spray.) But, according to the FDA, it refused to recall products that may contain product from the tainted lot. The agency said it “reiterated its recall recommendation multiple times” in August and September. But, “To date, the company has not taken action to recall this potentially dangerous product from the market.”

Ars has reached out to Green Pharmaceuticals for comment but has not received a response.

Tainted history

This isn’t new territory for the company. In 2022, Green Pharmaceuticals got warnings from the FDA and issued a recall due to microbial contamination in its SnoreStop nasal spray. In June 2022, the FDA held a conference with the company over findings of bacteria and fungi in the spray. Some of the results suggested high levels of microbial contamination. “The individual sample results varied between 420 and up to 6,200 colony forming units (CFU)/mL for total aerobic microbial count… and between 30 and up to 3,800 CFU/mL for total yeast and mold counts,” the FDA reported in a December 2022 warning letter sent after the fact.

The FDA also noted finding the specific bacterial pathogen Providencia rettgeri, an opportunistic germ that can lurk in health care settings. It’s most often linked to urinary tract infections, but it can also cause pneumonia, brain and spinal cord infections, heart infections, and wound and bloodstream infections in vulnerable people, according to a 2018 review.

“The high bioburden in conjunction with the route of administration with this drug product poses a high risk of harm to vulnerable patients, including children,” the FDA wrote in its warning letter. Green Pharmaceuticals recalled SnoreStop in June 2022, after its meeting with the FDA.

Dangerous dilutions

Aside from the gross microbial contamination, the FDA also noted in its letter that SnoreStop appears to be an unapproved new drug, illegally claiming to treat a disease without FDA approval. SnoreStop is a homeopathic product, meaning it is based on pseudoscience. Homeopaths falsely believe that if substances, including poisons, cause the same symptoms as illnesses, the substance can cure those illnesses (“like cures like”). The reason the products don’t poison users is because homeopaths also believe that diluting substances into oblivion enhances their curative properties (“law of infinitesimals”). Some dilutions are so extreme that not a single molecule of the starting substance is present in homeopathic products. And some homeopaths have argued that water molecules can have a “memory” of the substance, which, they contend, explains how the products work.

SnoreStop is said to contain dilutions of: Nux vomica (a natural source of strychnine), belladonna (deadly nightshade), Ephedra vulgaris (a source of the drug ephedrine), Hydrastis canadensis (a toxic herb), Kali bichromicum (potassium dichromate, which is considered toxic and carcinogenic), Teucrium marum (a plant similar to catnip), and Histaminum hydrochloricum (histamine dihydrochloride).

Consumer advocates have worked for years to try to get homeopathic products off of store shelves, where they’re sometimes sold alongside evidence-based, FDA-approved over-the-counter medicines. While homeopathic products are mostly harmless and ineffective—offering placebo effects at best—they can turn deadly when manufacturers mishandle the dilutions. For instance, in 2016, the FDA linked improperly diluted belladonna in homeopathic teething products to the deaths of 10 infants and the poisonings of more than 400 others.

Landmark AI deal sees Hollywood giant Lionsgate provide library for AI training

The silicon screen —

Runway deal will create a Lionsgate AI video generator, but not everyone is happy.

On Wednesday, AI video synthesis firm Runway and entertainment company Lionsgate announced a partnership to create a new AI model trained on Lionsgate’s vast film and TV library. The deal will feed Runway legally clear training data and will also reportedly provide Lionsgate with tools to enhance content creation while potentially reducing production costs.

Lionsgate, known for franchises like John Wick and The Hunger Games, sees AI as a way to boost efficiency in content production. Michael Burns, Lionsgate’s vice chair, stated in a press release that AI could help develop “cutting edge, capital efficient content creation opportunities.” He added that some filmmakers have shown enthusiasm about potential applications in pre- and post-production processes.

Runway plans to develop a custom AI model using Lionsgate’s proprietary content portfolio. The model will be exclusive to Lionsgate Studios, allowing filmmakers, directors, and creative staff to augment their work. While specifics remain unclear, the partnership marks the first major collaboration between Runway and a Hollywood studio.

“We’re committed to giving artists, creators and studios the best and most powerful tools to augment their workflows and enable new ways of bringing their stories to life,” said Runway co-founder and CEO Cristóbal Valenzuela in a press release. “The history of art is the history of technology and these new models are part of our continuous efforts to build transformative mediums for artistic and creative expression; the best stories are yet to be told.”

The quest for legal training data

Generative AI models are master imitators, and video synthesis models like Runway’s latest Gen-3 Alpha are no exception. The companies that create them must amass a great deal of existing video (and still image) samples to analyze, allowing the resulting AI models to re-synthesize that information into new video generations, guided by text descriptions called prompts. And wherever that training data is lacking, it can result in unusual generations, as we saw in our hands-on evaluation of Gen-3 Alpha in July.

However, in the past, AI companies have gotten into legal trouble for scraping vast quantities of media without permission. In fact, Runway is currently the defendant in a class-action lawsuit that alleges copyright infringement for using video data obtained without permission to train its video synthesis models. While companies like OpenAI have claimed this scraping process is “fair use,” US courts have not yet definitively ruled on the practice. With other potential legal challenges ahead, it makes sense from Runway’s perspective to reach out and sign deals for training data that is completely in the clear.

Even if the training data becomes fully legal and licensed, different elements of the entertainment industry view generative AI on a spectrum that seems to range between fascination and horror. The technology’s ability to rapidly create images and video based on prompts may attract studios looking to streamline production. However, it raises polarizing concerns among unions about job security, actors and musicians about likeness misuse and ethics, and studios about legal implications.

So far, news of the deal has not been received kindly among vocal AI critics found on social media. On X, filmmaker and AI critic Joe Russo wrote, “I don’t think I’ve ever seen a grosser string of words than: ‘to develop cutting-edge, capital-efficient content creation opportunities.'”

Film concept artist Reid Southen shared a similar negative take on X: “I wonder how the directors and actors of their films feel about having their work fed into the AI to make a proprietary model. As an artist on The Hunger Games? I’m pissed. This is the first step in trying to replace artists and filmmakers.”

It’s a fear that we will likely hear more about in the future as AI video synthesis technology grows more capable—and potentially becomes adopted as a standard filmmaking tool. As studios explore AI applications despite legal uncertainties and labor concerns, partnerships like the Lionsgate-Runway deal may shape the future of content creation in Hollywood.

Massive China-state IoT botnet went undetected for four years—until now

OVER 260,000 PWNED —

75% of infected devices were located in homes and offices in North America and Europe.

The FBI has dismantled a massive network of compromised devices that Chinese state-sponsored hackers have used for four years to mount attacks on government agencies, telecoms, defense contractors, and other targets in the US and Taiwan.

The botnet was made up primarily of small office and home office routers, surveillance cameras, network-attached storage, and other Internet-connected devices located all over the world. Over the past four years, US officials said, 260,000 such devices have cycled through the sophisticated network, which is organized in three tiers that allow the botnet to operate with efficiency and precision. At its peak in June 2023, Raptor Train, as the botnet is named, consisted of more than 60,000 commandeered devices, according to researchers from Black Lotus Labs, making it the largest China state botnet discovered to date.

Burning down the house

Raptor Train is the second China state-operated botnet US authorities have taken down this year. In January, law enforcement officials covertly issued commands to disinfect Internet of Things devices that hackers backed by the Chinese government had taken over without the device owners’ knowledge. The Chinese hackers, part of a group tracked as Volt Typhoon, used the botnet for more than a year as a platform to deliver exploits that burrowed deep into the networks of targets of interest. Because the attacks appear to originate from IP addresses with good reputations, they are subjected to less scrutiny from network security defenses, making the bots an ideal delivery proxy. Russia-state hackers have also been caught assembling large IoT botnets for the same purposes.

An advisory jointly issued Wednesday by the FBI, the Cyber National Mission Force, and the National Security Agency said that China-based company Integrity Technology Group controlled and managed Raptor Train. Officials said the company has ties to the People’s Republic of China and has also used IP addresses from the state-controlled China Unicom Beijing Province Network to control and manage the botnet. Researchers and law enforcement track the China-state group that worked with Integrity Technology as Flax Typhoon. More than half of the infected Raptor Train devices were located in North America, and another 25 percent were in Europe.

Raptor Train concentration by continent.

IC3.gov

Raptor Train concentration by country.

“Flax Typhoon was targeting critical infrastructure across the US and overseas, everyone from corporations and media organizations to universities and government agencies,” FBI Director Christopher Wray said Wednesday at the Aspen Cyber Summit. “Like Volt Typhoon, they used Internet-connected devices, this time hundreds of thousands of them, to create a botnet that helped them compromise systems and exfiltrate confidential data.” He added: “Flax Typhoon’s actions caused real harm to its victims who had to devote precious time to clean up the mess.”

14 dead as Hezbollah walkie-talkies explode in second, deadlier attack

Day 2 —

People aren’t sure what devices will detonate next.

Wireless communication devices have exploded again today across Lebanon in a second attack even deadlier than yesterday’s explosion of thousands of Hezbollah pagers. According to Lebanon’s Ministry of Health, the new attack has killed at least 14 more people and injured more than 450.

Today’s attack targeted two-way radios (“walkie-talkies”) issued to Hezbollah members. The radios exploded in the middle of the day, with at least one going off during a funeral for people killed in yesterday’s pager attacks. A New York Times report on that funeral described the moment:

When the blast went off, a brief, eerie stillness descended on the crowd. Mourners looked at one another in disbelief. The religious chants being broadcast over a loudspeaker abruptly stopped.

Then panic set in. People started scrambling in the streets, hiding in the lobbies of nearby buildings, and shouting at one another, “Turn off your phone! Take out the battery!” Soon a voice on the loudspeaker at the funeral urged everyone to do the same…

One woman, Um Ibrahim, stopped a reporter in the middle of the confusion and begged to use the reporter’s cellphone to call her children. The woman dialed a number with her hands shaking, then screamed into the phone, “Turn off your phones now!”

The story appears to capture the current mood in Lebanon, where no one seems quite sure what will explode next. While today’s attack against walkie-talkies is well-attested, various unconfirmed reports suggest that people fear an explosion from just about anything with a battery.

At the time of publication, The Associated Press was leading its coverage of the attack with the line, “Walkie-talkies and solar equipment exploded in Beirut and multiple parts of Lebanon on Wednesday.” It later added that “a girl was hurt in the south when a solar energy system blew up, the state news agency reported.” Whether this actually happened, or whether it was in any way connected to the attacks, remains unclear.

The Jerusalem Post rounded up a slew of rumors making the rounds in the region, some far less plausible than others:

Unofficial reports claimed that iPhones, video cameras, IC-V82 radios, and other devices also detonated.

According to unconfirmed reports, Hezbollah has told its operatives to distance themselves from communication devices.

Unofficial reports also claimed that Hezbollah told its members to dispose of devices containing a lithium battery or that are connected to the internet.

Additional unconfirmed reports claimed that lithium batteries for solar energy storage had detonated and that some houses were on fire.

Yesterday, multiple news outlets reported that the pager attacks had been caused by explosives built into the devices, likely as part of an Israeli supply chain attack.

Today, similar reporting suggests the same kind of attack was used against the two-way radios. Axios cited two of its own sources who confirmed that the “walkie-talkies were booby-trapped in advance by Israeli intelligence services and then delivered to Hezbollah as part of the militia’s emergency communications system,” adding that “the decision to conduct the second attack was also driven by the assessment that Hezbollah’s investigation into the pager explosions would likely expose the security breach in the walkie-talkies.”

Age of Mythology: Retold is surprisingly playable with a controller

I hope you like radial menus, because you'll be looking at a lot of them.

Age of Mythology: Retold brings many of the advancements you’d expect from a reboot of both the increasingly dated 2002 original and its previous refresh, 2014’s Extended Edition (which is still perfectly playable and available on Steam). The newest version of this real-time strategy classic comes with the requisite improvements in graphics and user interface, making the whole game much easier to look at and parse at a glance. And while the updated voice acting isn’t going to win any awards, neither is the stilted, bare-bones dialogue those actors are working with (which seems faithful to the original game, for better or worse).

But Retold does add one thing that I wasn’t really expecting in a modern real-time strategy game—full support for a handheld controller. Developers have been trying to make RTS games work without the traditional mouse and keyboard since the days of SNES Populous and Starcraft 64, usually with limited success. Microsoft hasn’t given up on the dream, though, fully integrating controller support for Age of Mythology: Retold into both the PC version (which we sampled) and, obviously, the Xbox Series X|S release.

The result is definitely the best version of an RTS controller interface that I’ve tried and proof that a modern controller can be a perfectly functional option for the genre. In the end, though, there are just a few too many annoyances associated with a handheld controller to make it the preferred way to play a game like this.

Too many functions, too few buttons

A detailed map.

Microsoft

To get a feel for what I mean, just look at the “Controls Popup” summarizing all the things a single controller needs to do in a game like Age of Mythology. The game makes full use of every single button and directional input on the Xbox gamepad for some function or other. Things are so crowded that commands like Stop and Delete need to be mapped to a combination of two shoulder buttons, with different functions for holding and tapping (and there are a few other multi-button menu toggles that aren’t even listed here).

If anything, this diagram undersells the control complexity here. Tapping either trigger brings up context-sensitive radial menus full of general commands or construction options for the currently selected building. Finding the right option then often means scrolling through multiple pages of radial menus in this full-screen interface, an awkward solution to the problem of having too many options for too few buttons.

To the game’s credit, it does its best to limit how much of this menu-based fumbling you have to do. Tapping the Y button when a building is selected, for instance, automatically starts construction on the most common unit you’d want to create with that building. And holding down the Y button maximizes the production queue for that building quickly, saving the need to spend a few seconds clicking through menus to do so.

Backlash over Amazon’s return to office comes as workers demand higher wages

Warehouse workers at the STL8 Amazon Fulfillment Center marched on the boss Wednesday to demand a $25 an hour minimum wage for all workers.

via Justice Speaks

Amazon currently faces disgruntled workers in every direction.

Office workers are raging against CEO Andy Jassy’s return-to-office mandate, Fortune reported, which came just as a leaked document reportedly showed that Amazon is also planning to gut management, according to Business Insider. Drivers by the hundreds are flocking to join a union to negotiate even better working conditions, CNBC reported, despite some of the biggest concessions in Amazon’s history. And hundreds more unionized warehouse workers are increasingly banding together nationwide to demand a $25-an-hour minimum wage. On Wednesday, workers everywhere were encouraged to leave Jassy a voicemail elevating the demand for a $25 minimum wage.

Putting on the pressure

This momentum has been building since drivers unionized in 2021. And all this collective fury increasingly appears to be pressuring Amazon into negotiating better conditions for some workers.

Just last week, Amazon ponied up $2.1 billion—its “biggest investment yet”—to improve driver safety and increase drivers’ wages.

Unionizing warehouse workers told Ars that they’re seeking a similar investment from Amazon, which currently pays an average minimum wage of $20.50 an hour.

“We work at a breakneck pace,” Christine Manno, an Amazon Fulfillment Center worker at Amazon site STL8 in St. Louis, Missouri, who was injured and never expects to work again, told Ars. “We put smiles on the billionaires’ faces, and we feel it’s prime time for a real raise for the employees. There’s too many of us struggling with food and housing, yet Andy Jassy took home over $14,000 an hour last year and Amazon is making billions in profit.”

On Wednesday, Amazon seemed to finally bend to the warehouse workers’ pressure, announcing a compromise on wage increases. The company said it was investing $2.2 billion to raise the base pay of hourly fulfillment workers to “more than $22 an hour, and more than $29 an hour including benefits,” Reuters reported. Amazon’s spokesperson told Ars that STL8 workers’ starting wage “increased to $19 per hour coupled with our industry-leading benefits” and claimed that the company’s “biggest ever investment” in fulfillment workers was simply “part of an annual process where we review wages and benefits to ensure they stay competitive—and in many cases industry-leading.”

But while workers claimed the victory, they aren’t settling for the pay bump. An STL8 worker on the organizing committee with Manno, Ash Judd, told Ars that workers “made this $1.50 raise happen through our tireless organizing, and we’ll keep fighting until we reach $25.”

Because of recent gains and the increasingly dire economic plight of workers, Amazon workers likely won’t be easing off the e-commerce giant any time soon. Some office workers told Fortune they are seeking other remote work to avoid returning to the office, threatening to “soft quit” and claiming that Amazon is going “backwards” with a stricter office policy than pre-COVID times. “This is a layoff in disguise,” one apparent worker complained on Reddit. “Return to the office or you’re fired and we don’t have to pay any severance or unemployment.”

With so many workers upset, it could now be a question of when Amazon will cave to their growing demands—not if—according to Beth Gutelius, the research director of the University of Illinois Chicago’s Center for Urban Economic Development.

“Research shows that the presence of collective bargaining agreements creates upward pressure on wages and working conditions, both in facilities that are unionized and those that are not,” Gutelius told Ars. “Based on that evidence, I would expect working conditions at Amazon to improve.”

Gutelius co-authored a May report documenting the financial insecurity of Amazon warehouse workers by surveying more than 1,400 across 42 states.

A key NASA commercial partner faces severe financial challenges

Station struggles —

“The business model had to change.”

Rendering of an individual crew quarter within the Axiom habitat module.

Axiom Space

Axiom Space is facing significant financial headwinds as the company attempts to deliver on two key commercial programs for NASA—the development of a private space station in low-Earth orbit and spacesuits that could one day be worn by astronauts on the Moon.

Forbes reports that Axiom Space, which was founded by billionaire Kam Ghaffarian and former NASA executive Mike Suffredini in 2016, has been struggling to raise money to keep its doors open and has had difficulties meeting its payroll dating back to at least early 2023. In addition, the Houston-based company has fallen behind on payments to key suppliers, including Thales Alenia Space for its space station and SpaceX for crewed launches.

“The lack of fresh capital has exacerbated long-standing financial challenges that have grown alongside Axiom’s payroll, which earlier this year was nearly 1,000 employees,” the publication reports. “Sources familiar with the company’s operations told Forbes that co-founder and CEO Michael Suffredini, who spent 30 years at NASA, ran Axiom like a big government program instead of the resource-constrained startup it really was. His mandate to staff up to 800 workers by the end of 2022 led to mass hiring so detached from product development needs that new engineers often found themselves with nothing to do.”

The report underscores much of what Ars has been hearing about Axiom’s financial struggles in recent months. Dozens of employees have been laid off, and Thales officials have made no secret of their discontent at not being paid in full for the production of pressure modules for the Axiom space station. And although Suffredini’s departure as chief executive was framed as his own decision, made for personal reasons, it seems probable that he was pushed out of the company for performance reasons.

Space station troubles

All of this raises significant questions about Axiom’s ability to deliver on the primary reason the company was created—to build a successor to the International Space Station. Suffredini joined Ghaffarian in the venture after serving as manager of NASA’s space station program for more than a decade. When they founded the company in 2016, the plan was to launch an initial space station module in 2020.

The timeline for station development has since been delayed multiple times. Presently, Axiom plans to launch its first module to the International Space Station no earlier than late 2026. And the company’s ambitions have been downsized, according to the report. Instead of a four-module station that would be separated from the government-operated space station by 2030, Axiom is likely to go forward with a smaller station consisting of just two elements. This station would have lower power and reduced commercial potential, according to the article.

“The business model had always counted on having significant power for microgravity research, semiconductor production, and pharmaceutical production, plus supporting life in space,” a source told the publication. “The business model had to change… and that has continued to make it challenging for the company to get around its cash flow issues.”

Axiom is one of several companies—alongside Blue Origin, Voyager Space, Vast Space, and potentially SpaceX—working with NASA to devise commercial replacements for the International Space Station after that facility retires in 2030.

NASA plans to issue a “request for proposals” for the second round of commercial space station contracts in 2025 and make an award the following year. Multiple sources have indicated that the space agency would like to award at least two companies in this second phase. However, Ghaffarian told Forbes that he would prefer NASA to decide next year and award a single competitor.

“Today there’s not enough market for more than one,” he said.

This may be true, although some of Axiom’s competitors may dispute it. Nevertheless, Ghaffarian’s desire for an award next year, and for a sole winner, underscores the evident urgency of Axiom’s fundraising needs.

Dragons and spacesuits

The report also notes that Axiom has lost significant amounts of money on the three private astronaut missions it has flown to the International Space Station to date. Ghaffarian said these missions were conducted at a loss to build relationships with global space agencies. This makes some sense, as space agencies in Europe, the Middle East, and elsewhere are likely to be customers of commercial space stations in the next decade. However, Axiom is ill-positioned to absorb such losses financially.

The publication reveals that Axiom is due to pay SpaceX $670 million for four Crew Dragon missions, each of which includes the launch and a one- to two-week round trip to the station for four astronauts. This equates to $167.5 million per launch, or $41.9 million per seat.
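Those per-launch and per-seat figures follow directly from the reported contract total, assuming the $670 million is split evenly across the four missions and their sixteen seats. A quick sketch of the arithmetic:

```python
# Reported Axiom-SpaceX contract: $670 million covering four Crew Dragon
# missions, each carrying four astronauts to and from the station.
total_contract = 670_000_000
missions = 4
seats_per_mission = 4

per_launch = total_contract // missions       # $167,500,000 per launch
per_seat = per_launch // seats_per_mission    # $41,875,000 per seat (~$41.9M)

print(f"Per launch: ${per_launch:,}")
print(f"Per seat:   ${per_seat:,}")
```

For comparison, NASA's own commercial crew seats have been priced in a similar range, which suggests Axiom's losses stem less from the launch price than from reselling those seats below cost.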

Axiom’s other major line of business is a $228 million contract with NASA to develop spacesuits for the Artemis Program, which will allow astronauts to venture outside the Starship lunar lander on the Moon’s surface. According to the Forbes report, this initiative has pulled resources away from the space station program.

Multiple sources have told Ars that, from a financial and technical standpoint, this spacesuit program is on better footing than the station program. And at this point, the spacesuit program is probably the one element of Axiom’s business that NASA views as essential going forward.

Russian state media outlet RT banned by Facebook “for foreign interference”

Still on X, though —

US said Russian media worked with Kremlin to influence election, foment unrest.

Russia President Vladimir Putin presents flowers to editor-in-chief of Russian broadcaster RT Margarita Simonyan after awarding her with the “Order of Alexander Nevsky” during a ceremony at the Kremlin in Moscow on May 23, 2019.

Getty Images | Evgenia Novozhenina

Meta yesterday announced a ban on Russian state media outlets RT (formerly Russia Today) and Rossiya Segodnya, taking action three days after the US government imposed sanctions on the outlets for covert influence activities.

“After careful consideration, we expanded our ongoing enforcement against Russian state media outlets: Rossiya Segodnya, RT and other related entities are now banned from our apps globally for foreign interference activity,” Meta said in a statement provided to Ars. Meta is the owner of Facebook, Instagram, WhatsApp, and Threads.

Meta already blocked RT and Rossiya Segodnya’s Sputnik network across Europe in March 2022, following a ban imposed by European Union government officials. YouTube blocked the channels worldwide. At the time, Vladimir Putin’s government was telling Russian media outlets not to call the invasion of Ukraine “an attack,” “invasion,” or “declaration of war.”

Although Meta didn’t block RT worldwide in 2022, it did impose worldwide restrictions on Russian state media. Meta said today that those restrictions prevented Russian state media from running ads, placed the state media content lower in people’s feeds, and added “nudges” asking users to confirm that they want to share or navigate to content from those outlets.

US says RT worked with Kremlin to foment unrest

Meta’s new worldwide ban comes after the US State Department said on Friday that it was designating Russian state media outlets “for their connection to Russia’s destabilizing actions abroad.” The US said it obtained new information from employees of RT and other sources showing that RT has “engaged in information operations, covert influence, and military procurement. These operations are targeting countries around the world, including in Europe, Africa, and North and South America.”

The US State Department said the government is “not taking action against these entities and individuals for the content of their reporting, or even the disinformation they create and spread publicly. We are taking action against them for their covert influence activities. Covert influence activities are not journalism.”

The US alleged that “RT and employees, including Editor-in-Chief Margarita Simonyan, have directly coordinated with the Kremlin to support Russian government efforts to influence the October 2024 Moldovan election. Specifically, in coordination with the Kremlin, Simonyan leverages the state-funded platforms for which she serves in leadership positions… to attempt to foment unrest in Moldova, likely with the specific aim of causing protests to turn violent. RT is aware of and prepared to assist Russia’s plans to incite protests should the election not result in a Russia-preferred candidate winning the presidency.”

The US also said that people “affiliated with Rossiya Segodnya coordinated with the Kremlin to attempt to foment unrest in Moldova, likely with the specific aim of causing protests to turn violent.” While RT is funded by Russia, Rossiya Segodnya is both state-owned and state-funded, the US said.

The US action blocks most transactions involving the designated entities, and “all property and interests in property of the designated persons described above that are in the United States or in possession or control of US persons are blocked and must be reported to the Department of Treasury’s Office of Foreign Assets Control,” the US said. The designation applies to Rossiya Segodnya and TV-Novosti, the latter of which is a federally funded organization that is “associated with Rossiya Segodnya and controls the RT media channel,” the US said. The US also designated Dmitry Konstantinovich Kiselev, the director general of Rossiya Segodnya.

RT: “We’ve been broadcasting straight out of the KGB”

RT has issued sarcastic responses to the US government and Meta actions. “RT Editor-in-Chief Margarita Simonyan joked that RT had learned from the Americans, rather than from Russian intelligence officers,” an RT article on the Meta ban said today.

When contacted by Ars today, RT’s press office gave us a statement saying the organization will “find the cracks to crawl through” despite the Meta ban. RT’s statement read in full:

It’s cute how there’s a competition in the West—who can try to spank RT the hardest, in order to make themselves look better. Meta/Facebook already blocked RT in Europe two years ago, now they’re censoring information flow to the rest of the world. Don’t worry, where they close a door, and then a window, our ‘partisans’ (or in your parlance, guerrilla fighters) will find the cracks to crawl through—as by Biden administration’s admission we are apt at doing.

After the State Department action last week, “RT responded with a mocking email that read in part: ‘We’ve been broadcasting straight out of the KGB headquarters all this time,'” according to CNN.

RT is still active on X, formerly Twitter. “On behalf of our team: Silence us all you want, but there’s no way to silence the truth,” the organization said.

We contacted Rossiya Segodnya today and will update this article if it provides comment.
