
Ever wonder how crooks get the credentials to unlock stolen phones?

BUSTED —

iServer provided a simple service for phishing credentials to unlock phones.


A coalition of law-enforcement agencies said it shut down a service that facilitated the unlocking of more than 1.2 million stolen or lost mobile phones so they could be used by someone other than their rightful owner.

The service was part of iServer, a phishing-as-a-service platform that has been operating since 2018. The Argentina-based iServer sold access to a platform that offered a host of phishing-related services through email, texts, and voice calls. One of the specialized services offered was designed to help people in possession of large numbers of stolen or lost mobile devices to obtain the credentials needed to bypass protections such as the lost mode for iPhones, which prevent a lost or stolen device from being used without entering its passcode.

[Image: iServer's phishing-as-a-service model. Credit: Group-IB]

Catering to low-skilled thieves

An international operation coordinated by Europol’s European Cybercrime Center said it arrested the Argentinian national who was behind iServer and identified more than 2,000 “unlockers” who had enrolled in the phishing platform over the years. Investigators ultimately found that the criminal network had been used to unlock more than 1.2 million mobile phones. Officials said they also identified 483,000 phone owners who had received messages phishing for credentials for their lost or stolen devices.

According to Group-IB, the security firm that discovered the phone-unlocking racket and reported it to authorities, iServer provided a web interface that allowed low-skilled unlockers to phish the rightful device owners for the device passcodes, user credentials from cloud-based mobile platforms, and other personal information.

Group-IB wrote:

During its investigations into iServer’s criminal activities, Group-IB specialists also uncovered the structure and roles of criminal syndicates operating with the platform: the platform’s owner/developer sells access to “unlockers,” who in their turn provide phone unlocking services to other criminals with locked stolen devices. The phishing attacks are specifically designed to gather data that grants access to physical mobile devices, enabling criminals to acquire users’ credentials and local device passwords to unlock devices or unlink them from their owners. iServer automates the creation and delivery of phishing pages that imitate popular cloud-based mobile platforms, featuring several unique implementations that enhance its effectiveness as a cybercrime tool.

Unlockers obtain the necessary information for unlocking the mobile phones, such as IMEI, language, owner details, and contact information, often accessed through lost mode or via cloud-based mobile platforms. They utilize phishing domains provided by iServer or create their own to set up a phishing attack. After selecting an attack scenario, iServer creates a phishing page and sends an SMS with a malicious link to the victim.

[Image: An example phishing message sent.]

When successful, iServer customers would receive the credentials through the web interface. The customers could then unlock a phone to disable the lost mode so the device could be used by someone new.

Ultimately, criminals received the stolen and validated credentials through the iServer web interface, enabling them to unlock a phone, turn off “Lost mode” and untie it from the owner’s account.

To better camouflage the ruse, iServer often disguised phishing pages as belonging to cloud-based services.

[Image: Phishing message asking for passcode. Credit: Group-IB]

[Image: Phishing message masquerading as a cloud-based service, with a map shown once the passcode is entered. Credit: Group-IB]

Besides the arrest, authorities also seized the iserver.com domain.

[Image: The iServer site as it appeared before the takedown. Credit: Group-IB]

[Image: The iServer website after the takedown. Credit: Group-IB]

The takedown and arrests occurred from September 10–17 in Spain, Argentina, Chile, Colombia, Ecuador, and Peru. Authorities in those countries began investigating the phishing service in 2022.


Due to AI fakes, the “deep doubt” era is here

[Image: A person writing. Credit: Memento | Aurich Lawson]

Given the flood of photorealistic AI-generated images washing over social media networks like X and Facebook these days, we’re seemingly entering a new age of media skepticism: the era of what I’m calling “deep doubt.” While questioning the authenticity of digital content stretches back decades—and analog media long before that—easy access to tools that generate convincing fake content has led to a new wave of liars using AI-generated scenes to deny real documentary evidence. Along the way, people’s existing skepticism toward online content from strangers may be reaching new heights.

Deep doubt is skepticism of real media that stems from the existence of generative AI. This manifests as broad public skepticism toward the veracity of media artifacts, which in turn leads to a notable consequence: People can now more credibly claim that real events did not happen and suggest that documentary evidence was fabricated using AI tools.

The concept behind “deep doubt” isn’t new, but its real-world impact is becoming increasingly apparent. Since the term “deepfake” first surfaced in 2017, we’ve seen a rapid evolution in AI-generated media capabilities. This has led to recent examples of deep doubt in action, such as conspiracy theorists claiming that President Joe Biden has been replaced by an AI-powered hologram and former President Donald Trump’s baseless accusation in August that Vice President Kamala Harris used AI to fake crowd sizes at her rallies. And on Friday, Trump cried “AI” again over a photo of him with E. Jean Carroll, a writer who successfully sued him for sexual assault; the photo contradicts his claim of never having met her.

Legal scholars Danielle K. Citron and Robert Chesney foresaw this trend years ago, coining the term “liar’s dividend” in 2019 to describe the consequence of deep doubt: deepfakes being weaponized by liars to discredit authentic evidence. But whereas deep doubt was once a hypothetical academic concept, it is now our reality.

The rise of deepfakes, the persistence of doubt

Doubt has been a political weapon since ancient times. This modern AI-fueled manifestation is just the latest evolution of a tactic where the seeds of uncertainty are sown to manipulate public opinion, undermine opponents, and hide the truth. AI is the newest refuge of liars.

Over the past decade, the rise of deep-learning technology has made it increasingly easy for people to craft false or modified pictures, audio, text, or video that appear to be non-synthesized organic media. Deepfakes were named after a Reddit user going by the name “deepfakes,” who shared AI-faked pornography on the service, swapping out the face of a performer with the face of someone else who wasn’t part of the original recording.

In the 20th century, one could argue that a certain part of our trust in media produced by others was a result of how expensive and time-consuming it was, and the skill it required, to produce documentary images and films. Even texts required a great deal of time and skill. As the deep doubt phenomenon grows, it will erode this 20th-century media sensibility. But it will also affect our political discourse, legal systems, and even our shared understanding of historical events that rely on that media to function, since we depend on others to get information about the world. From photorealistic images to pitch-perfect voice clones, our perception of what we consider “truth” in media will need recalibration.

In April, a panel of federal judges highlighted the potential for AI-generated deepfakes to not only introduce fake evidence but also cast doubt on genuine evidence in court trials. The concern emerged during a meeting of the US Judicial Conference’s Advisory Committee on Evidence Rules, where the judges discussed the challenges of authenticating digital evidence in an era of increasingly sophisticated AI technology. Ultimately, the judges decided to postpone making any AI-related rule changes, but their meeting shows that the subject is already being considered by American judges.


Real-time Linux is officially part of the kernel after decades of debate

No RTO needed for RTOS —

Now you can run your space laser or audio production without specialty patches.

[Image: Cutting metal with lasers is hard, but even harder when you don’t know the worst-case timings of your code. Credit: Getty Images]

As is so often the case, a notable change in an upcoming Linux kernel is both historic and no big deal.

If you wanted to use “Real-Time Linux” for your audio gear, your industrial welding laser, or your Mars rover, you have had that option for a long time (presuming you didn’t want to use QNX or other alternatives). Universities started making their own real-time kernels in the late 1990s. A patch set, PREEMPT_RT, has existed since at least 2005. And some aspects of the real-time work, like NO_HZ, were long ago moved into the mainline kernel, enabling its use in data centers, cloud computing, or anything with a lot of CPUs.

But officialness still matters, and in the 6.12 kernel, PREEMPT_RT will likely be merged into the mainline. As noted by Steven Vaughan-Nichols at ZDNet, the final sign-off by Linus Torvalds occurred while he was attending Open Source Summit Europe. Torvalds wrote the original code for printk, a debugging tool that can pinpoint exact moments where a process crashes, but also introduces latency that runs counter to real-time computing. The Phoronix blog has tracked the progress of PREEMPT_RT into the kernel, along with the printk changes that allowed for threaded/atomic console support crucial to real-time mainlining.

What does this mean for desktop Linux? Not much. Beyond high-end audio production or replication (and even that is debatable), a real-time kernel won’t likely make windows snappier or programs zippier. But the guaranteed execution and worst-case latency timings a real-time Linux provides are quite useful to, say, the systems that monitor car brakes, guide CNC machines, and regulate fiendishly complex multi-CPU systems. Having PREEMPT_RT in the mainline kernel makes it easier to maintain a real-time system, rather than tending to out-of-tree patches.
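
For a sense of what “real time” buys you, here is a minimal sketch (Python, Linux-only; the priority value and 1 ms period are illustrative) of how a process requests real-time scheduling and measures its own wake-up jitter. The API predates this merge; what PREEMPT_RT changes is how trustworthy the worst-case number is.

```python
import os
import time

def main():
    # SCHED_FIFO: the thread runs until it blocks or yields; priorities run
    # 1-99, higher wins. Raises PermissionError without root/CAP_SYS_NICE.
    os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(50))

    # Measure wake-up jitter around a 1 ms periodic deadline.
    period_ns = 1_000_000
    worst_ns = 0
    deadline = time.monotonic_ns() + period_ns
    for _ in range(1000):
        delay_ns = deadline - time.monotonic_ns()
        if delay_ns > 0:
            time.sleep(delay_ns / 1e9)
        # How far past the deadline did we actually wake up?
        worst_ns = max(worst_ns, time.monotonic_ns() - deadline)
        deadline += period_ns
    print(f"worst observed wake-up latency: {worst_ns / 1000:.1f} microseconds")

if __name__ == "__main__":
    main()
```

On a stock kernel under load, that worst-case figure can spike unpredictably; a PREEMPT_RT kernel is designed to keep it bounded.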

It will likely change things for the companies that have, until now, been specialty providers of real-time OS solutions for mission-critical systems. Ubuntu, for example, started offering a real-time version of its distribution in 2023 but required an Ubuntu Pro subscription for access. Ubuntu pitched its release at robotics, automation, embedded Linux, and other real-time needs, with the fixes, patches, module integration, and testing provided by Ubuntu.

“Controlling a laser with Linux is crazy,” Torvalds said at the Kernel Summit of 2006, “but everyone in this room is crazy in his own way. So if you want to use Linux to control an industrial welding laser, I have no problem with your using PREEMPT_RT.” Roughly 18 years later, Torvalds and the kernel team, including longtime maintainer and champion-of-real-time Steven Rostedt, have made it even easier to do that kind of thing.


Landmark AI deal sees Hollywood giant Lionsgate provide library for AI training

The silicon screen —

Runway deal will create a Lionsgate AI video generator, but not everyone is happy.

[Image: An illustration of a filmstrip with a robot, horse, rocket, and whale.]

On Wednesday, AI video synthesis firm Runway and entertainment company Lionsgate announced a partnership to create a new AI model trained on Lionsgate’s vast film and TV library. The deal will feed Runway legally clear training data and will also reportedly provide Lionsgate with tools to enhance content creation while potentially reducing production costs.

Lionsgate, known for franchises like John Wick and The Hunger Games, sees AI as a way to boost efficiency in content production. Michael Burns, Lionsgate’s vice chair, stated in a press release that AI could help develop “cutting edge, capital efficient content creation opportunities.” He added that some filmmakers have shown enthusiasm about potential applications in pre- and post-production processes.

Runway plans to develop a custom AI model using Lionsgate’s proprietary content portfolio. The model will be exclusive to Lionsgate Studios, allowing filmmakers, directors, and creative staff to augment their work. While specifics remain unclear, the partnership marks the first major collaboration between Runway and a Hollywood studio.

“We’re committed to giving artists, creators and studios the best and most powerful tools to augment their workflows and enable new ways of bringing their stories to life,” said Runway co-founder and CEO Cristóbal Valenzuela in a press release. “The history of art is the history of technology and these new models are part of our continuous efforts to build transformative mediums for artistic and creative expression; the best stories are yet to be told.”

The quest for legal training data

Generative AI models are master imitators, and video synthesis models like Runway’s latest Gen-3 Alpha are no exception. The companies that create them must amass a great deal of existing video (and still image) samples to analyze, allowing the resulting AI models to re-synthesize that information into new video generations, guided by text descriptions called prompts. And wherever that training data is lacking, it can result in unusual generations, as we saw in our hands-on evaluation of Gen-3 Alpha in July.

However, in the past, AI companies have gotten into legal trouble for scraping vast quantities of media without permission. In fact, Runway is currently the defendant in a class-action lawsuit that alleges copyright infringement for using video data obtained without permission to train its video synthesis models. While companies like OpenAI have claimed this scraping process is “fair use,” US courts have not yet definitively ruled on the practice. With other potential legal challenges ahead, it makes sense from Runway’s perspective to reach out and sign deals for training data that is completely in the clear.

Even if the training data becomes fully legal and licensed, different elements of the entertainment industry view generative AI on a spectrum that seems to range between fascination and horror. The technology’s ability to rapidly create images and video based on prompts may attract studios looking to streamline production. However, it raises polarizing concerns: among unions about job security, among actors and musicians about likeness misuse and ethics, and among studios about legal implications.

So far, news of the deal has not been received kindly among vocal AI critics found on social media. On X, filmmaker and AI critic Joe Russo wrote, “I don’t think I’ve ever seen a grosser string of words than: ‘to develop cutting-edge, capital-efficient content creation opportunities.'”

Film concept artist Reid Southen shared a similar negative take on X: “I wonder how the directors and actors of their films feel about having their work fed into the AI to make a proprietary model. As an artist on The Hunger Games? I’m pissed. This is the first step in trying to replace artists and filmmakers.”

It’s a fear that we will likely hear more about in the future as AI video synthesis technology grows more capable—and potentially becomes adopted as a standard filmmaking tool. As studios explore AI applications despite legal uncertainties and labor concerns, partnerships like the Lionsgate-Runway deal may shape the future of content creation in Hollywood.


Massive China-state IoT botnet went undetected for four years—until now

OVER 260,000 PWNED —

75% of infected devices were located in homes and offices in North America and Europe.


The FBI has dismantled a massive network of compromised devices that Chinese state-sponsored hackers have used for four years to mount attacks on government agencies, telecoms, defense contractors, and other targets in the US and Taiwan.

The botnet was made up primarily of small office and home office routers, surveillance cameras, network-attached storage, and other Internet-connected devices located all over the world. Over the past four years, US officials said, 260,000 such devices have cycled through the sophisticated network, which is organized in three tiers that allow the botnet to operate with efficiency and precision. At its peak in June 2023, Raptor Train, as the botnet is named, consisted of more than 60,000 commandeered devices, according to researchers from Black Lotus Labs, making it the largest China state botnet discovered to date.

Burning down the house

Raptor Train is the second China state-operated botnet US authorities have taken down this year. In January, law enforcement officials covertly issued commands to disinfect Internet of Things devices that hackers backed by the Chinese government had taken over without the device owners’ knowledge. The Chinese hackers, part of a group tracked as Volt Typhoon, used the botnet for more than a year as a platform to deliver exploits that burrowed deep into the networks of targets of interest. Because the attacks appear to originate from IP addresses with good reputations, they are subjected to less scrutiny from network security defenses, making the bots an ideal delivery proxy. Russia-state hackers have also been caught assembling large IoT botnets for the same purposes.

An advisory jointly issued Wednesday by the FBI, the Cyber National Mission Force, and the National Security Agency said that Integrity Technology Group, a China-based company with ties to the People’s Republic of China government, controlled and managed Raptor Train. The company, officials said, has also used IP addresses belonging to the state-controlled China Unicom Beijing Province Network to control and manage the botnet. Researchers and law enforcement track the China-state group that worked with Integrity Technology as Flax Typhoon. More than half of the infected Raptor Train devices were located in North America, and another 25 percent were in Europe.

[Image: Raptor Train concentration by continent. Credit: IC3.gov]

[Image: Raptor Train concentration by country. Credit: IC3.gov]

“Flax Typhoon was targeting critical infrastructure across the US and overseas, everyone from corporations and media organizations to universities and government agencies,” FBI Director Christopher Wray said Wednesday at the Aspen Cyber Summit. “Like Volt Typhoon, they used Internet-connected devices, this time hundreds of thousands of them, to create a botnet that helped them compromise systems and exfiltrate confidential data.” He added: “Flax Typhoon’s actions caused real harm to its victims who had to devote precious time to clean up the mess.”


1.3 million Android-based TV boxes backdoored; researchers still don’t know how

CAUSE UNKNOWN —

Infection corrals devices running AOSP-based firmware into a botnet.


Researchers still don’t know the cause of a recently discovered malware infection affecting almost 1.3 million streaming devices running an open source version of Android in almost 200 countries.

Security firm Doctor Web reported Thursday that malware named Android.Vo1d has backdoored the Android-based boxes by putting malicious components in their system storage area, where they can be updated with additional malware at any time by command-and-control servers. Google representatives said the infected devices are running operating systems based on the Android Open Source Project, a version overseen by Google but distinct from Android TV, a proprietary version restricted to licensed device makers.

Dozens of variants

Although Doctor Web has a thorough understanding of Vo1d and the exceptional reach it has achieved, company researchers say they have yet to determine the attack vector that has led to the infections.

“At the moment, the source of the TV boxes’ backdoor infection remains unknown,” Thursday’s post stated. “One possible infection vector could be an attack by an intermediate malware that exploits operating system vulnerabilities to gain root privileges. Another possible vector could be the use of unofficial firmware versions with built-in root access.”

The device models infected by Vo1d include the following:

TV box model | Declared firmware version
R4 | Android 7.1.2; R4 Build/NHG47K
TV BOX | Android 12.1; TV BOX Build/NHG47K
KJ-SMART4KVIP | Android 10.1; KJ-SMART4KVIP Build/NHG47K

One possible cause of the infections is that the devices are running outdated versions that are vulnerable to exploits that remotely execute malicious code on them. Versions 7.1, 10.1, and 12.1, for example, were released in 2016, 2019, and 2022, respectively. What’s more, Doctor Web said it’s not unusual for budget device manufacturers to install older OS versions in streaming boxes and make them appear more attractive by passing them off as more up-to-date models.

Further, while only licensed device makers are permitted to modify Google’s Android TV, any device maker is free to make changes to open source versions. That leaves open the possibility that the devices were infected in the supply chain and were already compromised by the time they were purchased by the end user.

“These off-brand devices discovered to be infected were not Play Protect certified Android devices,” Google said in a statement. “If a device isn’t Play Protect certified, Google doesn’t have a record of security and compatibility test results. Play Protect certified Android devices undergo extensive testing to ensure quality and user safety.”

The statement said people can confirm a device runs Android TV OS by checking this link and following the steps listed here.

Doctor Web said that there are dozens of Vo1d variants that use different code and plant malware in slightly different storage areas, but that all achieve the same end result of connecting to an attacker-controlled server and installing a final component that can install additional malware when instructed. VirusTotal shows that most of the Vo1d variants were first uploaded to the malware identification site several months ago.

Researchers wrote:

All these cases involved similar signs of infection, so we will describe them using one of the first requests we received as an example. The following objects were changed on the affected TV box:

  • install-recovery.sh
  • daemonsu

In addition, 4 new files emerged in its file system:

  • /system/xbin/vo1d
  • /system/xbin/wd
  • /system/bin/debuggerd
  • /system/bin/debuggerd_real

The vo1d and wd files are the components of the Android.Vo1d trojan that we discovered.

The trojan’s authors probably tried to disguise one of its components as the system program /system/bin/vold, calling it by the similar-looking name “vo1d” (substituting the lowercase letter “l” with the number “1”). The malicious program’s name comes from the name of this file. Moreover, this spelling echoes the English word “void”.

The install-recovery.sh file is a script that is present on most Android devices. It runs when the operating system is launched and contains data for autorunning the elements specified in it. If any malware has root access and the ability to write to the /system system directory, it can anchor itself in the infected device by adding itself to this script (or by creating it from scratch if it is not present in the system). Android.Vo1d has registered the autostart for the wd component in this file.

[Image: The modified install-recovery.sh file. Credit: Doctor Web]

The daemonsu file is present on many Android devices with root access. It is launched by the operating system when it starts and is responsible for providing root privileges to the user. Android.Vo1d registered itself in this file, too, having also set up autostart for the wd module.

The debuggerd file is a daemon that is typically used to create error reports. But when the TV box was infected, this file was replaced by a script that launches the wd component.

The debuggerd_real file in the case we are reviewing is a copy of the script that was used to substitute the real debuggerd file. Doctor Web experts believe that the trojan’s authors intended the original debuggerd to be moved into debuggerd_real to maintain its functionality. However, because the infection probably occurred twice, the trojan moved the already substituted file (i.e., the script). As a result, the device had two scripts from the trojan and not a single real debuggerd program file.

At the same time, other users who contacted us had a slightly different list of files on their infected devices:

  • daemonsu (the vo1d file analogue — Android.Vo1d.1);
  • wd (Android.Vo1d.3);
  • debuggerd (the same script as described above);
  • debuggerd_real (the original file of the debuggerd tool);
  • install-recovery.sh (a script that loads objects specified in it).

An analysis of all the aforementioned files showed that in order to anchor Android.Vo1d in the system, its authors used at least three different methods: modification of the install-recovery.sh and daemonsu files and substitution of the debuggerd program. They probably expected that at least one of the target files would be present in the infected system, since manipulating even one of them would ensure the trojan’s successful auto launch during subsequent device reboots.

Android.Vo1d’s main functionality is concealed in its vo1d (Android.Vo1d.1) and wd (Android.Vo1d.3) components, which operate in tandem. The Android.Vo1d.1 module is responsible for Android.Vo1d.3’s launch and controls its activity, restarting its process if necessary. In addition, it can download and run executables when commanded to do so by the C&C server. In turn, the Android.Vo1d.3 module installs and launches the Android.Vo1d.5 daemon that is encrypted and stored in its body. This module can also download and run executables. Moreover, it monitors specified directories and installs the APK files that it finds in them.

The geographic distribution of the infections is wide, with the biggest number detected in Brazil, Morocco, Pakistan, Saudi Arabia, Russia, Argentina, Ecuador, Tunisia, Malaysia, Algeria, and Indonesia.

[Image: A world map listing the number of infections found in various countries. Credit: Doctor Web]

Short of installing a malware scanner, it isn’t especially easy for less experienced users to check whether a device is infected. Doctor Web said its antivirus software for Android will detect all Vo1d variants and disinfect devices that provide root access. More experienced users can check indicators of compromise here.
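
For the hands-on route, a minimal sketch along these lines (assuming the adb tool is installed on a PC and USB debugging is enabled on the box) can check for the file paths named in Doctor Web’s report. The path list is illustrative, and a hit is an indicator rather than proof.

```python
import subprocess

# File paths Doctor Web associates with Vo1d; variants differ, so absence
# here does not prove a clean device.
SUSPECT_PATHS = [
    "/system/xbin/vo1d",
    "/system/xbin/wd",
    "/system/bin/debuggerd_real",  # legitimate on some builds; a hint, not proof
]

def on_device(path: str) -> bool:
    # Echo a marker instead of relying on adb's exit-status propagation,
    # which older adb versions lack.
    result = subprocess.run(
        ["adb", "shell", f"test -e {path} && echo PRESENT || echo ABSENT"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip() == "PRESENT"

for path in SUSPECT_PATHS:
    print(f"{path}: {'found' if on_device(path) else 'not found'}")
```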


Google rolls out voice-powered AI chat to the Android masses

Chitchat Wars —

Gemini Live allows back-and-forth conversation, now free to all Android users.

[Image: The Google Gemini logo. Credit: Google]

On Thursday, Google made Gemini Live, its voice-based AI chatbot feature, available for free to all Android users. The feature allows users to interact with Gemini through voice commands on their Android devices. That’s notable because OpenAI’s competing Advanced Voice Mode for ChatGPT, a similar feature, has not yet fully shipped.

Google unveiled Gemini Live during its Pixel 9 launch event last month. Initially, the feature was exclusive to Gemini Advanced subscribers, but now it’s accessible to anyone using the Gemini app or its overlay on Android.

Gemini Live enables users to ask questions aloud and even interrupt the AI’s responses mid-sentence. Users can choose from several voice options for Gemini’s responses, adding a level of customization to the interaction.

Gemini suggests the following uses of the voice mode in its official help documents:

  • Talk back and forth: Talk to Gemini without typing, and Gemini will respond verbally.
  • Brainstorm ideas out loud: Ask for a gift idea, to plan an event, or to make a business plan.
  • Explore: Uncover more details about topics that interest you.
  • Practice aloud: Rehearse for important moments in a more natural and conversational way.

Interestingly, while OpenAI originally demoed its Advanced Voice Mode in May with the launch of GPT-4o, it only began shipping the feature to a limited number of users in late July. Some AI experts speculate that a wider rollout has been hampered by a lack of available computing power, since the voice feature is presumably very compute-intensive.

To access Gemini Live, users can reportedly tap a new waveform icon in the bottom-right corner of the app or overlay. This action activates the microphone, allowing users to pose questions verbally. The interface includes options to “hold” Gemini’s answer or “end” the conversation, giving users control over the flow of the interaction.

Currently, Gemini Live supports only English, but Google has announced plans to expand language support in the future. The company also intends to bring the feature to iOS devices, though no specific timeline has been provided for this expansion.


Free Starlink Internet is coming to all of United’s airplanes

free as in beer —

The upgrade starts in 2025, but with more than 1,000 planes, will take several years.

[Image: Soon you’ll be able to stream games and video for free on United flights. Credit: United]

United Airlines announced this morning that it is giving its in-flight Internet access an upgrade. It has signed a deal with Starlink to deliver SpaceX’s satellite-based service to all its aircraft, a process that will start in 2025. And the good news for passengers is that the in-flight Wi-Fi will be free of charge.

The flying experience as it relates to consumer technology has come a very long way in the two-and-a-bit decades that Ars has been publishing. At the turn of the century, even having a power socket in your seat was a long shot. Laptop batteries didn’t last that long, either—usually less than the runtime of whatever DVD I hoped to distract myself with, if memory serves.

Bring a spare battery and that might double, but it helped to have a book or magazine to read.

By 2011, the picture had changed. Wi-Fi was no longer some esoteric thing known only to nerds who built their own computers, and smartphones and tablets were on their way to ubiquity. After an aborted attempt in 2004, in-flight Internet access became a reality in North America in 2008, although the air-to-ground cellular-based system was slow, unreliable, and expensive.

Air-to-ground Internet access was maybe slightly cheaper by 2018, but it was still frustrating and slow, particularly if you were, oh, I dunno, a journalist trying to upload images to a CMS on your way back from an event. But by then, there was a better alternative—satellites. Airliners started sporting new antenna-concealing blisters, and soon, we were all streaming and posting and working our way across the skies.

Enter SpaceX

That bandwidth was courtesy of Viasat, according to all the receipts in my expense reports, but in 2022, SpaceX announced that it was adding aviation to Starlink’s portfolio. Initially, Starlink only targeted smaller regional and private jet aircraft, but now its equipment is also certified for commercial passenger planes from Airbus and Boeing and is already in use with carriers including Qatar Airways and Air New Zealand.

United says it will start testing Starlink equipment early in 2025, with the first use on passenger flights later that year. The service will be available gate-to-gate (as opposed to only working above 10,000 feet, a restriction some other systems operate under), and it certainly sounds like a superior experience to current in-flight Internet, as it will explicitly allow streaming of both video and games, and multiple connected devices at once. Better yet, United says the service will be free for passengers.

Depending on the route you fly, you may need to have some patience, though. United says it will take several years to install Starlink systems on its more than 1,000 aircraft.


OpenAI’s new “reasoning” AI models are here: o1-preview and o1-mini

fruit by the foot —

New o1 language model can solve complex tasks iteratively, count R’s in “strawberry.”

[Image: An illustration of a strawberry made out of pixel-like blocks.]

OpenAI finally unveiled its rumored “Strawberry” AI language model on Thursday, claiming significant improvements in what it calls “reasoning” and problem-solving capabilities over previous large language models (LLMs). Formally named “OpenAI o1,” the model family will initially launch in two forms, o1-preview and o1-mini, available today for ChatGPT Plus and certain API users.

OpenAI claims that o1-preview outperforms its predecessor, GPT-4o, on multiple benchmarks, including competitive programming, mathematics, and “scientific reasoning.” However, people who have used the model say it does not yet outclass GPT-4o in every metric. Other users have criticized the delay in receiving a response from the model, owing to the multi-step processing occurring behind the scenes before answering a query.

In a rare display of public hype-busting, OpenAI product manager Joanne Jang tweeted, “There’s a lot of o1 hype on my feed, so I’m worried that it might be setting the wrong expectations. what o1 is: the first reasoning model that shines in really hard tasks, and it’ll only get better. (I’m personally psyched about the model’s potential & trajectory!) what o1 isn’t (yet!): a miracle model that does everything better than previous models. you might be disappointed if this is your expectation for today’s launch—but we’re working to get there!”

OpenAI reports that o1-preview ranked in the 89th percentile on competitive programming questions from Codeforces. In mathematics, it scored 83 percent on a qualifying exam for the International Mathematics Olympiad, compared to GPT-4o’s 13 percent. OpenAI also states, in a claim that may later be challenged as people scrutinize the benchmarks and run their own evaluations over time, that o1 performs comparably to PhD students on specific tasks in physics, chemistry, and biology. The smaller o1-mini model is designed specifically for coding tasks and is priced at 80 percent less than o1-preview.

[Image: A benchmark chart provided by OpenAI. They write, “o1 improves over GPT-4o on a wide range of benchmarks, including 54/57 MMLU subcategories. Seven are shown for illustration.”]

OpenAI attributes o1’s advancements to a new reinforcement learning (RL) training approach that teaches the model to spend more time “thinking through” problems before responding, similar to how “let’s think step-by-step” chain-of-thought prompting can improve outputs in other LLMs. The new process allows o1 to try different strategies and “recognize” its own mistakes.
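
As a rough illustration of that difference, here is a hedged sketch using OpenAI’s Python package (the v1 client; an API key and access to both models are assumed, and the puzzle prompt is just an example): with an older model you add the step-by-step scaffolding to the prompt yourself, while o1 runs its own reasoning pass first.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
question = ("A bat and a ball cost $1.10 in total. The bat costs $1.00 "
            "more than the ball. How much does the ball cost?")

# Older model: the step-by-step scaffolding goes into the prompt itself.
gpt4o = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user",
               "content": "Let's think step by step. " + question}],
)

# o1-preview: no scaffolding needed; the model runs a hidden reasoning
# pass before producing its visible answer.
o1 = client.chat.completions.create(
    model="o1-preview",
    messages=[{"role": "user", "content": question}],
)

print(gpt4o.choices[0].message.content)
print(o1.choices[0].message.content)
```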

AI benchmarks are notoriously unreliable and easy to game; however, independent verification and experimentation from users will show the full extent of o1’s advancements over time. It’s worth noting that MIT research showed earlier this year that some of the benchmark claims OpenAI touted with GPT-4 last year were erroneous or exaggerated.

A mixed bag of capabilities

[Video: OpenAI demos “o1” correctly counting the number of Rs in the word “strawberry.”]

Amid many demo videos of o1 completing programming tasks and solving logic puzzles that OpenAI shared on its website and social media, one demo stood out as perhaps the least consequential and least impressive, but it may become the most talked about due to a recurring meme where people ask LLMs to count the number of R’s in the word “strawberry.”

Due to tokenization, where the LLM processes words in data chunks called tokens, most LLMs are typically blind to character-by-character differences in words. Apparently, o1 has the self-reflective capabilities to figure out how to count the letters and provide an accurate answer without user assistance.
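
A short sketch shows why the task trips up token-based models while being trivial at the character level. It assumes the open source tiktoken library; its cl100k_base vocabulary stands in for whatever tokenizer o1 actually uses, so the exact split is illustrative.

```python
import tiktoken

# The model never sees "s-t-r-a-w-b-e-r-r-y"; it sees opaque token IDs.
enc = tiktoken.get_encoding("cl100k_base")
word = "strawberry"

token_ids = enc.encode(word)
pieces = [enc.decode([t]) for t in token_ids]
print(pieces)           # multi-letter chunks, e.g. ['str', 'aw', 'berry']
print(word.count("r"))  # 3: trivial once individual characters are visible
```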

Beyond OpenAI’s demos, we’ve seen optimistic but cautious hands-on reports about o1-preview online. Wharton Professor Ethan Mollick wrote on X, “Been using GPT-4o1 for the last month. It is fascinating—it doesn’t do everything better but it solves some very hard problems for LLMs. It also points to a lot of future gains.”

Mollick shared a hands-on post in his “One Useful Thing” blog that details his experiments with the new model. “To be clear, o1-preview doesn’t do everything better. It is not a better writer than GPT-4o, for example. But for tasks that require planning, the changes are quite large.”

Mollick gives the example of asking o1-preview to build a teaching simulator “using multiple agents and generative AI, inspired by the paper below and considering the views of teachers and students,” then asking it to build the full code, and it produced a result that Mollick found impressive.

Mollick also gave o1-preview eight crossword puzzle clues, translated into text, and the model took 108 seconds to solve them over many steps, getting all of the answers correct but confabulating a particular clue Mollick did not give it. We recommend reading Mollick’s entire post for a good early hands-on impression. Given his experience with the new model, it appears that o1 works very similarly to GPT-4o but iteratively in a loop, which is something that the so-called “agentic” AutoGPT and BabyAGI projects experimented with in early 2023.

Is this what could “threaten humanity”?

Speaking of agentic models that run in loops, Strawberry has been subject to hype since last November, when it was initially known as Q* (Q-star). At the time, The Information and Reuters claimed that, just before Sam Altman’s brief ouster as CEO, OpenAI employees had internally warned OpenAI’s board of directors about a new OpenAI model called Q* that could “threaten humanity.”

In August, the hype continued when The Information reported that OpenAI showed Strawberry to US national security officials.

We’ve been skeptical about the hype around Q* and Strawberry since the rumors first emerged, as this author noted last November, and Timothy B. Lee covered thoroughly in an excellent post about Q* from last December.

So even though o1 is out, AI industry watchers should note how this model’s impending launch was played up in the press as a dangerous advancement, while OpenAI did little to publicly downplay that framing. For an AI model that takes 108 seconds to solve eight clues in a crossword puzzle and hallucinates one answer, we can say that its potential danger was likely hype (for now).

Controversy over “reasoning” terminology

It’s no secret that some people in tech have issues with anthropomorphizing AI models and using terms like “thinking” or “reasoning” to describe the synthesizing and processing operations that these neural network systems perform.

Just after the OpenAI o1 announcement, Hugging Face CEO Clement Delangue wrote, “Once again, an AI system is not ‘thinking,’ it’s ‘processing,’ ‘running predictions,’… just like Google or computers do. Giving the false impression that technology systems are human is just cheap snake oil and marketing to fool you into thinking it’s more clever than it is.”

“Reasoning” is also a somewhat nebulous term since, even in humans, it’s difficult to define exactly what the term means. A few hours before the announcement, independent AI researcher Simon Willison tweeted in response to a Bloomberg story about Strawberry, “I still have trouble defining ‘reasoning’ in terms of LLM capabilities. I’d be interested in finding a prompt which fails on current models but succeeds on strawberry that helps demonstrate the meaning of that term.”

Reasoning or not, o1-preview currently lacks some features present in earlier models, such as web browsing, image generation, and file uploading. OpenAI plans to add these capabilities in future updates, along with continued development of both the o1 and GPT model series.

While OpenAI says the o1-preview and o1-mini models are rolling out today, neither model is available in our ChatGPT Plus interface yet, so we have not been able to evaluate them. We’ll report our impressions on how this model differs from other LLMs we have previously covered.


Music industry’s 1990s hard drives, like all HDDs, are dying

The spinning song —

The music industry traded tape for hard drives and got a hard-earned lesson.

[Image: Hard drives, unfortunately, tend to die not with a spectacular and sparkly bang, but with a head-is-stuck whimper. Credit: Getty Images]

One of the things enterprise storage and destruction company Iron Mountain does is handle the archiving of the media industry’s vaults. What it has been seeing lately should be a wake-up call: roughly one-fifth of the 1990s-era hard disk drives it has been sent are entirely unreadable.

Music industry publication Mix spoke with the people in charge of backing up the entertainment industry. The resulting tale is part explainer on how music is so complicated to archive now, part warning about everyone’s data stored on spinning disks.

“In our line of work, if we discover an inherent problem with a format, it makes sense to let everybody know,” Robert Koszela, global director for studio growth and strategic initiatives at Iron Mountain, told Mix. “It may sound like a sales pitch, but it’s not; it’s a call for action.”

Hard drives gained popularity over spooled magnetic tape as digital audio workstations and mixing and editing software took hold, and as the perceived downsides of tape, including deterioration from substrate separation and fire risk, became apparent. But hard drives present their own archival problems. Standard hard drives were not designed for long-term archival use, and you can almost never decouple the magnetic platters from the reading hardware inside, so if either fails, the whole drive dies.

There are also general computer storage issues, including the separation of samples and finished tracks, or proprietary file formats requiring archival versions of software. Still, Iron Mountain tells Mix that “If the disk platters spin and aren’t damaged,” it can access the content.

But “if it spins” is becoming a big question mark. Musicians and studios now digging into their archives to remaster tracks often find that drives, even when stored at industry-standard temperature and humidity, have failed in some way, with no partial recovery option available.

“It’s so sad to see a project come into the studio, a hard drive in a brand-new case with the wrapper and the tags from wherever they bought it still in there,” Koszela says. “Next to it is a case with the safety drive in it. Everything’s in order. And both of them are bricks.”

Entropy wins

Mix’s passing along of Iron Mountain’s warning hit Hacker News earlier this week, which spurred other tales of faith in the wrong formats. The gist of it: You cannot trust any medium, so you copy important things over and over, into fresh storage. “Optical media rots, magnetic media rots and loses magnetic charge, bearings seize, flash storage loses charge, etc.,” writes user abracadaniel. “Entropy wins, sometimes much faster than you’d expect.”
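
In code terms, that discipline amounts to hashing everything you store and re-verifying after every copy. The sketch below is illustrative (the directory layout and manifest name are assumptions, not anyone’s production tooling): it writes a SHA-256 manifest for an archive directory and later reports any file whose bits have silently drifted.

```python
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    # Hash in 1 MiB chunks so large session files don't exhaust memory.
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def write_manifest(archive_root: Path, manifest: Path) -> None:
    hashes = {str(p.relative_to(archive_root)): sha256_of(p)
              for p in sorted(archive_root.rglob("*")) if p.is_file()}
    manifest.write_text(json.dumps(hashes, indent=2))

def verify(archive_root: Path, manifest: Path) -> list[str]:
    # Returns the files whose contents no longer match the manifest.
    expected = json.loads(manifest.read_text())
    return [name for name, digest in expected.items()
            if sha256_of(archive_root / name) != digest]

# Example: write_manifest(Path("/archive/masters"), Path("manifest.json")),
# then run verify() against every fresh copy before trusting it.
```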

There is discussion of how SSDs are not archival at all; how floppy disk quality varied greatly between the 1980s, 1990s, and 2000s; how Linear Tape-Open, a format specifically designed for long-term tape storage, loses compatibility over successive generations; how the binder sleeves we put our CD-Rs and DVD-Rs in have allowed them to bend too much and stop being readable.

Knowing that hard drives will eventually fail is nothing new. Ars wrote about the five stages of hard drive death, including denial, back in 2005. Last year, backup company Backblaze shared failure data on specific drives, showing that drives that fail tend to fail within three years, that no drive was totally exempt, and that time does, generally, wear down all drives. Google’s server drive data showed in 2007 that HDD failure was mostly unpredictable, and that temperatures were not really the deciding factor.

So Iron Mountain’s admonition to music companies is yet another warning about something we’ve already heard. But it’s always good to get some new data about just how fragile a good archive really is.


Taylor Swift cites AI deepfakes in endorsement for Kamala Harris

it’s raining creepy men —

Taylor Swift on AI: “The simplest way to combat misinformation is with the truth.”

[Image: A screenshot of Taylor Swift’s Kamala Harris Instagram post, captured on September 11, 2024.]

On Tuesday night, Taylor Swift endorsed Vice President Kamala Harris for US President on Instagram, citing concerns over AI-generated deepfakes as a key motivator. The artist’s warning aligns with current trends in technology, especially in an era where AI synthesis models can easily create convincing fake images and videos.

“Recently I was made aware that AI of ‘me’ falsely endorsing Donald Trump’s presidential run was posted to his site,” she wrote in her Instagram post. “It really conjured up my fears around AI, and the dangers of spreading misinformation. It brought me to the conclusion that I need to be very transparent about my actual plans for this election as a voter. The simplest way to combat misinformation is with the truth.”

In August 2024, former President Donald Trump posted AI-generated images on Truth Social falsely suggesting Swift endorsed him, including a manipulated photo depicting Swift as Uncle Sam with text promoting Trump. The incident sparked Swift’s fears about the spread of misinformation through AI.

This isn’t the first time Swift and generative AI have appeared together in the news. In February, we reported that a flood of explicit AI-generated images of Swift originated from a 4chan message board where users took part in daily challenges to bypass AI image generator filters.

Listing image by Ronald Woan/CC BY-SA 2.0
