On Saturday, a YouTube creator called “ChromaLock” published a video detailing how he modified a Texas Instruments TI-84 graphing calculator to connect to the Internet and access OpenAI’s ChatGPT, potentially enabling students to cheat on tests. The video, titled “I Made The Ultimate Cheating Device,” demonstrates a custom hardware modification that lets users type questions on the calculator’s keypad, send them to ChatGPT, and receive live responses on the screen.
ChromaLock began by exploring the calculator’s link port, typically used for transferring educational programs between devices. He then designed a custom circuit board he calls “TI-32” that incorporates a tiny Wi-Fi-enabled microcontroller, the Seeed Studio ESP32-C3 (which costs about $5), along with other components to interface with the calculator’s systems.
It’s worth noting that the TI-32 hack isn’t a commercial project. Replicating ChromaLock’s work would involve purchasing a TI-84 calculator, a Seeed Studio ESP32-C3 microcontroller, and various electronic components, and fabricating a custom PCB based on ChromaLock’s design, which is available online.
The creator says he encountered several engineering challenges during development, including voltage incompatibilities and signal integrity issues. After developing multiple versions, ChromaLock successfully installed the custom board into the calculator’s housing without any visible signs of modifications from the outside.
To accompany the hardware, ChromaLock developed custom software for the microcontroller and the calculator, which is available open source on GitHub. The system simulates another TI-84, allowing people to use the calculator’s built-in “send” and “get” commands to transfer files. This allows a user to easily download a launcher program that provides access to various “applets” designed for cheating.
One of the applets is a ChatGPT interface that might be most useful for answering short questions, but it has a drawback: typing long alphanumeric questions on the limited keypad is slow and cumbersome.
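ChromaLock’s actual firmware is the open source code mentioned above, but as a rough sketch of what the Wi-Fi half of such a device has to do, the Python snippet below forwards a typed question to OpenAI’s standard chat completions endpoint and wraps the reply for a small monochrome screen. The screen dimensions, helper names, and model choice here are illustrative assumptions, not details from the TI-32 project.

```python
import textwrap
import requests

OPENAI_API_KEY = "sk-..."          # assumption: supply your own key
SCREEN_COLS, SCREEN_ROWS = 26, 10  # rough TI-84 small-font dimensions (assumption)

def ask_chatgpt(question: str) -> str:
    # Standard OpenAI chat completions request; an ESP32 would make an
    # equivalent HTTPS call after receiving the question over the link port.
    resp = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {OPENAI_API_KEY}"},
        json={
            "model": "gpt-4o-mini",
            "messages": [
                {"role": "system", "content": "Answer in one short sentence."},
                {"role": "user", "content": question},
            ],
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

def fit_to_screen(text: str) -> list[str]:
    # Wrap and truncate the reply so it fits the calculator's tiny display.
    return textwrap.wrap(text, SCREEN_COLS)[:SCREEN_ROWS]

for line in fit_to_screen(ask_chatgpt("What is the integral of x^2?")):
    print(line)
```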
Beyond the ChatGPT interface, the device offers several other cheating tools. An image browser allows users to access pre-prepared visual aids stored on the central server. The app browser feature enables students to download not only games for post-exam entertainment but also text-based cheat sheets disguised as program source code. ChromaLock even hinted at a future video discussing a camera feature, though details were sparse in the current demo.
ChromaLock claims his new device can bypass common anti-cheating measures. The launcher program can be downloaded on-demand, avoiding detection if a teacher inspects or clears the calculator’s memory before a test. The modification can also supposedly break calculators out of “Test Mode,” a locked-down state used to prevent cheating.
While the video presents the project as a technical achievement, consulting ChatGPT during a test on your calculator almost certainly represents an ethical breach and/or a form of academic dishonesty that could get you in serious trouble at most schools. So tread carefully, study hard, and remember to eat your Wheaties.
Certificate authorities and browser makers are planning to end the use of WHOIS data for verifying domain ownership, following a report that demonstrated how threat actors could abuse the process to obtain fraudulently issued TLS certificates.
TLS certificates are the cryptographic credentials that underpin HTTPS connections, a critical component of online communications that verifies a server belongs to a trusted entity and encrypts all traffic passing between it and an end user. These credentials are issued by any one of hundreds of CAs (certificate authorities) to domain owners. The rules for how certificates are issued and the process for verifying the rightful owner of a domain are left to the CA/Browser Forum. One “base requirement rule” allows CAs to send an email to an address listed in the WHOIS record for the domain being applied for. When the receiver clicks an enclosed link, the certificate is automatically approved.
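WHOIS itself is a bare-bones protocol defined in RFC 3912: a client opens a TCP connection to port 43, sends the domain name, and reads back free-form text. The sketch below, using a placeholder WHOIS server name and a deliberately naive email regex, shows roughly how validation tooling could pull a contact address out of that text before mailing a verification link; it illustrates the mechanism and is not any CA’s actual code.

```python
import re
import socket

def whois_query(server: str, domain: str) -> str:
    # RFC 3912: send the query terminated by CRLF, then read until the server closes.
    with socket.create_connection((server, 43), timeout=10) as sock:
        sock.sendall(domain.encode() + b"\r\n")
        chunks = []
        while chunk := sock.recv(4096):
            chunks.append(chunk)
    return b"".join(chunks).decode(errors="replace")

def contact_emails(record: str) -> list[str]:
    # Naive extraction; real tooling would parse specific WHOIS fields.
    return sorted(set(re.findall(r"[\w.+-]+@[\w.-]+\.\w+", record)))

# "whois.example-registry.net" is a placeholder; each TLD publishes its own server.
record = whois_query("whois.example-registry.net", "example.mobi")
print(contact_emails(record))
```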
Non-trivial dependencies
Researchers from security firm watchTowr recently demonstrated how threat actors could abuse the rule to obtain fraudulently issued certificates for domains they didn’t own. The security failure resulted from a lack of uniform rules for determining the validity of sites claiming to provide official WHOIS records.
Specifically, watchTowr researchers were able to receive a verification link for any domain ending in .mobi, including ones they didn’t own. The researchers did this by deploying a fake WHOIS server and populating it with fake records. Creation of the fake server was possible because dotmobiregistry.net—the previous domain hosting the WHOIS server for .mobi domains—was allowed to expire after the server was relocated to a new domain. watchTowr researchers registered the domain, set up the imposter WHOIS server, and found that CAs continued to rely on it to verify ownership of .mobi domains.
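Part of what made the expired-domain takeover so potent is how little a WHOIS server actually does. The toy server below (a minimal sketch, not the researchers’ code) answers every query with a record listing an operator-chosen contact address, which is exactly the field a CA relying on WHOIS-listed emails would act on.

```python
import socketserver

FAKE_RECORD = (
    "Domain Name: {domain}\r\n"
    "Registrant Email: attacker@example.com\r\n"  # address the operator controls
)

class FakeWhoisHandler(socketserver.StreamRequestHandler):
    def handle(self):
        # A WHOIS query is just the domain name followed by CRLF.
        domain = self.rfile.readline().strip().decode(errors="replace")
        self.wfile.write(FAKE_RECORD.format(domain=domain).encode())

# Port 43 is the standard WHOIS port; binding to it requires elevated privileges.
with socketserver.ThreadingTCPServer(("0.0.0.0", 43), FakeWhoisHandler) as server:
    server.serve_forever()
```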
The research didn’t escape the notice of the CA/Browser Forum (CAB Forum). On Monday, a member representing Google proposed ending the reliance on WHOIS data for domain ownership verification “in light of recent events where research from watchTowr Labs demonstrated how threat actors could exploit WHOIS to obtain fraudulently issued TLS certificates.”
The formal proposal calls for reliance on WHOIS data to “sunset” in early November. It establishes specifically that “CAs MUST NOT rely on WHOIS to identify Domain Contacts” and that “Effective November 1, 2024, validations using this [email verification] method MUST NOT rely on WHOIS to identify Domain Contact information.”
Since Monday’s submission, more than 50 follow-up comments have been posted. Many of the responses expressed support for the proposed change. Others have questioned the need for a change as proposed, given that the security failure watchTowr uncovered is known to affect only a single top-level domain.
An Amazon representative, meanwhile, noted that the company had previously announced a unilateral change under which AWS Certificate Manager will fully transition away from reliance on WHOIS records. The representative told CAB Forum members that Google’s proposed deadline of November 1 may be too stringent.
“We got feedback from customers that for some this is a non-trivial dependency to remove,” the Amazon representative wrote. “It’s not uncommon for companies to have built automation on top of email validation. Based on the information we got I recommend a date of April 30, 2025.”
CA DigiCert endorsed Amazon’s proposal to extend the deadline. DigiCert went on to propose that, rather than relying on WHOIS records, CAs use the WHOIS successor known as the Registration Data Access Protocol (RDAP).
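Unlike WHOIS’s free-form text, RDAP returns structured JSON over HTTPS, which is much easier for CA tooling to fetch and validate. A minimal lookup, assuming the public rdap.org bootstrap redirector, looks something like this:

```python
import requests

def rdap_domain(domain: str) -> dict:
    # rdap.org redirects the request to the registry's authoritative RDAP server.
    resp = requests.get(f"https://rdap.org/domain/{domain}", timeout=10)
    resp.raise_for_status()
    return resp.json()

data = rdap_domain("example.com")
print(data.get("ldhName"))                # the domain as the registry records it
for event in data.get("events", []):      # registration, expiration, last-changed dates
    print(event.get("eventAction"), event.get("eventDate"))
```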
The proposed changes are formally in the discussion phase of deliberations. It’s unclear when formal voting on the change will begin.
Microsoft announced today that it’s releasing a new app called Windows App as an app for Windows that allows users to run Windows and also Windows apps (it’s also coming to macOS, iOS, web browsers, and is in public preview for Android).
On most of those platforms, Windows App is a replacement for the Microsoft Remote Desktop app, which was used for connecting to a copy of Windows running on a remote computer or server—for some users and IT organizations, a relatively straightforward way to run Windows software on devices that aren’t running Windows or can’t run Windows natively.
The new name, though potentially confusing, attempts to sum up the app’s purpose: It’s a unified way to access your own Windows PCs with Remote Desktop access turned on, cloud-hosted Windows 365 and Microsoft Dev Box systems, and individual remotely hosted apps that have been provisioned by your work or school.
“This unified app serves as your secure gateway to connect to Windows across Windows 365, Azure Virtual Desktop, Remote Desktop, Remote Desktop Services, Microsoft Dev Box, and more,” reads the post from Microsoft’s Windows 365 Senior Product Manager Hilary Braun.
Microsoft says that aside from unifying multiple services into a single app, Windows App’s enhancements include easier account switching, better device management for IT administrators, support for the version of Windows 365 for frontline workers, and support for Microsoft’s “Relayed RDP Shortpath,” which can enable Remote Desktop on networks that normally wouldn’t allow it.
On macOS, iOS, and Android, Windows App is a complete replacement for the Microsoft Remote Desktop app; if you have Remote Desktop installed, an update will change it to Windows App. On Windows, the Remote Desktop Connection app remains available, and Windows App is only used for Microsoft’s other services; it also requires some kind of account sign-in on Windows, while it works without a user account on other platforms.
For connections to your own Remote Desktop-equipped PCs, Windows App has most of the same features and requirements as the Remote Desktop Connection app did before, including support for multiple monitors, device redirection for devices like webcams and audio input/output, and dynamic resolution support (so that your Windows desktop resizes as you resize the app window).
A coalition of law-enforcement agencies said it shut down a service that facilitated the unlocking of more than 1.2 million stolen or lost mobile phones so they could be used by someone other than their rightful owner.
The service was part of iServer, a phishing-as-a-service platform that has been operating since 2018. The Argentina-based iServer sold access to a platform that offered a host of phishing-related services through email, texts, and voice calls. One of the specialized services was designed to help people in possession of large numbers of stolen or lost mobile devices obtain the credentials needed to bypass protections such as the lost mode for iPhones, which prevents a lost or stolen device from being used without its passcode.
Catering to low-skilled thieves
An international operation coordinated by Europol’s European Cybercrime Center said it arrested the Argentinian national who was behind iServer and identified more than 2,000 “unlockers” who had enrolled in the phishing platform over the years. Investigators ultimately found that the criminal network had been used to unlock more than 1.2 million mobile phones. Officials said they also identified 483,000 phone owners who had received messages phishing for credentials for their lost or stolen devices.
According to Group-IB, the security firm that discovered the phone-unlocking racket and reported it to authorities, iServer provided a web interface that allowed low-skilled unlockers to phish the rightful device owners for the device passcodes, user credentials from cloud-based mobile platforms, and other personal information.
Group-IB wrote:
During its investigations into iServer’s criminal activities, Group-IB specialists also uncovered the structure and roles of criminal syndicates operating with the platform: the platform’s owner/developer sells access to “unlockers,” who in their turn provide phone unlocking services to other criminals with locked stolen devices. The phishing attacks are specifically designed to gather data that grants access to physical mobile devices, enabling criminals to acquire users’ credentials and local device passwords to unlock devices or unlink them from their owners. iServer automates the creation and delivery of phishing pages that imitate popular cloud-based mobile platforms, featuring several unique implementations that enhance its effectiveness as a cybercrime tool.
Unlockers obtain the necessary information for unlocking the mobile phones, such as IMEI, language, owner details, and contact information, often accessed through lost mode or via cloud-based mobile platforms. They utilize phishing domains provided by iServer or create their own to set up a phishing attack. After selecting an attack scenario, iServer creates a phishing page and sends an SMS with a malicious link to the victim.
When successful, iServer customers received the stolen and validated credentials through the web interface, enabling them to unlock a phone, turn off lost mode, and untie the device from its owner’s account so it could be used by someone new.
To better camouflage the ruse, iServer often disguised phishing pages as belonging to cloud-based services.
Besides the arrest, authorities also seized the iserver.com domain.
The takedown and arrests occurred from September 10–17 in Spain, Argentina, Chile, Colombia, Ecuador, and Peru. Authorities in those countries began investigating the phishing service in 2022.
Given the flood of photorealistic AI-generated images washing over social media networks like X and Facebook these days, we’re seemingly entering a new age of media skepticism: the era of what I’m calling “deep doubt.” While questioning the authenticity of digital content stretches back decades—and analog media long before that—easy access to tools that generate convincing fake content has led to a new wave of liars using AI-generated scenes to deny real documentary evidence. Along the way, people’s existing skepticism toward online content from strangers may be reaching new heights.
Deep doubt is skepticism of real media that stems from the existence of generative AI. This manifests as broad public skepticism toward the veracity of media artifacts, which in turn leads to a notable consequence: People can now more credibly claim that real events did not happen and suggest that documentary evidence was fabricated using AI tools.
The concept behind “deep doubt” isn’t new, but its real-world impact is becoming increasingly apparent. Since the term “deepfake” first surfaced in 2017, we’ve seen a rapid evolution in AI-generated media capabilities. This has led to recent examples of deep doubt in action, such as conspiracy theorists claiming that President Joe Biden has been replaced by an AI-powered hologram and former President Donald Trump’s baseless accusation in August that Vice President Kamala Harris used AI to fake crowd sizes at her rallies. And on Friday, Trump cried “AI” again over a photo of him with E. Jean Carroll, a writer who successfully sued him for sexual assault; the photo contradicts his claim of never having met her.
Legal scholars Danielle K. Citron and Robert Chesney foresaw this trend years ago, coining the term “liar’s dividend” in 2019 to describe the consequence of deep doubt: deepfakes being weaponized by liars to discredit authentic evidence. But whereas deep doubt was once a hypothetical academic concept, it is now our reality.
The rise of deepfakes, the persistence of doubt
Doubt has been a political weapon since ancient times. This modern AI-fueled manifestation is just the latest evolution of a tactic where the seeds of uncertainty are sown to manipulate public opinion, undermine opponents, and hide the truth. AI is the newest refuge of liars.
Over the past decade, the rise of deep-learning technology has made it increasingly easy for people to craft false or modified pictures, audio, text, or video that appear to be non-synthesized organic media. Deepfakes were named after a Reddit user going by the name “deepfakes,” who shared AI-faked pornography on the service, swapping out the face of a performer with the face of someone else who wasn’t part of the original recording.
In the 20th century, one could argue that a certain part of our trust in media produced by others was a result of how expensive and time-consuming it was, and the skill it required, to produce documentary images and films. Even texts required a great deal of time and skill to produce. As the deep doubt phenomenon grows, it will erode this 20th-century media sensibility. It will also affect our political discourse, legal systems, and even our shared understanding of historical events, all of which rely on that media to function; after all, we depend on others for most of our information about the world. From photorealistic images to pitch-perfect voice clones, our perception of what we consider “truth” in media will need recalibration.
In April, a panel of federal judges highlighted the potential for AI-generated deepfakes to not only introduce fake evidence but also cast doubt on genuine evidence in court trials. The concern emerged during a meeting of the US Judicial Conference’s Advisory Committee on Evidence Rules, where the judges discussed the challenges of authenticating digital evidence in an era of increasingly sophisticated AI technology. Ultimately, the judges decided to postpone making any AI-related rule changes, but their meeting shows that the subject is already being considered by American judges.
As is so often the case, a notable change in an upcoming Linux kernel is both historic and no big deal.
If you wanted to use “Real-Time Linux” for your audio gear, your industrial welding laser, or your Mars rover, you have had that option for a long time (presuming you didn’t want to use QNX or other alternatives). Universities started making their own real-time kernels in the late 1990s. A patch set, PREEMPT_RT, has existed since at least 2005. And some aspects of the real-time work, like NO_HZ, were long ago moved into the mainline kernel, enabling its use in data centers, cloud computing, or anything with a lot of CPUs.
But officialness still matters, and in the 6.12 kernel, PREEMPT_RT will likely be merged into the mainline. As noted by Steven Vaughan-Nichols at ZDNet, the final sign-off by Linus Torvalds occurred while he was attending Open Source Summit Europe. Torvalds wrote the original code for printk, a debugging tool that can pinpoint exact moments where a process crashes, but also introduces latency that runs counter to real-time computing. The Phoronix blog has tracked the progress of PREEMPT_RT into the kernel, along with the printk changes that allowed for threaded/atomic console support crucial to real-time mainlining.
What does this mean for desktop Linux? Not much. Beyond high-end audio production or replication (and even that is debatable), a real-time kernel won’t likely make windows snappier or programs zippier. But the guaranteed execution and worst-case latency timings a real-time Linux provides are quite useful to, say, the systems that monitor car brakes, guide CNC machines, and regulate fiendishly complex multi-CPU systems. Having PREEMPT_RT in the mainline kernel makes it easier to maintain a real-time system, rather than having to tend out-of-tree patches.
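What “real time” buys you is a bound on worst-case latency rather than more throughput. A crude way to get a feel for it (nowhere near the rigor of dedicated tools like cyclictest) is to request the SCHED_FIFO real-time scheduling class and measure how far periodic wakeups overshoot; the sketch below assumes Linux and root privileges.

```python
import os
import statistics
import time

# Request real-time FIFO scheduling at priority 80 (needs root or CAP_SYS_NICE).
os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(80))

period_s = 0.001  # ask for 1 ms sleeps
overshoots_us = []
for _ in range(2000):
    start = time.monotonic_ns()
    time.sleep(period_s)
    elapsed_ns = time.monotonic_ns() - start
    overshoots_us.append((elapsed_ns - period_s * 1e9) / 1000)

print(f"median wakeup overshoot: {statistics.median(overshoots_us):.1f} µs")
print(f"worst wakeup overshoot:  {max(overshoots_us):.1f} µs")
```

On a stock kernel the worst-case number can spike under load; the point of PREEMPT_RT is to keep that tail bounded.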
It will likely change things for what had been, until now, specialty providers of real-time OS solutions for mission-critical systems. Ubuntu, for example, started offering a real-time version of its distribution in 2023 but required an Ubuntu Pro subscription for access. Ubuntu pitched its release at robotics, automation, embedded Linux, and other real-time needs, with the fixes, patches, module integration, and testing provided by Ubuntu.
“Controlling a laser with Linux is crazy,” Torvalds said at the Kernel Summit of 2006, “but everyone in this room is crazy in his own way. So if you want to use Linux to control an industrial welding laser, I have no problem with your using PREEMPT_RT.” Roughly 18 years later, Torvalds and the kernel team, including longtime maintainer and champion-of-real-time Steven Rostedt, have made it even easier to do that kind of thing.
On Wednesday, AI video synthesis firm Runway and entertainment company Lionsgate announced a partnership to create a new AI model trained on Lionsgate’s vast film and TV library. The deal will feed Runway legally clear training data and will also reportedly provide Lionsgate with tools to enhance content creation while potentially reducing production costs.
Lionsgate, known for franchises like John Wick and The Hunger Games, sees AI as a way to boost efficiency in content production. Michael Burns, Lionsgate’s vice chair, stated in a press release that AI could help develop “cutting edge, capital efficient content creation opportunities.” He added that some filmmakers have shown enthusiasm about potential applications in pre- and post-production processes.
Runway plans to develop a custom AI model using Lionsgate’s proprietary content portfolio. The model will be exclusive to Lionsgate Studios, allowing filmmakers, directors, and creative staff to augment their work. While specifics remain unclear, the partnership marks the first major collaboration between Runway and a Hollywood studio.
“We’re committed to giving artists, creators and studios the best and most powerful tools to augment their workflows and enable new ways of bringing their stories to life,” said Runway co-founder and CEO Cristóbal Valenzuela in a press release. “The history of art is the history of technology and these new models are part of our continuous efforts to build transformative mediums for artistic and creative expression; the best stories are yet to be told.”
The quest for legal training data
Generative AI models are master imitators, and video synthesis models like Runway’s latest Gen-3 Alpha are no exception. The companies that create them must amass a great deal of existing video (and still image) samples to analyze, allowing the resulting AI models to re-synthesize that information into new video generations, guided by text descriptions called prompts. And wherever that training data is lacking, it can result in unusual generations, as we saw in our hands-on evaluation of Gen-3 Alpha in July.
However, in the past, AI companies have gotten into legal trouble for scraping vast quantities of media without permission. In fact, Runway is currently a defendant in a class-action lawsuit that alleges copyright infringement for using video data obtained without permission to train its video synthesis models. While companies like OpenAI have claimed this scraping process is “fair use,” US courts have not yet definitively ruled on the practice. With other potential legal challenges ahead, it makes sense from Runway’s perspective to reach out and sign deals for training data that is completely in the clear.
Even if the training data becomes fully legal and licensed, different elements of the entertainment industry view generative AI on a spectrum that seems to range between fascination and horror. The technology’s ability to rapidly create images and video based on prompts may attract studios looking to streamline production. However, it raises polarizing concerns: among unions about job security, among actors and musicians about likeness misuse and ethics, and among studios about legal implications.
So far, news of the deal has not been received kindly among vocal AI critics found on social media. On X, filmmaker and AI critic Joe Russo wrote, “I don’t think I’ve ever seen a grosser string of words than: ‘to develop cutting-edge, capital-efficient content creation opportunities.'”
Film concept artist Reid Southen shared a similar negative take on X: “I wonder how the directors and actors of their films feel about having their work fed into the AI to make a proprietary model. As an artist on The Hunger Games? I’m pissed. This is the first step in trying to replace artists and filmmakers.”
It’s a fear that we will likely hear more about in the future as AI video synthesis technology grows more capable—and potentially becomes adopted as a standard filmmaking tool. As studios explore AI applications despite legal uncertainties and labor concerns, partnerships like the Lionsgate-Runway deal may shape the future of content creation in Hollywood.
The FBI has dismantled a massive network of compromised devices that Chinese state-sponsored hackers have used for four years to mount attacks on government agencies, telecoms, defense contractors, and other targets in the US and Taiwan.
The botnet was made up primarily of small office and home office routers, surveillance cameras, network-attached storage, and other Internet-connected devices located all over the world. Over the past four years, US officials said, 260,000 such devices have cycled through the sophisticated network, which is organized in three tiers that allow the botnet to operate with efficiency and precision. At its peak in June 2023, Raptor Train, as the botnet is named, consisted of more than 60,000 commandeered devices, according to researchers from Black Lotus Labs, making it the largest China state botnet discovered to date.
Burning down the house
Raptor Train is the second China state-operated botnet US authorities have taken down this year. In January, law enforcement officials covertly issued commands to disinfect Internet of Things devices that hackers backed by the Chinese government had taken over without the device owners’ knowledge. The Chinese hackers, part of a group tracked as Volt Typhoon, used the botnet for more than a year as a platform to deliver exploits that burrowed deep into the networks of targets of interest. Because the attacks appear to originate from IP addresses with good reputations, they are subjected to less scrutiny from network security defenses, making the bots an ideal delivery proxy. Russia-state hackers have also been caught assembling large IoT botnets for the same purposes.
An advisory jointly issued Wednesday by the FBI, the Cyber National Mission Force, and the National Security Agency said that China-based company Integrity Technology Group controlled and managed Raptor Train. The company, officials said, has ties to the People’s Republic of China and has used IP addresses from the state-controlled China Unicom Beijing Province Network to control and manage the botnet. Researchers and law enforcement track the China-state group that worked with Integrity Technology as Flax Typhoon. More than half of the infected Raptor Train devices were located in North America, and another 25 percent were in Europe.
“Flax Typhoon was targeting critical infrastructure across the US and overseas, everyone from corporations and media organizations to universities and government agencies,” FBI Director Christopher Wray said Wednesday at the Aspen Cyber Summit. “Like Volt Typhoon, they used Internet-connected devices, this time hundreds of thousands of them, to create a botnet that helped them compromise systems and exfiltrate confidential data.” He added: “Flax Typhoon’s actions caused real harm to its victims who had to devote precious time to clean up the mess.”
On Tuesday, Google announced plans to implement content authentication technology across its products to help users distinguish between human-created and AI-generated images. Over the coming months, the tech giant will integrate the Coalition for Content Provenance and Authenticity (C2PA) standard, a system designed to track the origin and editing history of digital content, into its search, ads, and potentially YouTube services. However, it remains an open question whether a technological solution can address the ancient social issue of trust in recorded media produced by strangers.
A group of tech companies created the C2PA system beginning in 2019 in an attempt to combat misleading, realistic synthetic media online. As AI-generated content becomes more prevalent and realistic, experts have worried that it may be difficult for users to determine the authenticity of images they encounter. The C2PA standard creates a digital trail for content, backed by an online signing authority, that includes metadata information about where images originate and how they’ve been modified.
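For the curious, C2PA manifests can already be inspected with open source tooling. The sketch below shells out to the Content Authenticity Initiative’s c2patool command-line utility, assuming it is installed on the system; the file name is a placeholder.

```python
import json
import subprocess

def read_c2pa_manifest(path: str):
    # c2patool prints a file's C2PA manifest store as JSON on stdout;
    # it exits with an error when no manifest is present.
    result = subprocess.run(["c2patool", path], capture_output=True, text=True)
    if result.returncode != 0:
        return None
    return json.loads(result.stdout)

manifest = read_c2pa_manifest("photo.jpg")
if manifest is None:
    print("No C2PA provenance data found (never added, or stripped along the way).")
else:
    # The manifest records which tool signed the file and assertions about
    # how the asset was created or edited.
    print(json.dumps(manifest, indent=2))
```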
Google will incorporate this C2PA standard into its search results, allowing users to see if an image was created or edited using AI tools. The tech giant’s “About this image” feature in Google Search, Lens, and Circle to Search will display this information when available.
In a blog post, Laurie Richardson, Google’s vice president of trust and safety, acknowledged the complexities of establishing content provenance across platforms. She stated, “Establishing and signaling content provenance remains a complex challenge, with a range of considerations based on the product or service. And while we know there’s no silver bullet solution for all content online, working with others in the industry is critical to create sustainable and interoperable solutions.”
The company plans to use the C2PA’s latest technical standard, version 2.1, which reportedly offers improved security against tampering attacks. Its use will extend beyond search since Google intends to incorporate C2PA metadata into its ad systems as a way to “enforce key policies.” YouTube may also see integration of C2PA information for camera-captured content in the future.
Google says the new initiative aligns with its other efforts toward AI transparency, including the development of SynthID, an embedded watermarking technology created by Google DeepMind.
Widespread C2PA efficacy remains a dream
Despite a history that reaches back at least five years, the road to useful content provenance technology like C2PA remains steep. The technology is entirely voluntary, and key authenticating metadata can easily be stripped from images once added.
AI image generators would need to support the standard for C2PA information to be included in each generated file, something open source image synthesis models like Flux are unlikely to do. So perhaps, in practice, more “authentic,” camera-authored media will be labeled with C2PA than AI-generated images.
Beyond that, maintaining the metadata requires a complete toolchain that supports C2PA every step along the way, including at the source and any software used to edit or retouch the images. Currently, only a handful of camera manufacturers, such as Leica, support the C2PA standard. Nikon and Canon have pledged to adopt it, but The Verge reports that there’s still uncertainty about whether Apple and Google will implement C2PA support in their smartphone devices.
Adobe’s Photoshop and Lightroom can add and maintain C2PA data, but many other popular editing tools do not yet offer the capability. It only takes one non-compliant image editor in the chain to break the full usefulness of C2PA. And the general lack of standardized viewing methods for C2PA data across online platforms presents another obstacle to making the standard useful for everyday users.
C2PA could arguably be seen as a technological solution to current trust issues around fake images. In that sense, C2PA may become one of many tools used to authenticate content by determining whether the information came from a credible source—if the C2PA metadata is preserved—but it is unlikely to be a complete solution to AI-generated misinformation on its own.
Researchers still don’t know the cause of a recently discovered malware infection affecting almost 1.3 million streaming devices running an open source version of Android in almost 200 countries.
Security firm Doctor Web reported Thursday that malware named Android.Vo1d has backdoored the Android-based boxes by putting malicious components in their system storage area, where they can be updated with additional malware at any time by command-and-control servers. Google representatives said the infected devices are running operating systems based on the Android Open Source Project, a version overseen by Google but distinct from Android TV, a proprietary version restricted to licensed device makers.
Dozens of variants
Although Doctor Web has a thorough understanding of Vo1d and the exceptional reach it has achieved, company researchers say they have yet to determine the attack vector that has led to the infections.
“At the moment, the source of the TV boxes’ backdoor infection remains unknown,” Thursday’s post stated. “One possible infection vector could be an attack by an intermediate malware that exploits operating system vulnerabilities to gain root privileges. Another possible vector could be the use of unofficial firmware versions with built-in root access.”
The infected device models identified by Doctor Web, along with the firmware versions they declared, are:

R4: Android 7.1.2; R4 Build/NHG47K
TV BOX: Android 12.1; TV BOX Build/NHG47K
KJ-SMART4KVIP: Android 10.1; KJ-SMART4KVIP Build/NHG47K
One possible cause of the infections is that the devices are running outdated versions that are vulnerable to exploits that remotely execute malicious code on them. Versions 7.1, 10.1, and 12.1, for example, were released in 2016, 2019, and 2022, respectively. What’s more, Doctor Web said it’s not unusual for budget device manufacturers to install older OS versions in streaming boxes and make them appear more attractive by passing them off as more up-to-date models.
Further, while only licensed device makers are permitted to modify Google’s Android TV, any device maker is free to make changes to open source versions. That leaves open the possibility that the devices were infected in the supply chain and were already compromised by the time they were purchased by the end user.
“These off-brand devices discovered to be infected were not Play Protect certified Android devices,” Google said in a statement. “If a device isn’t Play Protect certified, Google doesn’t have a record of security and compatibility test results. Play Protect certified Android devices undergo extensive testing to ensure quality and user safety.”
The statement said people can confirm a device runs Android TV OS by checking this link and following the steps listed here.
Doctor Web said that there are dozens of Vo1d variants that use different code and plant malware in slightly different storage areas, but that all achieve the same end result of connecting to an attacker-controlled server and installing a final component that can install additional malware when instructed. VirusTotal shows that most of the Vo1d variants were first uploaded to the malware identification site several months ago.
Researchers wrote:
All these cases involved similar signs of infection, so we will describe them using one of the first requests we received as an example. The following objects were changed on the affected TV box:
install-recovery.sh
daemonsu
In addition, 4 new files emerged in its file system:
/system/xbin/vo1d
/system/xbin/wd
/system/bin/debuggerd
/system/bin/debuggerd_real
The vo1d and wd files are the components of the Android.Vo1d trojan that we discovered.
The trojan’s authors probably tried to disguise one of its components as the system program /system/bin/vold, having called it by the similar-looking name “vo1d” (substituting the lowercase letter “l” with the number “1”). The malicious program’s name comes from the name of this file. Moreover, this spelling is consonant with the English word “void”.
The install-recovery.sh file is a script that is present on most Android devices. It runs when the operating system is launched and contains data for autorunning the elements specified in it. If any malware has root access and the ability to write to the /system system directory, it can anchor itself in the infected device by adding itself to this script (or by creating it from scratch if it is not present in the system). Android.Vo1d has registered the autostart for the wd component in this file.
The daemonsu file is present on many Android devices with root access. It is launched by the operating system when it starts and is responsible for providing root privileges to the user. Android.Vo1d registered itself in this file, too, having also set up autostart for the wd module.
The debuggerd file is a daemon that is typically used to create reports on errors that have occurred. But when the TV box was infected, this file was replaced by the script that launches the wd component.
The debuggerd_real file in the case we are reviewing is a copy of the script that was used to substitute the real debuggerd file. Doctor Web experts believe that the trojan’s authors intended the original debuggerd to be moved into debuggerd_real to maintain its functionality. However, because the infection probably occurred twice, the trojan moved the already substituted file (i.e., the script). As a result, the device had two scripts from the trojan and not a single real debuggerd program file.
At the same time, other users who contacted us had a slightly different list of files on their infected devices:
debuggerd_real (the original file of the debuggerd tool);
install-recovery.sh (a script that loads objects specified in it).
An analysis of all the aforementioned files showed that in order to anchor Android.Vo1d in the system, its authors used at least three different methods: modification of the install-recovery.sh and daemonsu files and substitution of the debuggerd program. They probably expected that at least one of the target files would be present in the infected system, since manipulating even one of them would ensure the trojan’s successful auto launch during subsequent device reboots.
Android.Vo1d’s main functionality is concealed in its vo1d (Android.Vo1d.1) and wd (Android.Vo1d.3) components, which operate in tandem. The Android.Vo1d.1 module is responsible for Android.Vo1d.3’s launch and controls its activity, restarting its process if necessary. In addition, it can download and run executables when commanded to do so by the C&C server. In turn, the Android.Vo1d.3 module installs and launches the Android.Vo1d.5 daemon that is encrypted and stored in its body. This module can also download and run executables. Moreover, it monitors specified directories and installs the APK files that it finds in them.
The geographic distribution of the infections is wide, with the biggest number detected in Brazil, Morocco, Pakistan, Saudi Arabia, Russia, Argentina, Ecuador, Tunisia, Malaysia, Algeria, and Indonesia.
It’s not especially easy for less experienced people to check if a device is infected short of installing malware scanners. Doctor Web said its antivirus software for Android will detect all Vo1d variants and disinfect devices that provide root access. More experienced users can check indicators of compromise here.
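For those comfortable with a command line, one rough self-check is to probe the file paths Doctor Web names in its report over adb. This assumes the box exposes USB debugging (many cheap devices don’t) and covers only the paths published above; the absence of these files is not proof a device is clean.

```python
import subprocess

# File paths named as indicators of compromise in Doctor Web's writeup above.
IOC_PATHS = [
    "/system/xbin/vo1d",
    "/system/xbin/wd",
    "/system/bin/debuggerd_real",
]

def adb_file_exists(path: str) -> bool:
    # Older adb versions don't propagate exit codes, so check the output instead.
    result = subprocess.run(
        ["adb", "shell", "ls", path],
        capture_output=True, text=True,
    )
    return "No such file" not in (result.stdout + result.stderr)

for path in IOC_PATHS:
    status = "PRESENT (suspicious)" if adb_file_exists(path) else "not found"
    print(f"{path}: {status}")
```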
On Thursday, Google made Gemini Live, its voice-based AI chatbot feature, available for free to all Android users. The feature allows users to interact with Gemini through voice commands on their Android devices. That’s notable because competitor OpenAI’s Advanced Voice Mode feature of ChatGPT, which is similar to Gemini Live, has not yet fully shipped.
Google unveiled Gemini Live during its Pixel 9 launch event last month. Initially, the feature was exclusive to Gemini Advanced subscribers, but now it’s accessible to anyone using the Gemini app or its overlay on Android.
Gemini Live enables users to ask questions aloud and even interrupt the AI’s responses mid-sentence. Users can choose from several voice options for Gemini’s responses, adding a level of customization to the interaction.
Gemini suggests the following uses of the voice mode in its official help documents:
Talk back and forth: Talk to Gemini without typing, and Gemini will respond back verbally.
Brainstorm ideas out loud: Ask for a gift idea, to plan an event, or to make a business plan.
Explore: Uncover more details about topics that interest you.
Practice aloud: Rehearse for important moments in a more natural and conversational way.
Interestingly, while OpenAI originally demoed its Advanced Voice Mode in May with the launch of GPT-4o, it only began shipping the feature to a limited number of users in late July. Some AI experts speculate that a wider rollout has been hampered by a lack of available computing power, since the voice feature is presumably very compute-intensive.
To access Gemini Live, users can reportedly tap a new waveform icon in the bottom-right corner of the app or overlay. This action activates the microphone, allowing users to pose questions verbally. The interface includes options to “hold” Gemini’s answer or “end” the conversation, giving users control over the flow of the interaction.
Currently, Gemini Live supports only English, but Google has announced plans to expand language support in the future. The company also intends to bring the feature to iOS devices, though no specific timeline has been provided for this expansion.
United Airlines announced this morning that it is giving its in-flight Internet access an upgrade. It has signed a deal with Starlink to deliver SpaceX’s satellite-based service to all its aircraft, a process that will start in 2025. And the good news for passengers is that the in-flight Wi-Fi will be free of charge.
The flying experience as it relates to consumer technology has come a very long way in the two-and-a-bit decades that Ars has been publishing. At the turn of the century, even having a power socket in your seat was a long shot. Laptop batteries didn’t last that long, either—usually less than the runtime of whatever DVD I hoped to distract myself with, if memory serves.
Bring a spare battery and that might double, but it helped to have a book or magazine to read.
By 2011, the picture had changed. Wi-Fi was no longer some esoteric thing known only to nerds who built their own computers, and smartphones and tablets were on their way to ubiquity. After an aborted attempt in 2004, in-flight Internet access became a reality in North America in 2008, although the air-to-ground cellular-based system was slow, unreliable, and expensive.
Air-to-ground Internet access was maybe slightly cheaper by 2018, but it was still frustrating and slow, particularly if you were, oh, I dunno, a journalist trying to upload images to a CMS on your way back from an event. But by then, there was a better alternative—satellites. Airliners started sporting new antenna-concealing blisters, and soon, we were all streaming and posting and working our way across the skies.
Enter SpaceX
That bandwidth was courtesy of Viasat, according to all the receipts in my expense reports, but in 2022, SpaceX announced that it was adding aviation to Starlink’s portfolio. Initially, Starlink only targeted smaller regional and private jet aircraft, but now its equipment is also certified for commercial passenger planes from Airbus and Boeing and is already in use with carriers including Qatar Airways and Air New Zealand.
United says it will start testing Starlink equipment early in 2025, with the first use on passenger flights later that year. The service will be available gate-to-gate (as opposed to only working above 10,000 feet, a restriction some other systems operate under), and it certainly sounds like a superior experience to current in-flight Internet, as it will explicitly allow streaming of both video and games, and multiple connected devices at once. Better yet, United says the service will be free for passengers.
Depending on the route you fly, you may need to have some patience, though. United says it will take several years to install Starlink systems on its more than 1,000 aircraft.