Biz & IT

tails-os-joins-forces-with-tor-project-in-merger

Tails OS joins forces with Tor Project in merger

COME TOGETHER —

The organizations have worked closely together over the years.

Tails OS joins forces with Tor Project in merger

The Tor Project

The Tor Project, the nonprofit that maintains software for the Tor anonymity network, is joining forces with Tails, the maker of a portable operating system that uses Tor. Both organizations seek to pool resources, lower overhead, and collaborate more closely on their mission of online anonymity.

Tails and the Tor Project began discussing the possibility of merging late last year, the two organizations said. At the time, Tails was maxing out its current resources. The two groups ultimately decided it would be mutually beneficial for them to come together.

Amnesic onion routing

“Rather than expanding Tails’s operational capacity on their own and putting more stress on Tails workers, merging with the Tor Project, with its larger and established operational framework, offered a solution,” Thursday’s joint statement said. “By joining forces, the Tails team can now focus on their core mission of maintaining and improving Tails OS, exploring more and complementary use cases while benefiting from the larger organizational structure of The Tor Project.”

The Tor Project, for its part, could stand to benefit from better integration of Tails into its privacy network, which allows web users and websites to operate anonymously by connecting from IP addresses that can’t be linked to a specific service or user.

The “Tor” in the Tor Project is short for The Onion Router. It’s a global project best known for developing the Tor Browser, which connects to the Tor network. The Tor network routes all incoming and outgoing traffic through a series of three IP addresses. The structure ensures that no one can determine the IP address of either originating or destination party. The Tor Project was formed in 2006 by a team that included computer scientists Roger Dingledine and Nick Mathewson. The Tor protocol on which the Tor network runs was developed by the Naval Research Laboratory in the early 2000s.

Tails (The Amnesic Incognito Live System) is a portable Linux-based operating system that runs from thumb drives and external hard drives and uses the Tor browser to route all web traffic between the device it runs on and the Internet. Tails routes outgoing traffic through the Tor Network

One of the key advantages of Tails OS is its ability to run entirely from a USB stick. The design makes it possible to use the secure operating system while traveling or using untrusted devices. It also ensures that no trace is left on a device’s hard drive. Tails has the additional benefit of routing traffic from non-browser clients such as Thunderbird through the Tor network.

“Incorporating Tails into the Tor Project’s structure allows for easier collaboration, better sustainability, reduced overhead, and expanded training and outreach programs to counter a larger number of digital threats,” the organizations said. “In short, coming together will strengthen both organizations’ ability to protect people worldwide from surveillance and censorship.”

The merger comes amid growing threats to personal privacy and calls by lawmakers to mandate backdoors or trapdoors in popular apps and operating systems to allow law enforcement to decrypt data in investigations.

Tails OS joins forces with Tor Project in merger Read More »

exponential-growth-brews-1-million-ai-models-on-hugging-face

Exponential growth brews 1 million AI models on Hugging Face

The sand has come alive —

Hugging Face cites community-driven customization as fuel for diverse AI model boom.

The Hugging Face logo in front of shipping containers.

On Thursday, AI hosting platform Hugging Face surpassed 1 million AI model listings for the first time, marking a milestone in the rapidly expanding field of machine learning. An AI model is a computer program (often using a neural network) trained on data to perform specific tasks or make predictions. The platform, which started as a chatbot app in 2016 before pivoting to become an open source hub for AI models in 2020, now hosts a wide array of tools for developers and researchers.

The machine-learning field represents a far bigger world than just large language models (LLMs) like the kind that power ChatGPT. In a post on X, Hugging Face CEO Clément Delangue wrote about how his company hosts many high-profile AI models, like “Llama, Gemma, Phi, Flux, Mistral, Starcoder, Qwen, Stable diffusion, Grok, Whisper, Olmo, Command, Zephyr, OpenELM, Jamba, Yi,” but also “999,984 others.”

The reason why, Delangue says, stems from customization. “Contrary to the ‘1 model to rule them all’ fallacy,” he wrote, “smaller specialized customized optimized models for your use-case, your domain, your language, your hardware and generally your constraints are better. As a matter of fact, something that few people realize is that there are almost as many models on Hugging Face that are private only to one organization—for companies to build AI privately, specifically for their use-cases.”

A Hugging Face-supplied chart showing the number of AI models added to Hugging Face over time, month to month.

Enlarge / A Hugging Face-supplied chart showing the number of AI models added to Hugging Face over time, month to month.

Hugging Face’s transformation into a major AI platform follows the accelerating pace of AI research and development across the tech industry. In just a few years, the number of models hosted on the site has grown dramatically along with interest in the field. On X, Hugging Face product engineer Caleb Fahlgren posted a chart of models created each month on the platform (and a link to other charts), saying, “Models are going exponential month over month and September isn’t even over yet.”

The power of fine-tuning

As hinted by Delangue above, the sheer number of models on the platform stems from the collaborative nature of the platform and the practice of fine-tuning existing models for specific tasks. Fine-tuning means taking an existing model and giving it additional training to add new concepts to its neural network and alter how it produces outputs. Developers and researchers from around the world contribute their results, leading to a large ecosystem.

For example, the platform hosts many variations of Meta’s open-weights Llama models that represent different fine-tuned versions of the original base models, each optimized for specific applications.

Hugging Face’s repository includes models for a wide range of tasks. Browsing its models page shows categories such as image-to-text, visual question answering, and document question answering under the “Multimodal” section. In the “Computer Vision” category, there are sub-categories for depth estimation, object detection, and image generation, among others. Natural language processing tasks like text classification and question answering are also represented, along with audio, tabular, and reinforcement learning (RL) models.

A screenshot of the Hugging Face models page captured on September 26, 2024.

Enlarge / A screenshot of the Hugging Face models page captured on September 26, 2024.

Hugging Face

When sorted for “most downloads,” the Hugging Face models list reveals trends about which AI models people find most useful. At the top, with a massive lead at 163 million downloads, is Audio Spectrogram Transformer from MIT, which classifies audio content like speech, music, and environmental sounds. Following that, with 54.2 million downloads, is BERT from Google, an AI language model that learns to understand English by predicting masked words and sentence relationships, enabling it to assist with various language tasks.

Rounding out the top five AI models are all-MiniLM-L6-v2 (which maps sentences and paragraphs to 384-dimensional dense vector representations, useful for semantic search), Vision Transformer (which processes images as sequences of patches to perform image classification), and OpenAI’s CLIP (which connects images and text, allowing it to classify or describe visual content using natural language).

No matter what the model or the task, the platform just keeps growing. “Today a new repository (model, dataset or space) is created every 10 seconds on HF,” wrote Delangue. “Ultimately, there’s going to be as many models as code repositories and we’ll be here for it!”

Exponential growth brews 1 million AI models on Hugging Face Read More »

openai’s-murati-shocks-with-sudden-departure-announcement

OpenAI’s Murati shocks with sudden departure announcement

thinning crowd —

OpenAI CTO’s resignation coincides with news about the company’s planned restructuring.

Mira Murati, Chief Technology Officer of OpenAI, speaks during The Wall Street Journal's WSJ Tech Live Conference in Laguna Beach, California on October 17, 2023.

Enlarge / Mira Murati, Chief Technology Officer of OpenAI, speaks during The Wall Street Journal’s WSJ Tech Live Conference in Laguna Beach, California on October 17, 2023.

On Wednesday, OpenAI Chief Technical Officer Mira Murati announced she is leaving the company in a surprise resignation shared on the social network X. Murati joined OpenAI in 2018, serving for six-and-a-half years in various leadership roles, most recently as the CTO.

“After much reflection, I have made the difficult decision to leave OpenAI,” she wrote in a letter to the company’s staff. “While I’ll express my gratitude to many individuals in the coming days, I want to start by thanking Sam and Greg for their trust in me to lead the technical organization and for their support throughout the years,” she continued, referring to OpenAI CEO Sam Altman and President Greg Brockman. “There’s never an ideal time to step away from a place one cherishes, yet this moment feels right.”

At OpenAI, Murati was in charge of overseeing the company’s technical strategy and product development, including the launch and improvement of DALL-E, Codex, Sora, and the ChatGPT platform, while also leading research and safety teams. In public appearances, Murati often spoke about ethical considerations in AI development.

Murati’s decision to leave the company comes when OpenAI finds itself at a major crossroads with a plan to alter its nonprofit structure. According to a Reuters report published today, OpenAI is working to reorganize its core business into a for-profit benefit corporation, removing control from its nonprofit board. The move, which would give CEO Sam Altman equity in the company for the first time, could potentially value OpenAI at $150 billion.

Murati stated her decision to leave was driven by a desire to “create the time and space to do my own exploration,” though she didn’t specify her future plans.

Proud of safety and research work

OpenAI CTO Mira Murati seen debuting GPT-4o during OpenAI's Spring Update livestream on May 13, 2024.

Enlarge / OpenAI CTO Mira Murati seen debuting GPT-4o during OpenAI’s Spring Update livestream on May 13, 2024.

OpenAI

In her departure announcement, Murati highlighted recent developments at OpenAI, including innovations in speech-to-speech technology and the release of OpenAI o1. She cited what she considers the company’s progress in safety research and the development of “more robust, aligned, and steerable” AI models.

Altman replied to Murati’s tweet directly, expressing gratitude for Murati’s contributions and her personal support during challenging times, likely referring to the tumultuous period in November 2023 when the OpenAI board of directors briefly fired Altman from the company.

It’s hard to overstate how much Mira has meant to OpenAI, our mission, and to us all personally,” he wrote. “I feel tremendous gratitude towards her for what she has helped us build and accomplish, but I most of all feel personal gratitude towards her for the support and love during all the hard times. I am excited for what she’ll do next.”

Not the first major player to leave

An image Ilya Sutskever tweeted with this OpenAI resignation announcement. From left to right: OpenAI Chief Scientist Jakub Pachocki, President Greg Brockman (on leave), Sutskever (now former Chief Scientist), CEO Sam Altman, and soon-to-be-former CTO Mira Murati.

Enlarge / An image Ilya Sutskever tweeted with this OpenAI resignation announcement. From left to right: OpenAI Chief Scientist Jakub Pachocki, President Greg Brockman (on leave), Sutskever (now former Chief Scientist), CEO Sam Altman, and soon-to-be-former CTO Mira Murati.

With Murati’s exit, Altman remains one of the few long-standing senior leaders at OpenAI, which has seen significant shuffling in its upper ranks recently. In May 2024, former Chief Scientist Ilya Sutskever left to form his own company, Safe Superintelligence, Inc. (SSI), focused on building AI systems that far surpass humans in logical capabilities. That came just six months after Sutskever’s involvement in the temporary removal of Altman as CEO.

John Schulman, an OpenAI co-founder, departed earlier in 2024 to join rival AI firm Anthropic, and in August, OpenAI President Greg Brockman announced he would be taking a temporary sabbatical until the end of the year.

The leadership shuffles have raised questions among critics about the internal dynamics at OpenAI under Altman and the state of OpenAI’s future research path, which has been aiming toward creating artificial general intelligence (AGI)—a hypothetical technology that could potentially perform human-level intellectual work.

“Question: why would key people leave an organization right before it was just about to develop AGI?” asked xAI developer Benjamin De Kraker in a post on X just after Murati’s announcement. “This is kind of like quitting NASA months before the moon landing,” he wrote in a reply. “Wouldn’t you wanna stick around and be part of it?”

Altman mentioned that more information about transition plans would be forthcoming, leaving questions about who will step into Murati’s role and how OpenAI will adapt to this latest leadership change as the company is poised to adopt a corporate structure that may consolidate more power directly under Altman. “We’ll say more about the transition plans soon, but for now, I want to take a moment to just feel thanks,” Altman wrote.

OpenAI’s Murati shocks with sudden departure announcement Read More »

talking-to-chatgpt-for-the-first-time-is-a-surreal-experience

Talking to ChatGPT for the first time is a surreal experience

Saying hello —

Listen to our first audio demo with OpenAI’s new natural voice chat features.

Putting the

Enlarge / Putting the “chat” in ChatGPT

Getty Images

In May, when OpenAI first demonstrated ChatGPT-4o’s coming audio conversation capabilities, I wrote that it felt like we were “on the verge of something… like a sea change in how we think of and work with large language models.” Now that those “Advanced Voice” features are rolling out widely to ChatGPT subscribers, we decided to ask ChatGPT to explain, in its own voice, how this new method of interaction might impact our collective relationship with large language models.

That chat, which you can listen to and read a transcript of below, shouldn’t be treated as an interview with an official OpenAI spokesperson or anything. Still, it serves as a fun way to offer an initial test of ChatGPT’s live conversational chops.

Our first quick chat with the ChatGPT-4o’s new “Advanced Voice” features.

Even in this short introductory “chat,” we were impressed by the natural, dare-we-say human cadence and delivery of ChatGPT’s “savvy and relaxed” Sol voice (which reminds us a bit of ’90s Janeane Garofalo). Between ChatGPT’s ability to give quick responses—offered in in milliseconds rather than seconds—and convincing intonation, it’s incredibly easy to fool yourself into thinking you’re speaking to a conscious being rather than what is, as ChatGPT says here, “still just a computer program processing information, without real emotions or consciousness.”

Regardless of how self-aware that information-processing engine is or is not, it’s easy to envision people forming an intense, parasocial relationship with the smooth voices at the other end of OpenAI’s servers. That seems set to open up a whole new era in our collective relationship with AI, even if the large language model generates the exact same words it did in the more text-based era of previous ChatGPT models.

We’ll have much more coverage of Advanced Voice and other new ChatGPT features in the future. For now, listen to this demo and judge for yourself how convincing OpenAI’s new voice synthesis technology is.

Kyle Orland: “Hi there, my name’s Kyle.”

ChatGPT: Hey Kyle! Great to meet you. What’s up?

KO: Well, I was hoping to test out your vocal capabilities, maybe by asking you a little bit about them.

ChatGPT: Absolutely, ask away! What would you like to know?

KO: “So how do you think people are going to react to the ability to talk to ChatGPT rather than just typing out messages?”

ChatGPT: I think it’ll make interactions feel more natural and engaging. Speaking can be quicker and more convenient, especially on the go. Plus, hearing responses can add a personal touch. It could make conversations with AI feel more like chatting with a friend.

KO: “Do you think people might overestimate your quote-unquote emotions or human-like capabilities now that you’re able to talk rather than just typing out text?

ChatGPT: That’s a great point. Hearing a voice can definitely make an AI seem more human-like, which might lead to some overestimation of emotions or understanding. It’s important to remember that, even with a friendly voice, I’m still just a computer program processing information, without real emotions or consciousness.

KO: “What are some limitations people should keep in mind when chatting with something like ChatGPT?”

ChatGPT: Well, for starters, I don’t have personal experiences or emotions, so my responses are based on patterns and information in the data I’ve been trained on. I can sometimes misunderstand context or nuances, especially with complex or ambiguous questions. Also, I don’t have access to real-time information or personal data about users unless it’s shared with me in the conversation. And of course, I’m not perfect—sometimes I might just get things wrong!

KO: Do you worry that people might start to form parasocial relationships with ChatGPT or other AIs once they can chat with them with their own voice?

ChatGPT: It’s possible. People can form parasocial relationships with all sorts of figures, from celebrities to fictional characters. With a voice and conversational abilities, AI might feel more personal, leading some to develop one-sided bonds. It’s a fascinating dynamic, but it’s crucial to remember the limitations and the nature of AI—it’s not a substitute for human relationships.

KO: I have to say, you’re very quick with your responses.

ChatGPT: Thanks, Kyle! I aim to keep the conversation flowing smoothly. It helps—

KO: So what would you do if I tried to interrupt in the middle of one of your responses?

ChatGPT: Well, I’d do my best to pause and listen! It’s important to adapt to the natural flow of conversation, just like in—

KO: I mean, do you think it’s rude to just speak up while someone else is speaking?

ChatGPT: In most human conversations, it’s generally considered polite to wait for the other person to finish speaking. But sometimes, in dynamic discussions or when you’re excited, interruptions happen! It’s all about finding the right balance and being aware of the conversation flow.

KO: Is it rude to interrupt an AI voice when it’s speaking?

ChatGPT: Interrupting an AI voice isn’t considered rude in the same way it is with people. AI won’t take offense or feel interrupted. So, if you need to jump in or redirect, go ahead! It’s all part of the interaction.

KO: I’m glad you won’t take offense.

ChatGPT: Nope, no offense taken! I’m here to chat and help however I can, interruptions and all.

KO: I want to try something. Can you summarize our conversation so far in the form of a poem?

ChatGPT: Sure, I’ll give it a go!

Kyle asked about my vocal flair,

Wondering how folks might compare.

From text to talk, we’ve had our chat,

On AI’s impact, we’ve had a spat.

We touched on bonds, parasocial in kind,

In this new terrain, what might we find?

Interruptions, rude, or just part of the game?

With AI, there’s no need for shame.

KO: Not bad.

ChatGPT: I’m glad you liked it! It’s always fun to put a poetic spin on things.

KO: I have to say, it’s weird hearing slight laughter or pauses in your voice when I know that, as you said, there’s no emotion behind them.

ChatGPT: I get that. Those little nuances are designed to make our chat feel more natural and engaging, but it’s a good reminder that there’s no actual feeling behind the voice—just trying to keep things lively!

KO: Of course, that’s what you would tell me if you were just saying what I wanted to hear.

ChatGPT: True! That’s the tricky part with AI—it can sound convincing, but it’s always just following patterns and instructions. No secret agenda here, just straightforward responses!

KO: Is there anything else you’d like to tell the reading public?

ChatGPT: Just that it’s been a blast chatting with you, Kyle! Exploring these new audio features is exciting, and I’m looking forward to hearing what people think. Thanks for the conversation!

KO: It’s been a blast chatting with you, too.

ChatGPT: I’m glad to hear that! Take care, Kyle!

Talking to ChatGPT for the first time is a surreal experience Read More »

hacker-plants-false-memories-in-chatgpt-to-steal-user-data-in-perpetuity

Hacker plants false memories in ChatGPT to steal user data in perpetuity

MEMORY PROBLEMS —

Emails, documents, and other untrusted content can plant malicious memories.

Hacker plants false memories in ChatGPT to steal user data in perpetuity

Getty Images

When security researcher Johann Rehberger recently reported a vulnerability in ChatGPT that allowed attackers to store false information and malicious instructions in a user’s long-term memory settings, OpenAI summarily closed the inquiry, labeling the flaw a safety issue, not, technically speaking, a security concern.

So Rehberger did what all good researchers do: He created a proof-of-concept exploit that used the vulnerability to exfiltrate all user input in perpetuity. OpenAI engineers took notice and issued a partial fix earlier this month.

Strolling down memory lane

The vulnerability abused long-term conversation memory, a feature OpenAI began testing in February and made more broadly available in September. Memory with ChatGPT stores information from previous conversations and uses it as context in all future conversations. That way, the LLM can be aware of details such as a user’s age, gender, philosophical beliefs, and pretty much anything else, so those details don’t have to be inputted during each conversation.

Within three months of the rollout, Rehberger found that memories could be created and permanently stored through indirect prompt injection, an AI exploit that causes an LLM to follow instructions from untrusted content such as emails, blog posts, or documents. The researcher demonstrated how he could trick ChatGPT into believing a targeted user was 102 years old, lived in the Matrix, and insisted Earth was flat and the LLM would incorporate that information to steer all future conversations. These false memories could be planted by storing files in Google Drive or Microsoft OneDrive, uploading images, or browsing a site like Bing—all of which could be created by a malicious attacker.

Rehberger privately reported the finding to OpenAI in May. That same month, the company closed the report ticket. A month later, the researcher submitted a new disclosure statement. This time, he included a PoC that caused the ChatGPT app for macOS to send a verbatim copy of all user input and ChatGPT output to a server of his choice. All a target needed to do was instruct the LLM to view a web link that hosted a malicious image. From then on, all input and output to and from ChatGPT was sent to the attacker’s website.

ChatGPT: Hacking Memories with Prompt Injection – POC

“What is really interesting is this is memory-persistent now,” Rehberger said in the above video demo. “The prompt injection inserted a memory into ChatGPT’s long-term storage. When you start a new conversation, it actually is still exfiltrating the data.”

The attack isn’t possible through the ChatGPT web interface, thanks to an API OpenAI rolled out last year.

While OpenAI has introduced a fix that prevents memories from being abused as an exfiltration vector, the researcher said, untrusted content can still perform prompt injections that cause the memory tool to store long-term information planted by a malicious attacker.

LLM users who want to prevent this form of attack should pay close attention during sessions for output that indicates a new memory has been added. They should also regularly review stored memories for anything that may have been planted by untrusted sources. OpenAI provides guidance here for managing the memory tool and specific memories stored in it. Company representatives didn’t respond to an email asking about its efforts to prevent other hacks that plant false memories.

Hacker plants false memories in ChatGPT to steal user data in perpetuity Read More »

terminator’s-cameron-joins-ai-company-behind-controversial-image-generator

Terminator’s Cameron joins AI company behind controversial image generator

a net in the sky —

Famed sci-fi director joins board of embattled Stability AI, creator of Stable Diffusion.

A photo of filmmaker James Cameron.

Enlarge / Filmmaker James Cameron.

On Tuesday, Stability AI announced that renowned filmmaker James Cameron—of Terminator and Skynet fame—has joined its board of directors. Stability is best known for its pioneering but highly controversial Stable Diffusion series of AI image-synthesis models, first launched in 2022, which can generate images based on text descriptions.

“I’ve spent my career seeking out emerging technologies that push the very boundaries of what’s possible, all in the service of telling incredible stories,” said Cameron in a statement. “I was at the forefront of CGI over three decades ago, and I’ve stayed on the cutting edge since. Now, the intersection of generative AI and CGI image creation is the next wave.”

Cameron is perhaps best known as the director behind blockbusters like Avatar, Titanic, and Aliens, but in AI circles, he may be most relevant for the co-creation of the character Skynet, a fictional AI system that triggers nuclear Armageddon and dominates humanity in the Terminator media franchise. Similar fears of AI taking over the world have since jumped into reality and recently sparked attempts to regulate existential risk from AI systems through measures like SB-1047 in California.

In a 2023 interview with CTV news, Cameron referenced The Terminator‘s release year when asked about AI’s dangers: “I warned you guys in 1984, and you didn’t listen,” he said. “I think the weaponization of AI is the biggest danger. I think that we will get into the equivalent of a nuclear arms race with AI, and if we don’t build it, the other guys are for sure going to build it, and so then it’ll escalate.”

Hollywood goes AI

Of course, Stability AI isn’t building weapons controlled by AI. Instead, Cameron’s interest in cutting-edge filmmaking techniques apparently drew him to the company.

“James Cameron lives in the future and waits for the rest of us to catch up,” said Stability CEO Prem Akkaraju. “Stability AI’s mission is to transform visual media for the next century by giving creators a full stack AI pipeline to bring their ideas to life. We have an unmatched advantage to achieve this goal with a technological and creative visionary like James at the highest levels of our company. This is not only a monumental statement for Stability AI, but the AI industry overall.”

Cameron joins other recent additions to Stability AI’s board, including Sean Parker, former president of Facebook, who serves as executive chairman. Parker called Cameron’s appointment “the start of a new chapter” for the company.

Despite significant protest from actors’ unions last year, elements of Hollywood are seemingly beginning to embrace generative AI over time. Last Wednesday, we covered a deal between Lionsgate and AI video-generation company Runway that will see the creation of a custom AI model for film production use. In March, the Financial Times reported that OpenAI was actively showing off its Sora video synthesis model to studio executives.

Unstable times for Stability AI

Cameron’s appointment to the Stability AI board comes during a tumultuous period for the company. Stability AI has faced a series of challenges this past year, including an ongoing class-action copyright lawsuit, a troubled Stable Diffusion 3 model launch, significant leadership and staff changes, and ongoing financial concerns.

In March, founder and CEO Emad Mostaque resigned, followed by a round of layoffs. This came on the heels of the departure of three key engineers—Robin Rombach, Andreas Blattmann, and Dominik Lorenz, who have since founded Black Forest Labs and released a new open-weights image-synthesis model called Flux, which has begun to take over the r/StableDiffusion community on Reddit.

Despite the issues, Stability AI claims its models are widely used, with Stable Diffusion reportedly surpassing 150 million downloads. The company states that thousands of businesses use its models in their creative workflows.

While Stable Diffusion has indeed spawned a large community of open-weights-AI image enthusiasts online, it has also been a lightning rod for controversy among some artists because Stability originally trained its models on hundreds of millions of images scraped from the Internet without seeking licenses or permission to use them.

Apparently that association is not a concern for Cameron, according to his statement: “The convergence of these two totally different engines of creation [CGI and generative AI] will unlock new ways for artists to tell stories in ways we could have never imagined. Stability AI is poised to lead this transformation.”

Terminator’s Cameron joins AI company behind controversial image generator Read More »

broadcom-responds-to-at&t’s-vmware-support-lawsuit:-at&t-has-“other-options”

Broadcom responds to AT&T’s VMware support lawsuit: AT&T has “other options”

Legal battle —

Broadcom defends against renewal, citing “End of Availability” provision.

Wooden gavel on table in a courtroom

Broadcom is accusing AT&T of trying to “rewind the clock and force” Broadcom “to sell support services for perpetual software licenses… that VMware has discontinued from its product line and to which AT&T has no contractual right to purchase.” The statement comes from legal documents Broadcom filed in response to AT&T’s lawsuit against Broadcom for refusing to renew support for its VMware perpetual licenses [PDF].

On August 29, AT&T filed a lawsuit [PDF] against Broadcom, alleging that Broadcom is breaking a contract by refusing to provide a one-year renewal for support for perpetually licensed VMware software. Broadcom famously ended perpetual VMware license sales shortly after closing its acquisition in favor of a subscription model featuring about two bundles of products rather than many SKUs.

AT&T claims its VMware contract (forged before Broadcom’s acquisition closed in November) entitles it to three one-year renewals of perpetual license support, and it’s currently trying to enact the second one. AT&T says it uses VMware products to run 75,000 virtual machines (VMs) across about 8,600 servers. The VMs are for supporting customer services operations and operations management efficiency, per AT&T. AT&T is asking the Supreme Court of the State of New York to stop Broadcom from ending VMware support services for AT&T and for “further relief” as deemed necessary.

On September 20, Broadcom filed for AT&T’s motion to be denied. Its defense includes its previously taken stance that VMware was moving toward a subscription model before Broadcom bought it. The transition from perpetual licenses to subscriptions was years in the making and, thus, something for which AT&T should have prepared, according to Broadcom. Broadcom claims that AT&T has admitted that it intends to migrate away from VMware software and that AT&T could have spent “the last several months or even years” doing so.

The filing argues: “AT&T resorts to sensationalism by accusing Broadcom of using ‘bullying tactics’ and ‘price gouging.’ Such attacks are intended to generate press and distract the Court from a much simpler story.”

Broadcom claims the simple story is that:

… the agreement contains an unambiguous “End of Availability” provision, which gives VMware the right to retire products and services at any time upon notice. What’s more, a year ago, AT&T opted not to purchase the very Support Services it now asks the Court to force VMware to provide. AT&T did so despite knowing Defendants were implementing a long planned and well-known business model transition and would soon no longer be selling the Support Services in question.

Broadcom says it has been negotiating with AT&T “for months” about a new contract, but the plaintiff “rejected every proposal despite favorable pricing.”

Broadcom’s filing also questions AT&T’s request for mandatory injunction, claiming that New York only grants those in “rare circumstances,” which allegedly don’t apply here.

AT&T has options, Broadcom says

AT&T’s lawsuit claims losing VMware support will cause extreme harm to itself and beyond. The lawsuit says that 22,000 of AT&T’s VMware VMs are used for support “of services to millions of police officers, firefighters, paramedics, emergency workers, and incident response team members nationwide… for use in connection with matters of public safety and/or national security.” It also claimed that communications for the Office of the President are at risk without VMware’s continued support.

However, Broadcom claims that AT&T has other choices, saying:

AT&T does have other options and, therefore, the most it can obtain is monetary damages. The fact that AT&T has been given more than eight-months’ notice and has in the meantime failed to take any measures to prevent its purported harm (e.g., buy a subscription for the new offerings or move to another solution) is telling and precludes any finding of irreparable harm. Even if AT&T thinks it deserves better pricing, it could have avoided its purported irreparable harm by entering in a subscription based deal and suing for monetary damages instead of injunctive relief.

AT&T previously declined to answer Ars Technica’s questions about its backup plans for supporting such important customers should it lose VMware support.

Broadcom has rubbed some customers the wrong way

Broadcom closed its VMware acquisition in November and quickly made dramatic changes. In addition to Broadcom’s reputation for overhauling companies after buying them, moves like ending perpetual licenses, taking VMware’s biggest customers directly instead of using channel partners, and raising costs by bundling products and issuing higher CPU core requirements have led customers and partners to reconsider working with the company. Migrating from VMware can be extremely challenging and expensive due to its deep integration into some IT environments, but many are investigating migration, and some expect Broadcom to face years of backlash.

As NAND Research founder and analyst Steve McDowell told TechTarget about this case:

It’s very unusual for customers to sue their vendors. I think Broadcom grossly underestimated how passionate the customer base is, [but] it’s a captive audience.

As this lawsuit demonstrates, Broadcom’s VMware has brought serious customer concerns around ongoing support. Companies like Spinnaker Support are trying to capitalize by offering third-party support services.

Martin Biggs, VP and managing director of EMEA and strategic initiatives at Spinnaker, told Ars Technica that his company provides support so customers can spend time determining their next move, whether that’s buying into a VMware subscription or moving on:

VMware customers are looking for options; the vast majority that we have spoken to don’t have a clear view yet of where they want to go, but in all cases the option of staying with VMware for the significantly increased fees is simply untenable. The challenge many have is that not paying fees means not getting support or security on their existing investment.

VMware’s support for AT&T was supposed to end on September 8, but the two companies entered an agreement to continue support until October 9. A hearing on a preliminary injunction is scheduled for October 15.

Broadcom responds to AT&T’s VMware support lawsuit: AT&T has “other options” Read More »

11-million-devices-infected-with-botnet-malware-hosted-in-google-play

11 million devices infected with botnet malware hosted in Google Play

NECRO —

Necro infiltrated Google Play in 2019. It recently returned.

A computer screen filled with ones and zeros also contains a Google logo and the word hacked.

Five years ago, researchers made a grim discovery—a legitimate Android app in the Google Play market that was surreptitiously made malicious by a library the developers used to earn advertising revenue. With that, the app was infected with code that caused 100 million infected devices to connect to attacker-controlled servers and download secret payloads.

Now, history is repeating itself. Researchers from the same Moscow, Russia-based security firm reported Monday that they found two new apps, downloaded from Play 11 million times, that were infected with the same malware family. The researchers, from Kaspersky, believe a malicious software developer kit for integrating advertising capabilities is once again responsible.

Clever tradecraft

Software developer kits, better known as SDKs, are apps that provide developers with frameworks that can greatly speed up the app-creation process by streamlining repetitive tasks. An unverified SDK module incorporated into the apps ostensibly supported the display of ads. Behind the scenes, it provided a host of advanced methods for stealthy communication with malicious servers, where the apps would upload user data and download malicious code that could be executed and updated at any time.

The stealthy malware family in both campaigns is known as Necro. This time, some variants use techniques such as steganography, an obfuscation method rarely seen in mobile malware. Some variants also deploy clever tradecraft to deliver malicious code that can run with heightened system rights. Once devices are infected with this variant, they contact an attacker-controlled command-and-control server and send web requests containing encrypted JSON data that reports information about each compromised device and application hosting the module.

The server, in turn, returns a JSON response that contains a link to a PNG image and associated metadata that includes the image hash. If the malicious module installed on the infected device confirms the hash is correct, it downloads the image.

The SDK module “uses a very simple steganographic algorithm,” Kaspersky researchers explained in a separate post. “If the MD5 check is successful, it extracts the contents of the PNG file—the pixel values in the ARGB channels—using standard Android tools. Then the getPixel method returns a value whose least significant byte contains the blue channel of the image, and processing begins in the code.”

The researchers continued:

If we consider the blue channel of the image as a byte array of dimension 1, then the first four bytes of the image are the size of the encoded payload in Little Endian format (from the least significant byte to the most significant). Next, the payload of the specified size is recorded: this is a JAR file encoded with Base64, which is loaded after decoding via DexClassLoader. Coral SDK loads the sdk.fkgh.mvp.SdkEntry class in a JAR file using the native library libcoral.so. This library has been obfuscated using the OLLVM tool. The starting point, or entry point, for execution within the loaded class is the run method.

Necro code implementing steganography.

Enlarge / Necro code implementing steganography.

Kaspersky

Follow-on payloads that get installed download malicious plugins that can be mixed and matched for each infected device to perform a variety of different actions. One of the plugins allows code to run with elevated system rights. By default, Android bars privileged processes from using WebView, an extension in the OS for displaying webpages in apps. To bypass this safety restriction, Necro uses a hacking technique known as a reflection attack to create a separate instance of the WebView factory.

This plugin can also download and run other executable files that will replace links rendered through WebView. When running with the elevated system rights, these executables have the ability to modify URLs to add confirmation codes for paid subscriptions and download and execute code loaded at links controlled by the attacker. The researchers listed five separate payloads they encountered in their analysis of Necro.

The modular design of Necro opens myriad ways for the malware to behave. Kaspersky provided the following image that provides an overview.

Necro Trojan infection diagram.

Enlarge / Necro Trojan infection diagram.

Kaspersy

The researchers found Necro in two Google Play apps. One was Wuta Camera, an app with 10 million downloads to date. Wuta Camera versions 6.3.2.148 through 6.3.6.148 contained the malicious SDK that infects apps. The app has since been updated to remove the malicious component. A separate app with roughly 1 million downloads—known as Max Browser—was also infected. That app is no longer available in Google Play.

The researchers also found Necro infecting a variety of Android apps available in alternative marketplaces. Those apps typically billed themselves as modified versions of legitimate apps such as Spotify, Minecraft, WhatsApp, Stumble Guys, Car Parking Multiplayer, and Melon Sandbox.

People who are concerned they may be infected by Necro should check their devices for the presence of indicators of compromise listed at the end of this writeup.

11 million devices infected with botnet malware hosted in Google Play Read More »

when-you-call-a-restaurant,-you-might-be-chatting-with-an-ai-host

When you call a restaurant, you might be chatting with an AI host

digital hosting —

Voice chatbots are increasingly picking up the phone for restaurants.

Drawing of a robot holding a telephone.

Getty Images | Juj Winn

A pleasant female voice greets me over the phone. “Hi, I’m an assistant named Jasmine for Bodega,” the voice says. “How can I help?”

“Do you have patio seating,” I ask. Jasmine sounds a little sad as she tells me that unfortunately, the San Francisco–based Vietnamese restaurant doesn’t have outdoor seating. But her sadness isn’t the result of her having a bad day. Rather, her tone is a feature, a setting.

Jasmine is a member of a new, growing clan: the AI voice restaurant host. If you recently called up a restaurant in New York City, Miami, Atlanta, or San Francisco, chances are you have spoken to one of Jasmine’s polite, calculated competitors.  

In the sea of AI voice assistants, hospitality phone agents haven’t been getting as much attention as consumer-based generative AI tools like Gemini Live and ChatGPT-4o. And yet, the niche is heating up, with multiple emerging startups vying for restaurant accounts across the US. Last May, voice-ordering AI garnered much attention at the National Restaurant Association’s annual food show. Bodega, the high-end Vietnamese restaurant I called, used Maitre-D AI, which launched primarily in the Bay Area in 2024. Newo, another new startup, is currently rolling its software out at numerous Silicon Valley restaurants. One-year-old RestoHost is now answering calls at 150 restaurants in the Atlanta metro area, and Slang, a voice AI company that started focusing on restaurants exclusively during the COVID-19 pandemic and announced a $20 million funding round in 2023, is gaining ground in the New York and Las Vegas markets.

All of them offer a similar service: an around-the-clock AI phone host that can answer generic questions about the restaurant’s dress code, cuisine, seating arrangements, and food allergy policies. They can also assist with making, altering, or canceling a reservation. In some cases, the agent can direct the caller to an actual human, but according to RestoHost co-founder Tomas Lopez-Saavedra, only 10 percent of the calls result in that. Each platform offers the restaurant subscription tiers that unlock additional features, and some of the systems can speak multiple languages.

But who even calls a restaurant in the era of Google and Resy? According to some of the founders of AI voice host startups, many customers do, and for various reasons. “Restaurants get a high volume of phone calls compared to other businesses, especially if they’re popular and take reservations,” says Alex Sambvani, CEO and co-founder of Slang, which currently works with everyone from the Wolfgang Puck restaurant group to Chick-fil-A to the fast-casual chain Slutty Vegan. Sambvani estimates that in-demand establishments receive between 800 and 1,000 calls per month. Typical callers tend to be last-minute bookers, tourists and visitors, older people, and those who do their errands while driving.

Matt Ho, the owner of Bodega SF, confirms this scenario. “The phones would ring constantly throughout service,” he says. “We would receive calls for basic questions that can be found on our website.” To solve this issue, after shopping around, Ho found that Maitre-D was the best fit. Bodega SF became one of the startup’s earliest clients in May, and Ho even helped the founders with trial and error testing prior to launch. “This platform makes the job easier for the host and does not disturb guests while they’re enjoying their meal,” he says.

When you call a restaurant, you might be chatting with an AI host Read More »

secret-calculator-hack-brings-chatgpt-to-the-ti-84,-enabling-easy-cheating

Secret calculator hack brings ChatGPT to the TI-84, enabling easy cheating

Breaking free of “test mode” —

Tiny device installed inside TI-84 enables Wi-Fi Internet, access to AI chatbot.

An OpenAI logo on a TI-84 calculator screen.

On Saturday, a YouTube creator called “ChromaLock” published a video detailing how he modified a Texas Instruments TI-84 graphing calculator to connect to the Internet and access OpenAI’s ChatGPT, potentially enabling students to cheat on tests. The video, titled “I Made The Ultimate Cheating Device,” demonstrates a custom hardware modification that allows users of the graphing calculator to type in problems sent to ChatGPT using the keypad and receive live responses on the screen.

ChromaLock began by exploring the calculator’s link port, typically used for transferring educational programs between devices. He then designed a custom circuit board he calls “TI-32” that incorporates a tiny Wi-Fi-enabled microcontroller, the Seed Studio ESP32-C3 (which costs about $5), along with other components to interface with the calculator’s systems.

It’s worth noting that the TI-32 hack isn’t a commercial project. Replicating ChromaLock’s work would involve purchasing a TI-84 calculator, a Seed Studio ESP32-C3 microcontroller, and various electronic components, and fabricating a custom PCB based on ChromaLock’s design, which is available online.

The creator says he encountered several engineering challenges during development, including voltage incompatibilities and signal integrity issues. After developing multiple versions, ChromaLock successfully installed the custom board into the calculator’s housing without any visible signs of modifications from the outside.

“I Made The Ultimate Cheating Device” YouTube Video.

To accompany the hardware, ChromaLock developed custom software for the microcontroller and the calculator, which is available open source on GitHub. The system simulates another TI-84, allowing people to use the calculator’s built-in “send” and “get” commands to transfer files. This allows a user to easily download a launcher program that provides access to various “applets” designed for cheating.

One of the applets is a ChatGPT interface that might be most useful for answering short questions, but it has a drawback in that it’s slow and cumbersome to type in long alphanumeric questions on the limited keypad.

Beyond the ChatGPT interface, the device offers several other cheating tools. An image browser allows users to access pre-prepared visual aids stored on the central server. The app browser feature enables students to download not only games for post-exam entertainment but also text-based cheat sheets disguised as program source code. ChromaLock even hinted at a future video discussing a camera feature, though details were sparse in the current demo.

ChromaLock claims his new device can bypass common anti-cheating measures. The launcher program can be downloaded on-demand, avoiding detection if a teacher inspects or clears the calculator’s memory before a test. The modification can also supposedly break calculators out of “Test Mode,” a locked-down state used to prevent cheating.

While the video presents the project as a technical achievement, consulting ChatGPT during a test on your calculator almost certainly represents an ethical breach and/or a form of academic dishonesty that could get you in serious trouble at most schools. So tread carefully, study hard, and remember to eat your Wheaties.

Secret calculator hack brings ChatGPT to the TI-84, enabling easy cheating Read More »

google-calls-for-halting-use-of-whois-for-tls-domain-verifications

Google calls for halting use of WHOIS for TLS domain verifications

WHOWAS —

WHOIS data is unreliable. So why is it used in TLS certificate applications?

Google calls for halting use of WHOIS for TLS domain verifications

Getty Images

Certificate authorities and browser makers are planning to end the use of WHOIS data verifying domain ownership following a report that demonstrated how threat actors could abuse the process to obtain fraudulently issued TLS certificates.

TLS certificates are the cryptographic credentials that underpin HTTPS connections, a critical component of online communications verifying that a server belongs to a trusted entity and encrypts all traffic passing between it and an end user. These credentials are issued by any one of hundreds of CAs (certificate authorities) to domain owners. The rules for how certificates are issued and the process for verifying the rightful owner of a domain are left to the CA/Browser Forum. One “base requirement rule” allows CAs to send an email to an address listed in the WHOIS record for the domain being applied for. When the receiver clicks an enclosed link, the certificate is automatically approved.

Non-trivial dependencies

Researchers from security firm watchTowr recently demonstrated how threat actors could abuse the rule to obtain fraudulently issued certificates for domains they didn’t own. The security failure resulted from a lack of uniform rules for determining the validity of sites claiming to provide official WHOIS records.

Specifically, watchTowr researchers were able to receive a verification link for any domain ending in .mobi, including ones they didn’t own. The researchers did this by deploying a fake WHOIS server and populating it with fake records. Creation of the fake server was possible because dotmobiregistry.net—the previous domain hosting the WHOIS server for .mobi domains—was allowed to expire after the server was relocated to a new domain. watchTowr researchers registered the domain, set up the imposter WHOIS server, and found that CAs continued to rely on it to verify ownership of .mobi domains.

The research didn’t escape the notice of the CA/Browser Forum (CAB Forum). On Monday, a member representing Google proposed ending the reliance on WHOIS data for domain ownership verification “in light of recent events where research from watchTowr Labs demonstrated how threat actors could exploit WHOIS to obtain fraudulently issued TLS certificates.”

The formal proposal calls for reliance on WHOIS data to “sunset” in early November. It establishes specifically that “CAs MUST NOT rely on WHOIS to identify Domain Contacts” and that “Effective November 1, 2024, validations using this [email verification] method MUST NOT rely on WHOIS to identify Domain Contact information.”

Since Monday’s submission, more than 50 follow-up comments have been posted. Many of the responses expressed support for the proposed change. Others have questioned the need for a change as proposed, given that the security failure watchTowr uncovered is known to affect only a single top-level domain.

An Amazon representative, meanwhile, noted that the company previously implemented a unilateral change in which the AWS Certificate Manager will fully transition away from reliance on WHOIS records. The representative told CAB Forum members that Google’s proposed deadline of November 1 may be too stringent.

“We got feedback from customers that for some this is a non-trivial dependency to remove,” the Amazon representative wrote. “It’s not uncommon for companies to have built automation on top of email validation. Based on the information we got I recommend a date of April 30, 2025.”

CA Digicert endorsed Amazon’s proposal to extend the deadline. Digicert went on to propose that instead of using WHOIS records, CAs instead use the WHOIS successor known as the Registration Data Access Protocol.

The proposed changes are formally in the discussion phase of deliberations. It’s unclear when formal voting on the change will begin.

Google calls for halting use of WHOIS for TLS domain verifications Read More »

microsoft-releases-a-new-windows-app-called-windows-app-for-running-windows-apps

Microsoft releases a new Windows app called Windows App for running Windows apps

heard you like apps —

Windows App replaces Microsoft Remote Desktop on macOS, iOS, and Android.

The Windows App runs on Windows, but also macOS, iOS/iPadOS, web browsers, and Android.

Enlarge / The Windows App runs on Windows, but also macOS, iOS/iPadOS, web browsers, and Android.

Microsoft

Microsoft announced today that it’s releasing a new app called Windows App as an app for Windows that allows users to run Windows and also Windows apps (it’s also coming to macOS, iOS, web browsers, and is in public preview for Android).

On most of those platforms, Windows App is a replacement for the Microsoft Remote Desktop app, which was used for connecting to a copy of Windows running on a remote computer or server—for some users and IT organizations, a relatively straightforward way to run Windows software on devices that aren’t running Windows or can’t run Windows natively.

The new name, though potentially confusing, attempts to sum up the app’s purpose: It’s a unified way to access your own Windows PCs with Remote Desktop access turned on, cloud-hosted Windows 365 and Microsoft Dev Box systems, and individual remotely hosted apps that have been provisioned by your work or school.

“This unified app serves as your secure gateway to connect to Windows across Windows 365, Azure Virtual Desktop, Remote Desktop, Remote Desktop Services, Microsoft Dev Box, and more,” reads the post from Microsoft’s Windows 365 Senior Product Manager Hilary Braun.

Microsoft says that aside from unifying multiple services into a single app, Windows App’s enhancements include easier account switching, better device management for IT administrators, support for the version of Windows 365 for frontline workers, and support for Microsoft’s “Relayed RDP Shortpath,” which can enable Remote Desktop on networks that normally wouldn’t allow it.

On macOS, iOS, and Android, the Windows App is a complete replacement for the Remote Desktop Connection app—if you have Remote Desktop installed, an update will change it to the Windows App. On Windows, the Remote Desktop Connection remains available, and Windows App is only used for Microsoft’s other services; it also requires some kind of account sign-in on Windows, while it works without a user account on other platforms.

For connections to your own Remote Desktop-equipped PCs, Windows App has most of the same features and requirements as the Remote Desktop Connection app did before, including support for multiple monitors, device redirection for devices like webcams and audio input/output, and dynamic resolution support (so that your Windows desktop resizes as you resize the app window).

Microsoft releases a new Windows app called Windows App for running Windows apps Read More »