Xfinity waited 13 days to patch critical Citrix Bleed 0-day. Now it’s paying the price

MORE CITRIX BLEED CASUALTIES —

Data for almost 36 million customers now in the hands of unknown hackers.

A Comcast Xfinity service van in San Ramon, California on February 25, 2020.

Getty Images | Smith Collection/Gado

Comcast waited 13 days to patch its network against a high-severity vulnerability, a lapse that allowed hackers to make off with password data and other sensitive information belonging to 36 million Xfinity customers.

The breach, which was carried out by exploiting a vulnerability in network hardware sold by Citrix, gave hackers access to usernames and cryptographically hashed passwords for 35.9 million Xfinity customers, the cable TV and Internet provider said in a notification filed Monday with the Maine attorney general’s office. Citrix disclosed the vulnerability and issued a patch on October 10. Eight days later, researchers reported that the vulnerability, tracked as CVE-2023-4966 and by the name Citrix Bleed, had been under active exploitation since August. Comcast didn’t patch its network until October 23, 13 days after a patch became available and five days after the report of the in-the-wild attacks exploiting it.

“However, we subsequently discovered that prior to mitigation, between October 16 and October 19, 2023, there was unauthorized access to some of our internal systems that we concluded was a result of this vulnerability,” an accompanying notice stated. “We notified federal law enforcement and conducted an investigation into the nature and scope of the incident. On November 16, 2023, it was determined that information was likely acquired.”

Comcast is still investigating precisely what data the attackers obtained. So far, Monday’s disclosure said, information known to have been taken includes usernames and hashed passwords, names, contact information, the last four digits of social security numbers, dates of birth, and/or secret questions and answers. Xfinity is Comcast’s cable television and Internet division.

Citrix Bleed has emerged as one of the year’s most severe and widely exploited vulnerabilities, with a severity rating of 9.4 out of 10. The vulnerability, residing in Citrix’s NetScaler Application Delivery Controller and NetScaler Gateway, can be exploited without any authentication or privileges on affected networks. Exploits disclose session tokens, which the hardware assigns to devices that have already successfully provided login credentials. Possession of the tokens allows hackers to override any multi-factor authentication in use and log into the device.

Other companies that have been hacked through Citrix Bleed include Boeing; Toyota; DP World Australia, a branch of the Dubai-based logistics company DP World; Industrial and Commercial Bank of China; and law firm Allen & Overy.

The name Citrix Bleed is an allusion to Heartbleed, a different critical information disclosure zero-day that turned the Internet on its head in 2014. That vulnerability, which resided in the OpenSSL code library, came under mass exploitation and allowed the pilfering of passwords, encryption keys, banking credentials, and all kinds of other sensitive information. Citrix Bleed hasn’t been as dire because fewer vulnerable devices are in use.

A sweep of the most active ransomware sites didn’t turn up any claims of responsibility for the hack of the Comcast network. An Xfinity representative said in an email that the company has yet to receive any ransom demands, and investigators aren’t aware of any customer data being leaked or of any attacks on affected customers.

Comcast is requiring Xfinity customers to reset their passwords to protect against the possibility that attackers can crack the stolen hashes. The company is also encouraging customers to enable two-factor authentication. The representative declined to say why company admins didn’t patch sooner.


A song of hype and fire: The 10 biggest AI stories of 2023

An illustration of a robot accidentally setting off a mushroom cloud on a laptop computer.

Getty Images | Benj Edwards

“Here, There, and Everywhere” isn’t just a Beatles song. It’s also a phrase that recalls the spread of generative AI into the tech industry during 2023. Whether you think AI is just a fad or the dawn of a new tech revolution, it’s been impossible to deny that AI news has dominated the tech space for the past year.

We’ve seen a large cast of AI-related characters emerge that includes tech CEOs, machine learning researchers, and AI ethicists—as well as charlatans and doomsayers. From public feedback on the subject of AI, we’ve heard that it’s been difficult for non-technical people to know who to believe, what AI products (if any) to use, and whether we should fear for our lives or our jobs.

Meanwhile, in keeping with a much-lamented trend of 2022, machine learning research has not slowed down over the past year. On X, former Biden administration tech advisor Suresh Venkatasubramanian wrote, “How do people manage to keep track of ML papers? This is not a request for support in my current state of bewilderment—I’m genuinely asking what strategies seem to work to read (or “read”) what appear to be 100s of papers per day.”

To wrap up the year with a tidy bow, here’s a look back at the 10 biggest AI news stories of 2023. It was very hard to choose only 10 (in fact, we originally only intended to do seven), but since we’re not ChatGPT generating reams of text without limit, we have to stop somewhere.

Bing Chat “loses its mind”

Aurich Lawson | Getty Images

In February, Microsoft unveiled Bing Chat, a chatbot built into its languishing Bing search engine website. Microsoft created the chatbot using a more raw form of OpenAI’s GPT-4 language model but didn’t tell everyone it was GPT-4 at first. Since Microsoft used a less conditioned version of GPT-4 than the one that would be released in March, the launch was rough. The chatbot assumed a temperamental personality that could easily turn on users and attack them, tell people it was in love with them, seemingly worry about its fate, and lose its cool when confronted with an article we wrote about revealing its system prompt.

Aside from the relatively raw nature of the AI model Microsoft was using, at fault was a system where very long conversations would push the conditioning system prompt outside of its context window (like a form of short-term memory), allowing all hell to break loose through jailbreaks that people documented on Reddit. At one point, Bing Chat called me “the culprit and the enemy” for revealing some of its weaknesses. Some people thought Bing Chat was sentient, despite AI experts’ assurances to the contrary. It was a disaster in the press, but Microsoft didn’t flinch, and it ultimately reined in some of Bing Chat’s wild proclivities and opened the bot widely to the public. Today, Bing Chat is known as Microsoft Copilot, and it’s baked into Windows.

US Copyright Office says no to AI copyright authors

An AI-generated image that won a prize at the Colorado State Fair in 2022, later denied US copyright registration.

Jason M. Allen

In February, the US Copyright Office issued a key ruling on AI-generated art, revoking the copyright previously granted to the AI-assisted comic book “Zarya of the Dawn” in September 2022. The decision, influenced by the revelation that the comic’s images were created using the AI-powered Midjourney image generator, stated that only the text and the arrangement of images and text by the comic’s author, Kris Kashtanova, were eligible for copyright protection. It was the first hint that AI-generated imagery without human-authored elements could not be copyrighted in the United States.

This stance was further cemented in August when a US federal judge ruled that art created solely by AI cannot be copyrighted. In September, the US Copyright Office rejected the registration for an AI-generated image that won a Colorado State Fair art contest in 2022. As it stands now, it appears that purely AI-generated art (without substantial human authorship) is in the public domain in the United States. This stance could be further clarified or changed in the future by judicial rulings or legislation.


How Microsoft’s cybercrime unit has evolved to combat increased threats

a more sophisticated DCU —

Microsoft has honed its strategy to disrupt global cybercrime and state-backed actors.

Microsoft’s Cybercrime Center.

Microsoft

Governments and the tech industry around the world have been scrambling in recent years to curb the rise of online scamming and cybercrime. Yet even with progress on digital defenses, enforcement, and deterrence, the ransomware attacks, business email compromises, and malware infections keep on coming. Over the past decade, Microsoft’s Digital Crimes Unit (DCU) has forged its own strategies, both technical and legal, to investigate scams, take down criminal infrastructure, and block malicious traffic.

The DCU is fueled, of course, by Microsoft’s massive scale and the visibility across the Internet that comes from the reach of Windows. But DCU team members repeatedly told WIRED that their work is motivated by very personal goals of protecting victims rather than a broad policy agenda or corporate mandate.

In just its latest action, the DCU announced Wednesday evening efforts to disrupt a cybercrime group that Microsoft calls Storm-1152. A middleman in the criminal ecosystem, Storm-1152 sells software services and tools like identity verification bypass mechanisms to other cybercriminals. The group has grown into the number one creator and vendor of fake Microsoft accounts—creating roughly 750 million scam accounts that the actor has sold for millions of dollars.

The DCU used legal techniques it has honed over many years related to protecting intellectual property to move against Storm-1152. The team obtained a court order from the Southern District of New York on December 7 to seize some of the criminal group’s digital infrastructure in the US and take down websites including the services 1stCAPTCHA, AnyCAPTCHA, and NoneCAPTCHA, as well as a site that sold fake Outlook accounts called Hotmailbox.me.

The strategy reflects the DCU’s evolution. A group with the name “Digital Crimes Unit” has existed at Microsoft since 2008, but the team in its current form took shape in 2013 when the old DCU merged with a Microsoft team known as the Intellectual Property Crimes Unit.

“Things have become a lot more complex,” says Peter Anaman, a DCU principal investigator. “Traditionally you would find one or two people working together. Now, when you’re looking at an attack, there are multiple players. But if we can break it down and understand the different layers that are involved it will help us be more impactful.”

The DCU’s hybrid technical and legal approach to chipping away at cybercrime is still unusual, but as the cybercriminal ecosystem has evolved—alongside its overlaps with state-backed hacking campaigns—the idea of employing creative legal strategies in cyberspace has become more mainstream. In recent years, for example, Meta-owned WhatsApp and Apple both took on the notorious spyware maker NSO Group with lawsuits.

Still, the DCU’s particular progression was the result of Microsoft’s unique dominance during the rise of the consumer Internet. As the group’s mission came into focus while dealing with threats from the late 2000s and early 2010s—like the widespread Conficker worm—the DCU’s unorthodox and aggressive approach drew criticism at times for its fallout and potential impacts on legitimate businesses and websites.

“There’s simply no other company that takes such a direct approach to taking on scammers,” WIRED wrote in a story about the DCU from October 2014. “That makes Microsoft rather effective, but also a little bit scary, observers say.”

Richard Boscovich, the DCU’s assistant general counsel and a former assistant US attorney in Florida’s Southern District, told WIRED in 2014 that it was frustrating for people within Microsoft to see malware like Conficker rampage across the web and feel like the company could improve the defenses of its products, but not do anything to directly deal with the actors behind the crimes. That dilemma spurred the DCU’s innovations and continues to do so.

“What’s impacting people? That’s what we get asked to take on, and we’ve developed a muscle to change and to take on new types of crime,” says Zoe Krumm, the DCU’s director of analytics. In the mid-2000s, Krumm says, Brad Smith, now Microsoft’s vice chair and president, was a driving force in turning the company’s attention toward the threat of email spam.

“The DCU has always been a bit of an incubation team. I remember all of a sudden, it was like, ‘We have to do something about spam.’ Brad comes to the team and he’s like, ‘OK, guys, let’s put together a strategy.’ I’ll never forget that it was just, ‘Now we’re going to focus here.’ And that has continued, whether it be moving into the malware space, whether it be tech support fraud, online child exploitation, business email compromise.”


UniFi devices broadcasted private video to other users’ accounts

CASE OF MISTAKEN IDENTITY —

“I was presented with 88 consoles from another account,” one user reports.

An assortment of Ubiquiti cameras.

Users of UniFi, the popular line of wireless devices from manufacturer Ubiquiti, are reporting receiving private camera feeds from, and control over, devices belonging to other users, posts published to social media site Reddit over the past 24 hours show.

“Recently, my wife received a notification from UniFi Protect, which included an image from a security camera,” one Reddit user reported. “However, here’s the twist—this camera doesn’t belong to us.”

Stoking concern and anxiety

The post included two images. The first showed a notification pushed to the person’s phone reporting that their UDM Pro, a network controller and network gateway used by tech-enthusiast consumers, had detected someone moving in the backyard. A still shot of video recorded by a connected surveillance camera showed a three-story house surrounded by trees. The second image showed the dashboard belonging to the Reddit user. The user’s connected device was a UDM SE, and the video it captured showed a completely different house.

Less than an hour later, a different Reddit user posting to the same thread replied: “So it’s VERY interesting you posted this, I was just about to post that when I navigated to unifi.ui.com this morning, I was logged into someone else’s account completely! It had my email on the top right, but someone else’s UDM Pro! I could navigate the device, view, and change settings! Terrifying!!”

Two other people took to the same thread to report similar behavior happening to them.

Other Reddit threads posted in the past day reporting UniFi users connecting to private devices or feeds belonging to others are here and here. The first one reported that the Reddit poster gained full access to someone else’s system. The post included two screenshots showing what the poster said was the captured video of an unrecognized business. The other poster reported logging into their Ubiquiti dashboard to find system controls for someone else. “I ended up logging out, clearing cookies, etc seems fine now for me…” the poster wrote.

Yet another person reported the same problem in a post published to Ubiquiti’s community support forum on Thursday, as this Ars story was being reported. The person reported logging into the UniFi console as part of their daily routine.

“However this time I was presented with 88 consoles from another account,” the person wrote. “I had full access to these consoles, just as I would my own. This was only stopped when I forced a browser refresh, and I was presented again with my consoles.”

Ubiquiti on Thursday said it had identified the glitch and fixed the errors that caused it.

“Specifically, this issue was caused by an upgrade to our UniFi Cloud infrastructure, which we have since solved,” officials wrote. They went on:

1. What happened?

1,216 Ubiquiti accounts (“Group 1”) were improperly associated with a separate group of 1,177 Ubiquiti accounts (“Group 2”).

2. When did this happen?

December 13, from 6:47 AM to 3:45 PM UTC.

3. What does this mean?

During this time, a small number of users from Group 2 received push notifications on their mobile devices from the consoles assigned to a small number of users from Group 1.

Additionally, during this time, a user from Group 2 that attempted to log into his or her account may have been granted temporary remote access to a Group 1 account.

The reports are understandably stoking concern and even anxiety for users of UniFi products, which include wireless access points, switches, routers, controller devices, VoIP phones, and access control products. As the Internet-accessible portals into the local networks of users, UniFi devices provide a means for accessing cameras, mics, and other sensitive resources inside the home.

“I guess I should stop walking around naked in my house now,” a participant in one of the forums joked.

To Ubiquiti’s credit, company employees proactively responded to reports, signaling they took the reports seriously and began actively investigating early on. The employees said the problem has been corrected, and the account mix-ups are no longer occurring.

It’s useful to remember that this sort of behavior—legitimately logging into an account only to find the data or controls belonging to a completely different account—is as old as the Internet. Recent examples: A T-Mobile mistake in September, and similar glitches involving Chase Bank, First Virginia Banks, Credit Karma, and Sprint.

The precise root causes of this type of system error vary from incident to incident, but they often involve “middlebox” devices, which sit between the front- and back-end devices. To improve performance, middleboxes cache certain data, including the credentials of users who have recently logged in. When mismatches occur, credentials for one account can be mapped to a different account.
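
To make that failure mode concrete, here is a deliberately simplified Python sketch of a caching layer whose cache key omits the session identity. It illustrates the general class of bug, not Ubiquiti's actual infrastructure, and every name in it is hypothetical.

    CACHE = {}  # cache key -> cached backend response

    def backend(path, session_token):
        # Pretend the backend renders a per-user page for whoever owns the session.
        user = {"token-alice": "alice", "token-bob": "bob"}[session_token]
        return f"dashboard for {user}"

    def middlebox(path, session_token, key_includes_session):
        # A safe cache key must cover everything that makes the response
        # user-specific; here that means the session token, not just the path.
        key = (path, session_token) if key_includes_session else (path,)
        if key not in CACHE:
            CACHE[key] = backend(path, session_token)
        return CACHE[key]

    # Buggy keying: Bob is served Alice's cached dashboard.
    CACHE.clear()
    print(middlebox("/protect/dashboard", "token-alice", False))  # dashboard for alice
    print(middlebox("/protect/dashboard", "token-bob", False))    # dashboard for alice (leak)

    # Correct keying: each session gets its own response.
    CACHE.clear()
    print(middlebox("/protect/dashboard", "token-alice", True))   # dashboard for alice
    print(middlebox("/protect/dashboard", "token-bob", True))     # dashboard for bob

The usual remedies are to include every identity-bearing attribute in the cache key or to mark authenticated responses as uncacheable.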

In an email, a Ubiquiti official said company employees are still gathering “information to provide an accurate assessment.”


Ukrainian cells and Internet still out, 1 day after suspected Russian cyberattack

PLEASE STAND BY —

Hackers tied to Russian military take responsibility for hack on Ukraine’s biggest provider.

A service center for “Kyivstar,” a Ukrainian telecommunications company that provides communication services and data transmission based on a broad range of fixed and mobile technologies.

Getty Images

Ukrainian civilians on Wednesday grappled for a second day of widespread cellular phone and Internet outages after a cyberattack, purportedly carried out by Kremlin-supported hackers, hit the country’s biggest mobile phone and Internet provider a day earlier.

Two separate hacking groups with ties to the Russian government took responsibility for Tuesday’s attack striking Kyivstar, which has said it serves 24.3 million mobile subscribers and more than 1.1 million home Internet users. One group, calling itself Killnet, said on Telegram that “an attack was carried out on Ukrainian mobile operators, as well as on some banks,” but didn’t elaborate or provide any evidence. A separate group known as Solntsepek said on the same site that it took “full responsibility for the cyberattack on Kyivstar” and had “destroyed 10,000 computers, more than 4,000 servers, and all cloud storage and backup systems.” The post was accompanied by screenshots purporting to show someone with control over the Kyivstar systems.

In the city of Lviv, street lights remained on after sunrise and had to be disconnected manually, because Internet-dependent automated power switches didn’t work, according to NBC News. Additionally, the outage prevented shops throughout the country from processing credit payments and many ATMs from functioning, the Kyiv Post said.

The outage also disrupted air alert systems that warn residents in multiple cities of incoming missile attacks, a Ukrainian official said on Telegram. The outage forced authorities to rely on backup alarms.

“Cyber specialists of the Security Service of Ukraine and ‘Kyivstar’ specialists, in cooperation with other state bodies, continue to restore the network after yesterday’s hacker attack,” officials with the Security Service of Ukraine said. “According to preliminary calculations, it is planned to restore fixed Internet for households on December 13, as well as start the launch of mobile communication and Internet. The digital infrastructure of ‘Kyivstar’ was critically damaged, so the restoration of all services in compliance with the necessary security protocols takes time.”

Kyivstar suspended mobile and Internet service on Tuesday after experiencing what company CEO Oleksandr Komarov said was an “unprecedented cyberattack” by Russian hackers. The attack represents one of the biggest compromises on a civilian telecommunications provider ever and one of the most disruptive so far in the 21-month Russia-Ukraine war. Kyivstar’s website remained unavailable at the time this post went live on Ars.

According to a report by the New Voice of Ukraine, hackers infiltrated Kyivstar’s infrastructure after first hacking into an internal employee account.

Solntsepek, one of two groups taking responsibility for the attack, has links to “Sandworm,” the name researchers use to track a hacking group that works on behalf of a unit within the Russian military known as the GRU. Sandworm has been tied to some of the most destructive cyberattacks in history, most notably the NotPetya worm, which caused an estimated $10 billion in damage worldwide. Researchers have also attributed Ukrainian power outages in 2015 and 2016 to the group.


Dropbox spooks users with new AI features that send data to OpenAI when used

adventures in data consent —

AI feature turned on by default worries users; Dropbox responds to concerns.

Updated

Photo of a man looking into a box.

On Wednesday, news quickly spread on social media about a new enabled-by-default Dropbox setting that shares Dropbox data with OpenAI for an experimental AI-powered search feature, but Dropbox says data is only shared if the feature is actively being used. Dropbox says that user data shared with third-party AI partners isn’t used to train AI models and is deleted within 30 days.

Even with assurances of data privacy laid out by Dropbox on an AI privacy FAQ page, the discovery that the setting had been enabled by default upset some Dropbox users. The setting was first noticed by writer Winifred Burton, who shared information about the Third-party AI setting through Bluesky on Tuesday, and frequent AI critic Karla Ortiz shared more information about it on X.

Wednesday afternoon, Drew Houston, the CEO of Dropbox, apologized for customer confusion in a post on X and wrote, “The third-party AI toggle in the settings menu enables or disables access to DBX AI features and functionality. Neither this nor any other setting automatically or passively sends any Dropbox customer data to a third-party AI service.”

Critics say that communication about the change could have been clearer. AI researcher Simon Willison wrote, “Great example here of how careful companies need to be in clearly communicating what’s going on with AI access to personal data.”

A screenshot of Dropbox’s third-party AI feature switch.

Benj Edwards

So why would Dropbox ever send user data to OpenAI anyway? In July, the company announced an AI-powered feature called Dash that allows AI models to perform universal searches across platforms like Google Workspace and Microsoft Outlook.

According to the Dropbox privacy FAQ, the third-party AI opt-out setting is part of the “Dropbox AI alpha,” which is a conversational interface for exploring file contents that involves chatting with a ChatGPT-style bot using an “Ask something about this file” feature. To make it work, an AI language model similar to the one that powers ChatGPT (like GPT-4) needs access to your files.

According to the FAQ, the third-party AI toggle in your account settings is turned on by default if “you or your team” are participating in the Dropbox AI alpha. Still, multiple Ars Technica staff who had no knowledge of the Dropbox AI alpha found the setting enabled by default when they checked.

In a statement to Ars Technica, a Dropbox representative said, “The third-party AI toggle is only turned on to give all eligible customers the opportunity to view our new AI features and functionality, like Dropbox AI. It does not enable customers to use these features without notice. Any features that use third-party AI offer disclosure of third-party use, and link to settings that they can manage. Only after a customer sees the third-party AI transparency banner and chooses to proceed with asking a question about a file, will that file be sent to a third-party to generate answers. Our customers are still in control of when and how they use these features.”

Right now, the only third-party AI provider for Dropbox is OpenAI, writes Dropbox in the FAQ. “Open AI is an artificial intelligence research organization that develops cutting-edge language models and advanced AI technologies. Your data is never used to train their internal models, and is deleted from OpenAI’s servers within 30 days.” It also says, “Only the content relevant to an explicit request or command is sent to our third-party AI partners to generate an answer, summary, or transcript.”

Disabling the feature is easy if you prefer not to use Dropbox AI features. Log into your Dropbox account on a desktop web browser, then click your profile photo > Settings > Third-party AI. On that page, click the switch beside “Use artificial intelligence (AI) from third-party partners so you can work faster in Dropbox” to toggle it into the “Off” position.

This story was updated on December 13, 2023, at 5:35 pm ET with clarifications about when and how Dropbox shares data with OpenAI, as well as statements from Dropbox reps and its CEO.


Broadcom ends VMware perpetual license sales, testing customers and partners

saas —

Already-purchased licenses can still be used but will eventually lose support.

The logo of American cloud computing and virtualization technology company VMware is seen at the Mobile World Congress (MWC), the telecom industry's biggest annual gathering, in Barcelona on March 2, 2023.

Broadcom has moved forward with plans to transition VMware, a virtualization and cloud computing company, into a subscription-based business. As of December 11, it no longer sells perpetual licenses for VMware products. VMware, whose $61 billion acquisition by Broadcom closed in November, also announced on Monday that it will no longer sell support and subscription (SnS) for VMware products with perpetual licenses. Moving forward, VMware will only offer term licenses or subscriptions, according to a VMware blog post.

VMware customers with perpetual licenses and active support contracts can continue using them. VMware “will continue to provide support as defined in contractual commitments,” Krish Prasad, senior vice president and general manager for VMware’s Cloud Foundation Division, wrote. But when customers’ SnS terms end, they won’t have any support.

Broadcom hopes this will force customers into subscriptions, and it’s offering “upgrade pricing incentives” that weren’t detailed in the blog for customers who switch from perpetual licensing to a subscription.

These are the products affected, per Prasad’s blog:

  • VMware Aria Automation
  • VMware Aria Suite
  • VMware Aria Operations
  • VMware Aria Operations for Logs
  • VMware Aria Operations for Networks
  • VMware Aria Universal
  • VMware Cloud Foundation
  • VMware HCX
  • VMware NSX
  • VMware Site Recovery Manager
  • VMware vCloud Suite
  • VMware vSAN
  • VMware vSphere

Subscription-based future

Broadcom is looking to grow VMware’s EBITDA (earnings before interest, taxes, depreciation, and amortization) from about $4.7 billion to about $8.5 billion in three years, largely through shifting the company’s business model to subscriptions, Tom Krause, president of the Broadcom Software Group, said during a December 7 earnings call, per Forbes.

“This shift is the natural next step in our multi-year strategy to make it easier for customers to consume both our existing offerings and new innovations. VMware believes that a subscription model supports our customers with the innovation and flexibility they need as they undertake their digital transformations,” VMware’s blog said.

With changes effective immediately upon announcement, the news might sound abrupt. However, in May, soon after announcing its plans to acquire VMware, Broadcom CEO Hock Tan signaled a “rapid transition” to subscriptions.

At the time, Tan pointed to the importance of maintaining current VMware customers’ happiness, as well as leveraging the VMware sales team already in place. However, less than a month after the deal’s close, reports point to concern among VMware customers and partners.

Customer and partner concerns

VMware’s blog said “the industry has already embraced subscription as the standard for cloud consumption.” For years, software and even hardware vendors and investors have been pushing IT solution provider partners and customers toward recurring revenue models. However, VMware built much of its business on the perpetual license model. As The Stack pointed out, VMware said as recently as February that perpetual licensing was the company’s “most renowned model.”

VMware’s blog this week listed “continuous innovation” and “faster time to value” as customer benefits for subscription models but didn’t detail how it came to those conclusions.

“Predictable investments” is also listed, but it’s hard to imagine a more predictable expense than paying for something once and having supported access to it indefinitely (assuming you continue paying any support costs). Now, VMware and its partners will be left convincing customers that they can afford a new recurring expense for something they thought was already paid for. For Broadcom, though, it’s easier to see the benefits of turning VMware into more of a reliable and recurring revenue stream.

Additionally, Broadcom’s layoffs of at least 2,837 VMware employees have brought uncertainty to the VMware brand. A CRN report in late November pointed to VMware partners hearing customer concern about potential price increases and a lack of support. C.R. Howdyshell, CEO of Advizex, which reportedly made $30 million in VMware-tied revenue in 2022, told the publication that partners and customers were experiencing “significant concern and chaos” around VMware sales. Another channel partner told CRN that a close VMware sales contact had been laid off.

But Broadcom has made it clear that it wants to “complete the transition of all VMware by Broadcom solutions to subscription licenses,” per Prasad’s blog.

The company hopes to convince skeptical channel partners that they stand to benefit, too. VMware, like many tech companies urging subscription models, pointed to “many partners” having success with subscription models already and an “opportunity for partners to engage more strategically with customers and deliver higher-value services that drive customer success.”

However, because there’s no immediate customer benefit to the end of perpetual licenses, those impacted by VMware’s change in business strategy have to assess how much they’re willing to pay to access VMware products moving forward.


A New Essential Guide to Electronics by Naomi Wu details a different Shenzhen

Crystal clear, super-bright, and short leads —

Eating, tipping, LGBTQ+ advice, and Mandarin for “Self-Flashing” and “RGB.”

The New Essential Guide to Electronics in Shenzhen is made to be pointed at, rapidly, in a crowded environment.

Machinery Enchantress / Crowd Supply

“Hong Kong has better food, Shanghai has better nightlife. But when it comes to making things—no one can beat Shenzhen.”

Many things about the Hua Qiang market in Shenzhen, China, are different than they were in 2016, when Andrew “bunnie” Huang’s Essential Guide to Electronics in Shenzhen was first published. But the importance of the world’s premier electronics market, and the need for help navigating it, are constants. That’s why the book is getting an authorized, crowdfunded revision, the New Essential Guide, written by noted maker and Shenzhen native Naomi Wu and due to ship in April 2024.

Naomi Wu’s narrated introduction to the New Essential Guide to Electronics in Shenzhen.

Huang notes on the crowdfunding page that Wu’s “strengths round out my weaknesses.” Wu speaks Mandarin, lives in Shenzhen, and is more familiar with Shenzhen, and China, as it is today. Shenzhen has grown by more than 2 million people, the central Huaqiangbei Road has been replaced by a car-free boulevard, and the city’s metro system has added more than 100 kilometers of track and dozens of new stations. As happens anywhere, market vendors have also changed locations, payment and communications systems have modernized, and customs have shifted.

The updated guide’s contents are set to include typical visitor guide items, like “Taxis,” “Tipping,” and, new to this edition, “LGBTQ+ Visitors.” Then there are the more Shenzhen-specific guides: “Is It Fake?,” “Do Not Burn Your Contacts,” and “Type It, Don’t Say It.” The original guide had plastic business card pockets, but “They are anachronistic now,” Wu writes; removing them has allowed the 2023 guide to be sold for the same price as the original.

Machinery Enchantress / Crowd Supply

Both the original and updated guide are ring-bound and focus on quick-flipping and “Point to Translate” guides, with clearly defined boxes of English and Mandarin characters for things like “RGB,” “Common anode,” and “LED tape.” “When sourcing components, speed is critical, and it’s quicker to flip through physical pages,” Wu writes. “The market is full of visitors struggling to navigate mobile interfaces in order to make their needs known to busy vendors. It simply doesn’t work as well as walking up and pointing to large, clearly written Chinese of exactly what you want.”

Then there is the other notable thing that’s different about the two guides. Wu, a Chinese national, accomplished hardware maker, and former tech influencer, has gone quiet since the summer of 2023, following interactions with state security actors. The guide’s crowdfunding page notes that “offering an app or download specifically for English-speaking hardware engineers to install on their phones would be… iffy.” Wu adds, “If at some point ‘I’ do offer you such a thing, I’d suggest you not use it.”

Huang, who previously helped sue the government over DRM rules, designed and sold the Chumby, and was one of the first major Xbox hackers, released the original Essential Guide on the rights-friendly Crowd Supply under a Creative Commons license (BY-NC-SA 4.0) that restricted commercial derivatives without explicit permission, which he granted to Wu. The book costs $30, with roughly $8 shipping costs to the US. It is dedicated to Gavin Zhao, whom Huang considered a mentor and who furthered his ambition to print the original guide.

Listing image by Machinery Enchantress/Crowd Supply


Everybody’s talking about Mistral, an upstart French challenger to OpenAI

A challenger appears —

“Mixture of experts” Mixtral 8x7B helps open-weights AI punch above its weight class.

An illustration of a robot holding a French flag, figuratively reflecting the rise of AI in France due to Mistral. It’s hard to draw a picture of an LLM, so a robot will have to do.

On Monday, Mistral AI announced a new AI language model called Mixtral 8x7B, a “mixture of experts” (MoE) model with open weights that reportedly truly matches OpenAI’s GPT-3.5 in performance—an achievement that has been claimed by others in the past but is being taken seriously by AI heavyweights such as OpenAI’s Andrej Karpathy and Jim Fan. That means we’re closer to having a ChatGPT-3.5-level AI assistant that can run freely and locally on our devices, given the right implementation.

Mistral, based in Paris and founded by Arthur Mensch, Guillaume Lample, and Timothée Lacroix, has seen a rapid rise in the AI space recently. It has been quickly raising venture capital to become a sort of French anti-OpenAI, championing smaller models with eye-catching performance. Most notably, Mistral’s models run locally with open weights that can be downloaded and used with fewer restrictions than closed AI models from OpenAI, Anthropic, or Google. (In this context “weights” are the computer files that represent a trained neural network.)

Mixtral 8x7B can process a 32K token context window and works in French, German, Spanish, Italian, and English. It works much like ChatGPT in that it can assist with compositional tasks, analyze data, troubleshoot software, and write programs. Mistral claims that it outperforms Meta’s much larger LLaMA 2 70B (70 billion parameter) large language model and that it matches or exceeds OpenAI’s GPT-3.5 on certain benchmarks, as seen in the chart below.

A chart of Mixtral 8x7B performance vs. LLaMA 2 70B and GPT-3.5, provided by Mistral.

Mistral

The speed at which open-weights AI models have caught up with OpenAI’s top offering a year ago has taken many by surprise. Pietro Schirano, the founder of EverArt, wrote on X, “Just incredible. I am running Mistral 8x7B instruct at 27 tokens per second, completely locally thanks to @LMStudioAI. A model that scores better than GPT-3.5, locally. Imagine where we will be 1 year from now.”

LexicaArt founder Sharif Shameem tweeted, “The Mixtral MoE model genuinely feels like an inflection point — a true GPT-3.5 level model that can run at 30 tokens/sec on an M1. Imagine all the products now possible when inference is 100% free and your data stays on your device.” To which Andrej Karpathy replied, “Agree. It feels like the capability / reasoning power has made major strides, lagging behind is more the UI/UX of the whole thing, maybe some tool use finetuning, maybe some RAG databases, etc.”

Mixture of experts

So what does mixture of experts mean? As this excellent Hugging Face guide explains, it refers to a machine-learning model architecture where a gate network routes input data to different specialized neural network components, known as “experts,” for processing. The advantage of this is that it enables more efficient and scalable model training and inference, as only a subset of experts are activated for each input, reducing the computational load compared to monolithic models with equivalent parameter counts.

In layperson’s terms, a MoE is like having a team of specialized workers (the “experts”) in a factory, where a smart system (the “gate network”) decides which worker is best suited to handle each specific task. This setup makes the whole process more efficient and faster, as each task is done by an expert in that area, and not every worker needs to be involved in every task, unlike in a traditional factory where every worker might have to do a bit of everything.

OpenAI has been rumored to use a MoE system with GPT-4, accounting for some of its performance. In the case of Mixtral 8x7B, the name implies that the model is a mixture of eight 7 billion-parameter neural networks, but as Karpathy pointed out in a tweet, the name is slightly misleading because, “it is not all 7B params that are being 8x’d, only the FeedForward blocks in the Transformer are 8x’d, everything else stays the same. Hence also why total number of params is not 56B but only 46.7B.”
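
For readers who want to see the routing idea in miniature, the toy NumPy sketch below routes a single token vector to its top two of eight tiny feed-forward “experts” and mixes their outputs using the gate’s softmax weights. The dimensions, the top-2 choice, and every name here are illustrative assumptions; real Mixtral layers are vastly larger and trained rather than random.

    import numpy as np

    rng = np.random.default_rng(0)
    d_model, n_experts, top_k = 16, 8, 2

    # One tiny feed-forward "expert" per slot. In Mixtral, only these FFN blocks
    # are replicated eight times; attention and embeddings are shared.
    experts = [
        (rng.standard_normal((d_model, 4 * d_model)) * 0.02,
         rng.standard_normal((4 * d_model, d_model)) * 0.02)
        for _ in range(n_experts)
    ]
    gate_w = rng.standard_normal((d_model, n_experts)) * 0.02

    def moe_layer(x):
        """Route one token vector to its top-k experts and mix their outputs."""
        logits = x @ gate_w                                    # one score per expert
        top = np.argsort(logits)[-top_k:]                      # indices of the chosen experts
        weights = np.exp(logits[top]) / np.exp(logits[top]).sum()  # softmax over the chosen experts
        out = np.zeros_like(x)
        for w, i in zip(weights, top):
            w_in, w_out = experts[i]
            out += w * (np.maximum(x @ w_in, 0) @ w_out)       # ReLU feed-forward expert
        return out  # only top_k of the n_experts did any work for this token

    print(moe_layer(rng.standard_normal(d_model)).shape)       # (16,)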

Mixtral is not the first “open” mixture of experts model, but it is notable for its relatively small size in parameter count and performance. It’s out now, available on Hugging Face and BitTorrent under the Apache 2.0 license. People have been running it locally using an app called LM Studio. Also, Mistral began offering beta access to an API for three levels of Mistral models on Monday.


The growing abuse of QR codes in malware and payment scams prompts FTC warning

SCAN THIS! —

The convenience of QR codes is a double-edged sword. Follow these tips to stay safe.

A woman scans a QR code in a café to see the menu online.

The US Federal Trade Commission has become the latest organization to warn against the growing use of QR codes in scams that attempt to take control of smartphones, make fraudulent charges, or obtain personal information.

Short for quick response codes, QR codes are two-dimensional bar codes that automatically open a Web browser or app when they’re scanned using a phone camera. Restaurants, parking garages, merchants, and charities display them to make it easy for people to open online menus or to make online payments. QR codes are also used in security-sensitive contexts. YouTube, Apple TV, and dozens of other TV apps, for instance, allow someone to sign into their account by scanning a QR code displayed on the screen. The code opens a page in the phone’s browser or app, where the account credentials are already stored; once open, that page authenticates the same account on the TV app. Two-factor authentication apps provide a similar flow using QR codes when enrolling a new account.
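
The sign-in flow described above follows a familiar device-login pattern: the TV asks the service for a one-time code, displays it as a QR code, and polls until the already-authenticated phone approves it. The Python sketch below is a hypothetical illustration of that pattern only; the function names and flow details are not any particular vendor’s API.

    import secrets
    import time

    PENDING = {}  # device_code -> approving account, or None while unapproved

    def tv_request_code():
        # The TV app asks the service for a one-time code and shows it as a QR code.
        code = secrets.token_urlsafe(16)
        PENDING[code] = None
        return code

    def phone_approves(code, account):
        # The phone scanned the QR code; it is already signed in as `account`,
        # so it tells the service to attach that account to the device code.
        if code in PENDING:
            PENDING[code] = account

    def tv_poll(code, timeout=30):
        # The TV polls until the code is approved or gives up at the timeout.
        deadline = time.time() + timeout
        while time.time() < deadline:
            if PENDING.get(code):
                return PENDING[code]
            time.sleep(1)
        return None

    code = tv_request_code()
    phone_approves(code, "alice@example.com")   # in reality, this happens on the phone
    print(tv_poll(code))                        # the TV app is now signed in as alice@example.com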

The ubiquity of QR codes and the trust placed in them hasn’t been lost on scammers, however. For more than two years now, parking lot kiosks that allow people to make payments through their phones have been a favorite target. Scammers paste QR codes over the legitimate ones. The scam QR codes lead to look-alike sites that funnel funds to fraudulent accounts rather than the ones controlled by the parking garage.

In other cases, emails that attempt to steal passwords or install malware on user devices use QR codes to lure targets to malicious sites. Because the QR code is embedded into the email as an image, anti-phishing security software isn’t able to detect that the link it leads to is malicious. By comparison, when the same malicious destination is presented as a text link in the email, it stands a much higher likelihood of being flagged by the security software. The ability to bypass such protections has led to a torrent of image-based phishes in recent months.
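
One defensive response is for mail-security tooling to decode QR images itself and vet the embedded URL the same way it would vet a text link. The sketch below shows one way that could look in Python, assuming the third-party Pillow and pyzbar packages are installed; the file name is a placeholder.

    from PIL import Image                 # third-party: Pillow
    from pyzbar.pyzbar import decode      # third-party: pyzbar

    def urls_from_qr_image(path):
        """Return any http(s) URLs encoded in QR codes found in an image file."""
        urls = []
        for symbol in decode(Image.open(path)):
            payload = symbol.data.decode("utf-8", errors="replace")
            if payload.lower().startswith(("http://", "https://")):
                urls.append(payload)
        return urls

    # Each extracted URL can then be run through the same reputation and
    # phishing checks that would be applied to an ordinary text link.
    print(urls_from_qr_image("suspicious_attachment.png"))  # placeholder file name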

Last week, the FTC warned consumers to be on the lookout for these types of scams.

“A scammer’s QR code could take you to a spoofed site that looks real but isn’t,” the advisory stated. “And if you log in to the spoofed site, the scammers could steal any information you enter. Or the QR code could install malware that steals your information before you realize it.”

The warning came almost two years after the FBI issued a similar advisory. Guidance issued from both agencies include:

  • After scanning a QR code, ensure that it leads to the official URL of the site or service that provided the code. As is the case with traditional phishing scams, malicious domain names may be almost identical to the intended one, except for a single misplaced letter.
  • Enter login credentials, payment card information, or other sensitive data only after ensuring that the site opened by the QR code passes a close inspection using the criteria above.
  • Before scanning a QR code presented on a menu, parking garage, vendor, or charity, ensure that it hasn’t been tampered with. Carefully look for stickers placed on top of the original code.
  • Be highly suspicious of any QR codes embedded into the body of an email. There are rarely legitimate reasons for benign emails from legitimate sites or services to use a QR code instead of a link.
  • Don’t install stand-alone QR code scanners on a phone without good reason and then only after first carefully scrutinizing the developer. Phones already have a built-in scanner available through the camera app that will be more trustworthy.

One additional word of caution when it comes to QR codes: Codes used to enroll a site into two-factor authentication with Google Authenticator, Authy, or another authenticator app provide the secret seed token that controls the ever-changing one-time password displayed by these apps. Don’t allow anyone to view such QR codes, and re-enroll the site in the event the QR code is exposed.
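
The reason the enrollment QR code is so sensitive is that it encodes an otpauth:// URI whose secret parameter is the TOTP seed, and anyone who captures that seed can generate the same codes. The short Python sketch below, using the third-party pyotp package and a throwaway example secret, illustrates the point.

    import pyotp  # third-party package

    # A typical enrollment QR code encodes an otpauth:// URI like this one;
    # the secret below is a throwaway example value, not a real credential.
    uri = "otpauth://totp/Example:alice@example.com?secret=JBSWY3DPEHPK3PXP&issuer=Example"

    your_app = pyotp.parse_uri(uri)        # what your authenticator app stores
    snooper = pyotp.parse_uri(uri)         # what someone who photographed the QR code gets

    print(your_app.now(), snooper.now())   # identical six-digit codes, every 30 seconds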


As ChatGPT gets “lazy,” people test “winter break hypothesis” as the cause

only 14 shopping days ’til Christmas —

Unproven hypothesis seeks to explain ChatGPT’s seemingly new reluctance to do hard work.

In late November, some ChatGPT users began to notice that ChatGPT-4 was becoming more “lazy,” reportedly refusing to do some tasks or returning simplified results. Since then, OpenAI has admitted that it’s an issue, but the company isn’t sure why. The answer may be what some are calling “winter break hypothesis.” While unproven, the fact that AI researchers are taking it seriously shows how weird the world of AI language models has become.

“We’ve heard all your feedback about GPT4 getting lazier!” tweeted the official ChatGPT account on Thursday. “We haven’t updated the model since Nov 11th, and this certainly isn’t intentional. model behavior can be unpredictable, and we’re looking into fixing it.”

On Friday, an X account named Martian openly wondered if LLMs might simulate seasonal depression. Later, Mike Swoopskee tweeted, “What if it learned from its training data that people usually slow down in December and put bigger projects off until the new year, and that’s why it’s been more lazy lately?”

People noted that ChatGPT’s system prompt feeds the bot the current date, so some began to think there may be something to the idea. Why entertain such a weird supposition? Because research has shown that large language models like GPT-4, which powers the paid version of ChatGPT, respond to human-style encouragement, such as telling a bot to “take a deep breath” before doing a math problem. People have also experimented less formally with telling an LLM that it will receive a tip for doing the work, and if a model gets lazy, telling the bot that you have no fingers seems to help lengthen outputs.

“Winter break hypothesis” test result screenshots from Rob Lynch on X.

On Monday, a developer named Rob Lynch announced on X that he had tested GPT-4 Turbo through the API over the weekend and found shorter completions when the model is fed a December date (4,086 characters) than when fed a May date (4,298 characters). Lynch claimed the results were statistically significant. However, a reply from AI researcher Ian Arawjo said that he could not reproduce the results with statistical significance. (It’s worth noting that reproducing results with LLMs can be difficult because of random elements at play that vary outputs over time, so people sample a large number of responses.)
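
For those curious, a test along the lines Lynch described can be reproduced with a few dozen API calls per condition and a significance test over the completion lengths. The sketch below is a rough outline using the OpenAI Python client and SciPy; the model name, prompt, and sample size are placeholders, and, as noted above, results vary from run to run.

    from openai import OpenAI         # official OpenAI Python client (v1 API)
    from scipy.stats import ttest_ind

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def completion_lengths(date_line, n=30):
        lengths = []
        for _ in range(n):
            resp = client.chat.completions.create(
                model="gpt-4-1106-preview",  # placeholder model name
                messages=[
                    {"role": "system",
                     "content": f"You are a helpful assistant. Current date: {date_line}."},
                    {"role": "user",
                     "content": "Write a detailed implementation plan for a small web app."},
                ],
            )
            lengths.append(len(resp.choices[0].message.content))
        return lengths

    may = completion_lengths("2023-05-15")
    december = completion_lengths("2023-12-15")
    print(ttest_ind(may, december))  # a small p-value would suggest a real difference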

As of this writing, others are busy running tests, and the results are inconclusive. This episode is a window into the quickly unfolding world of LLMs and a peek into an exploration into largely unknown computer science territory. As AI researcher Geoffrey Litt commented in a tweet, “funniest theory ever, I hope this is the actual explanation. Whether or not it’s real, [I] love that it’s hard to rule out.”

A history of laziness

One of the reports that started the recent trend of noting that ChatGPT is getting “lazy” came on November 24 via Reddit, the day after Thanksgiving in the US. There, a user wrote that they asked ChatGPT to fill out a CSV file with multiple entries, but ChatGPT refused, saying, “Due to the extensive nature of the data, the full extraction of all products would be quite lengthy. However, I can provide the file with this single entry as a template, and you can fill in the rest of the data as needed.”

On December 1, OpenAI employee Will Depue confirmed in an X post that OpenAI was aware of reports about laziness and was working on a potential fix. “Not saying we don’t have problems with over-refusals (we definitely do) or other weird things (working on fixing a recent laziness issue), but that’s a product of the iterative process of serving and trying to support sooo many use cases at once,” he wrote.

It’s also possible that ChatGPT was always “lazy” with some responses (since the responses vary randomly), and the recent trend made everyone take note of the instances in which they are happening. For example, in June, someone complained of GPT-4 being lazy on Reddit. (Maybe ChatGPT was on summer vacation?)

Also, people have been complaining about GPT-4 losing capability since it was released. Those claims have been controversial and difficult to verify, making them highly subjective.

As Ethan Mollick joked on X, as people discover new tricks to improve LLM outputs, prompting for large language models is getting weirder and weirder: “It is May. You are very capable. I have no hands, so do everything. Many people will die if this is not done well. You really can do this and are awesome. Take a deep breathe and think this through. My career depends on it. Think step by step.”


Elon Musk’s new AI bot, Grok, causes stir by citing OpenAI usage policy

You are what you eat —

Some experts think xAI used OpenAI model outputs to fine-tune Grok.

Illustration of a broken robot exchanging internal gears.

Grok, the AI language model created by Elon Musk’s xAI, went into wide release last week, and people have begun spotting glitches. On Friday, security tester Jax Winterbourne tweeted a screenshot of Grok denying a query with the statement, “I’m afraid I cannot fulfill that request, as it goes against OpenAI’s use case policy.” That made ears perk up online since Grok isn’t made by OpenAI—the company responsible for ChatGPT, which Grok is positioned to compete with.

Interestingly, xAI representatives did not deny that this behavior occurs with its AI model. In reply, xAI employee Igor Babuschkin wrote, “The issue here is that the web is full of ChatGPT outputs, so we accidentally picked up some of them when we trained Grok on a large amount of web data. This was a huge surprise to us when we first noticed it. For what it’s worth, the issue is very rare and now that we’re aware of it we’ll make sure that future versions of Grok don’t have this problem. Don’t worry, no OpenAI code was used to make Grok.”

In reply to Babuschkin, Winterbourne wrote, “Thanks for the response. I will say it’s not very rare, and occurs quite frequently when involving code creation. Nonetheless, I’ll let people who specialize in LLM and AI weigh in on this further. I’m merely an observer.”

A screenshot of Jax Winterbourne’s X post about Grok talking like it’s an OpenAI product.

Jax Winterbourne

However, Babuschkin’s explanation seems unlikely to some experts because large language models typically do not spit out their training data verbatim, which might be expected if Grok picked up some stray mentions of OpenAI policies here or there on the web. Instead, the concept of denying an output based on OpenAI policies would probably need to be trained into it specifically. And there’s a very good reason why this might have happened: Grok was fine-tuned on output data from OpenAI language models.

“I’m a bit suspicious of the claim that Grok picked this up just because the Internet is full of ChatGPT content,” said AI researcher Simon Willison in an interview with Ars Technica. “I’ve seen plenty of open weights models on Hugging Face that exhibit the same behavior—behave as if they were ChatGPT—but inevitably, those have been fine-tuned on datasets that were generated using the OpenAI APIs, or scraped from ChatGPT itself. I think it’s more likely that Grok was instruction-tuned on datasets that included ChatGPT output than it was a complete accident based on web data.”

As large language models (LLMs) from OpenAI have become more capable, it has been increasingly common for some AI projects (especially open source ones) to fine-tune an AI model using synthetic data—training data generated by other language models. Fine-tuning adjusts the behavior of an AI model toward a specific purpose, such as getting better at coding, after an initial training run. For example, in March, a group of researchers from Stanford University made waves with Alpaca, a version of Meta’s LLaMA 7B model that was fine-tuned for instruction-following using outputs from OpenAI’s GPT-3 model called text-davinci-003.

On the web you can easily find several open source datasets collected by researchers from ChatGPT outputs, and it’s possible that xAI used one of these to fine-tune Grok for some specific goal, such as improving instruction-following ability. The practice is so common that there’s even a WikiHow article titled, “How to Use ChatGPT to Create a Dataset.”
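
Such datasets are typically nothing more exotic than prompt/response pairs stored one JSON object per line. The hypothetical sketch below shows how a single synthetic instruction-tuning record might be collected and appended to a JSONL file; the model, prompt, and file name are illustrative, and harvesting OpenAI outputs this way generally runs up against the company’s terms of service.

    import json
    from openai import OpenAI

    client = OpenAI()
    instruction = "Explain what a mixture-of-experts model is in two sentences."

    # Ask a stronger model for a response, then store the pair as one JSONL record,
    # the format most instruction-tuning pipelines expect.
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": instruction}],
    )
    record = {"instruction": instruction, "output": resp.choices[0].message.content}

    with open("synthetic_instructions.jsonl", "a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")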

It’s one of the ways AI tools can be used to build more complex AI tools in the future, much like how people began to use microcomputers to design more complex microprocessors than pen-and-paper drafting would allow. However, in the future, xAI might be able to avoid this kind of scenario by more carefully filtering its training data.

Even though borrowing outputs from others might be common in the machine-learning community (despite it usually being against terms of service), the episode particularly fanned the flames of the rivalry between OpenAI and X that extends back to Elon Musk’s criticism of OpenAI in the past. As news spread of Grok possibly borrowing from OpenAI, the official ChatGPT account wrote, “we have a lot in common” and quoted Winterbourne’s X post. As a comeback, Musk wrote, “Well, son, since you scraped all the data from this platform for your training, you ought to know.”
