Biz & IT

Dell responds to return-to-office resistance with VPN, badge tracking

Office optics —

Report claims new tracking starts May 13 with unclear consequences.

Signage outside Dell Technologies headquarters in Round Rock, Texas, US, on Monday, Feb. 6, 2023.

After reversing its position on remote work, Dell is reportedly implementing new tracking techniques on May 13 to ensure its workers are following the company’s return-to-office (RTO) policy, The Register reported today, citing anonymous sources.

Dell has allowed people to work remotely for over 10 years. But in February, it issued an RTO mandate, and come May 13, most workers will be classified as either totally remote or hybrid. Starting this month, hybrid workers have to go into a Dell office at least 39 days per quarter. Fully remote workers, meanwhile, are ineligible for promotion, Business Insider reported in March.

Now The Register reports that Dell will track employees’ badge swipes and VPN connections to confirm that workers are in the office for a significant amount of time.

An unnamed source told the publication: “This is likely in response to the official numbers about how many of our staff members chose to remain remote after the RTO mandate.”

Dell’s methods for tracking hybrid workers will also reportedly include a color-coding system. The Register reported that Dell “plans to make weekly site visit data from its badge tracking available to employees through the corporation’s human capital management software and to give them color-coded ratings that summarize their status.” From “consistent” to “limited” presence, the colors are blue, green, yellow, and red.
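
The exact cutoffs behind those colors haven’t been reported. As a purely hypothetical illustration of how weekly badge data could be rolled into such ratings, here is a short Python sketch; the thresholds and function names below are invented and are not Dell’s.

```python
# Hypothetical illustration only: the real thresholds Dell uses are not public.

WEEKS_PER_QUARTER = 13
REQUIRED_DAYS_PER_QUARTER = 39  # per the reported hybrid-worker mandate

def weekly_presence_rating(days_on_site: int) -> str:
    """Map one week of badge swipes to a color rating (invented cutoffs)."""
    target_per_week = REQUIRED_DAYS_PER_QUARTER / WEEKS_PER_QUARTER  # 3 days/week
    if days_on_site >= target_per_week:
        return "blue"    # consistent presence
    elif days_on_site >= 2:
        return "green"
    elif days_on_site == 1:
        return "yellow"
    else:
        return "red"     # limited presence

if __name__ == "__main__":
    for days in (4, 3, 2, 1, 0):
        print(days, "day(s) on site ->", weekly_presence_rating(days))
```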

A different person who reportedly works at Dell said that managers hadn’t shown consistency regarding how many red flags they would consider acceptable. The confusion led the source to tell The Register, “It’s a shit show here.”

An unnamed person reportedly “familiar with Dell” claimed that those failing to show up to a Dell office frequently enough will be referred to Dell COO Jeff Clarke.

Dell’s about-face

Ironically, Clarke used to support the idea of fully remote work post-pandemic. In 2020, he said:

After all of this investment to enable remote everything, we will never go back to the way things were before. Here at Dell, we expect, on an ongoing basis, that 60 percent of our workforce will stay remote or have a hybrid schedule where they work from home mostly and come into the office one or two days a week.

It’s unclear exactly how many of Dell’s workers are remote. The Register reported today that approximately 50 percent of Dell’s US workers are remote, compared to 66 percent of international workers. In March, an anonymous source told Business Insider that 10–15 percent of every team at Dell was remote.

Michael Dell, Dell’s CEO and founder, also used to support remote work and penned a blog in 2022 saying that Dell “found no meaningful differences for team members working remotely or office-based even before the pandemic forced everyone home.”

Some suspect Dell’s suddenly stringent office policy is an attempt to force people to quit so that the company can avoid layoffs. In 2023, Dell laid off 13,000 people, per regulatory filings [PDF].

Dell didn’t respond to Ars’ request for comment. In a statement to The Register, a representative said that Dell believes “in-person connections paired with a flexible approach are critical to drive innovation and value differentiation.”

Questionable policies

News of Dell’s upcoming tracking methods comes amid growing concern about the potentially invasive and aggressive tactics companies have implemented as workers resist RTO policies. Meta, Amazon, Google, and JPMorgan Chase have all reportedly tracked in-office badge swipes. TikTok reportedly launched an app to track badge swipes and to ask workers why they weren’t in the office on days that they were expected to be.

However, the efficacy of RTO mandates is questionable. An examination of 457 companies on the S&P 500 list released in February concluded that RTO mandates don’t drive company value but instead negatively affect worker morale. Analysis of survey data from more than 18,000 working Americans released in March found that flexible workplace policies, including the ability to work remotely completely or part-time and flexible schedules, can help employees’ mental health.

Robot dogs armed with AI-aimed rifles undergo US Marines Special Ops evaluation

The future of warfare —

Quadrupeds being reviewed have automatic targeting systems but require human oversight to fire.

A still image of a robotic quadruped armed with a remote weapons system, captured from a video provided by Onyx Industries.

The United States Marine Forces Special Operations Command (MARSOC) is currently evaluating a new generation of robotic “dogs” developed by Ghost Robotics, with the potential to be equipped with gun systems from defense tech company Onyx Industries, reports The War Zone.

While MARSOC is testing Ghost Robotics’ quadrupedal unmanned ground vehicles (called “Q-UGVs” for short) for various applications, including reconnaissance and surveillance, it’s the possibility of arming them with weapons for remote engagement that may draw the most attention. But it’s not unprecedented: The US Marine Corps has also tested robotic dogs armed with rocket launchers in the past.

MARSOC is currently in possession of two armed Q-UGVs undergoing testing, as confirmed by Onyx Industries staff. Their gun systems are based on Onyx’s SENTRY remote weapon system (RWS), which features an AI-enabled digital imaging system and can automatically detect and track people, drones, or vehicles, reporting potential targets to a remote human operator who could be located anywhere in the world. The system maintains human-in-the-loop control for fire decisions and cannot decide to fire autonomously.

On LinkedIn, Onyx Industries shared a video of a similar system in action.

In a statement to The War Zone, MARSOC states that weaponized payloads are just one of many use cases being evaluated. MARSOC also clarifies that comments made by Onyx Industries to The War Zone regarding the capabilities and deployment of these armed robot dogs “should not be construed as a capability or a singular interest in one of many use cases during an evaluation.” The command further stresses that it is aware of and adheres to all Department of Defense policies concerning autonomous weapons.

The rise of robotic unmanned ground vehicles

An unauthorized video of a gun bolted onto a $3,000 Unitree robodog spread quickly on social media in July 2022 and prompted a response from several robotics companies.

The evaluation of armed robotic dogs reflects a growing interest in small robotic unmanned ground vehicles for military use. While unmanned aerial vehicles (UAVs) have been remotely delivering lethal force under human command for at least two decades, the rise of inexpensive robotic quadrupeds—some available for as little as $1,600—has led to a new round of experimentation with strapping weapons to their backs.

In July 2022, a video of a rifle bolted to the back of a Unitree robodog went viral on social media, eventually leading Boston Dynamics and other robot vendors to issue a pledge that October not to weaponize their robots (with notable exceptions for military uses). In April, we covered a Unitree Go2 robot dog with a flamethrower strapped to its back that is on sale to the general public.

The prospect of deploying armed robotic dogs, even with human oversight, raises significant questions about the future of warfare and the potential risks and ethical implications of increasingly autonomous weapons systems. There’s also the potential for backlash if similar remote weapons systems eventually end up used domestically by police. Such a concern would not be unfounded: In November 2022, we covered a decision by the San Francisco Board of Supervisors to allow the San Francisco Police Department to use lethal robots against suspects.

There’s also concern that the systems will become more autonomous over time. As The War Zone’s Howard Altman and Oliver Parken describe in their article, “While further details on MARSOC’s use of the gun-armed robot dogs remain limited, the fielding of this type of capability is likely inevitable at this point. As AI-enabled drone autonomy becomes increasingly weaponized, just how long a human will stay in the loop, even for kinetic acts, is increasingly debatable, regardless of assurances from some in the military and industry.”

While the technology is still in the early stages of testing and evaluation, Q-UGVs do have the potential to provide reconnaissance and security capabilities that reduce risks to human personnel in hazardous environments. But as armed robotic systems continue to evolve, it will be crucial to address ethical concerns and ensure that their use aligns with established policies and international law.

Ransomware mastermind LockBitSupp reveled in his anonymity—now he’s been ID’d

TABLES TURNED —

The US places a $10 million bounty for the arrest of Dmitry Yuryevich Khoroshev.

Dmitry Yuryevich Khoroshev, aka LockBitSupp

Since at least 2019, a shadowy figure hiding behind several pseudonyms has publicly gloated about extorting millions of dollars from thousands of victims he and his associates had hacked. Now, for the first time, “LockBitSupp” has been unmasked by an international law enforcement team, and a $10 million bounty has been placed for his arrest.

In an indictment unsealed Tuesday, US federal prosecutors unmasked the flamboyant persona as Dmitry Yuryevich Khoroshev, a 31-year-old Russian national. Prosecutors said that during his five years at the helm of LockBit—one of the most prolific ransomware groups—Khoroshev and his subordinates extorted $500 million from some 2,500 victims, roughly 1,800 of which were located in the US. His cut of the revenue was allegedly about $100 million.

Damage in the billions of dollars

“Beyond ransom payments and demands, LockBit attacks also severely disrupted their victims’ operations, causing lost revenue and expenses associated with incident response and recovery,” federal prosecutors wrote. “With these losses included, LockBit caused damage around the world totaling billions of US dollars. Moreover, the data Khoroshev and his LockBit affiliate co-conspirators stole—containing highly sensitive organizational and personal information—remained unsecure and compromised in perpetuity, notwithstanding Khoroshev’s and his co-conspirators’ false promises to the contrary.”

The indictment charges the Russian national with one count of conspiracy to commit fraud, extortion, and related activity in connection with computers, one count of conspiracy to commit wire fraud, eight counts of intentional damage to a protected computer, eight counts of extortion in relation to confidential information from a protected computer, and eight counts of extortion in relation to damage to a protected computer. If convicted, Khoroshev faces a maximum penalty of 185 years in prison.

In addition to the indictment, officials in the US Treasury Department—along with counterparts in the UK and Australia—announced sanctions against Khoroshev. Among other things, the US sanctions allow officials to impose civil penalties on any US person who makes or facilitates payments to the LockBit group. The US State Department also announced a $10 million reward for any information leading to Khoroshev’s arrest and/or conviction.

Rooting out LockBit

Tuesday’s actions come 11 weeks after law enforcement agencies in the US and 10 other countries struck a major blow to the infrastructure LockBit members used to operate their ransomware-as-a-service enterprise. Images federal authorities posted to the dark web site where LockBit named and shamed victims indicated they had taken control of /etc/shadow, a Linux file that stores cryptographically hashed passwords. The file, among the most security-sensitive ones in Linux, can be accessed only by a user with root, the highest level of system privileges.
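
For readers unfamiliar with the file, /etc/shadow stores one colon-separated record per account, with the salted password hash in the second field, and the file itself is readable only with root privileges. The short Python sketch below parses a fabricated record in that format and then checks the real file’s permissions; the sample entry is made up for illustration.

```python
import os
import stat

# A fabricated /etc/shadow-style record (fields are colon-separated):
# name : hashed password : last change : min : max : warn : inactive : expire :
sample = "alice:$6$rounds=5000$saltsalt$abc123hashvalue:19700:0:99999:7:::"

fields = sample.split(":")
print("account:", fields[0])
print("password hash (prefix $6$ = SHA-512 crypt):", fields[1][:20] + "...")
print("last password change (days since epoch):", fields[2])

# On a real Linux system, only root can read the file at all.
try:
    mode = stat.filemode(os.stat("/etc/shadow").st_mode)
    print("/etc/shadow permissions:", mode)  # typically -rw-r----- or stricter
    with open("/etc/shadow") as f:
        f.readline()
    print("running with root privileges: file is readable")
except PermissionError:
    print("read denied: /etc/shadow is only readable with root privileges")
except FileNotFoundError:
    print("/etc/shadow not present on this system")
```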

In all, the authorities said in February, they seized control of 14,000 LockBit-associated accounts and 34 servers located in the Netherlands, Germany, Finland, France, Switzerland, Australia, the US, and the UK. Two LockBit suspects were arrested in Poland and Ukraine, and five indictments and three arrest warrants were issued. The authorities also froze 200 cryptocurrency accounts linked to the ransomware operation. The UK’s National Crime Agency on Tuesday said the number of active LockBit affiliates has fallen from 114 to 69 since the February action, named Operation Cronos.

In mid-March, an Ontario, Canada, man convicted on charges stemming from his work for LockBit was sentenced to four years in prison. Mikhail Vasiliev, 33 years old at the time of sentencing, was arrested in November 2022 and charged with conspiring to infect protected computers with ransomware and sending ransom demands to victims. He pleaded guilty in February to eight counts of cyber extortion, mischief, and weapons charges.

The real-world identity of Khoroshev’s LockBitSupp alter ego has been hotly sought after for years. LockBitSupp thrived on his anonymity in frequent posts to Russian-speaking hacking forums, where he boasted about the prowess and acumen of his work. At one point, he promised a $10 million reward to anyone who revealed his identity. After February’s operation taking down much of the LockBit infrastructure, prosecutors hinted that they knew who LockBitSupp was but stopped short of naming him.

LockBit has operated since at least 2019 and has also been known under the name “ABCD” in the past. Within three years of its founding, the group’s malware was the most widely circulating ransomware. Like most of its peers, LockBit has operated under what’s known as ransomware-as-a-service, in which it provides software and infrastructure to affiliates who use it to do the actual hacking. LockBit and the affiliates then divide any resulting revenue.

Story updated to correct Khoroshev’s age. Initially the State Department said his date of birth was 17 April 1973. Later, the agency said it was 17 April 1993.

Microsoft launches AI chatbot for spies

Adventures in consequential confabulation —

Air-gapping GPT-4 model on secure network won’t prevent it from potentially making things up.

A person using a computer with a computer screen reflected in their glasses.

Microsoft has introduced a GPT-4-based generative AI model designed specifically for US intelligence agencies that operates disconnected from the Internet, according to a Bloomberg report. This reportedly marks the first time Microsoft has deployed a major language model in a secure setting, designed to allow spy agencies to analyze top-secret information without connectivity risks—and to allow secure conversations with a chatbot similar to ChatGPT and Microsoft Copilot. But it may also mislead officials if not used properly due to inherent design limitations of AI language models.

GPT-4 is a large language model (LLM) created by OpenAI that attempts to predict the most likely tokens (fragments of encoded data) in a sequence. It can be used to craft computer code and analyze information. When configured as a chatbot (like ChatGPT), GPT-4 can power AI assistants that converse in a human-like manner. Microsoft has a license to use the technology as part of a deal in exchange for large investments it has made in OpenAI.
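
GPT-4’s internals are proprietary, but the basic loop of any such model (score every token in the vocabulary, pick one, append it, and repeat) can be shown with a toy example. The Python sketch below swaps in a tiny made-up vocabulary and random scores in place of a real neural network, so it only illustrates the token-by-token idea, not GPT-4 itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy vocabulary standing in for a real model's vocabulary of ~100k tokens.
vocab = ["the", "cat", "sat", "on", "mat", "."]

def fake_logits(context: list[str]) -> np.ndarray:
    """Stand-in for a real model: returns one score per vocabulary token."""
    return rng.normal(size=len(vocab))

def next_token(context: list[str]) -> str:
    logits = fake_logits(context)
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                 # softmax -> probability distribution
    return vocab[int(np.argmax(probs))]  # greedy: pick the most likely token

sequence = ["the"]
for _ in range(5):
    sequence.append(next_token(sequence))
print(" ".join(sequence))
```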

According to the report, the new AI service (which does not yet publicly have a name) addresses a growing interest among intelligence agencies in using generative AI for processing classified data, while mitigating risks of data breaches or hacking attempts. ChatGPT normally runs on cloud servers provided by Microsoft, which can introduce data leak and interception risks. Along those lines, the CIA announced its plan to create a ChatGPT-like service last year, but this Microsoft effort is reportedly a separate project.

William Chappell, Microsoft’s chief technology officer for strategic missions and technology, noted to Bloomberg that developing the new system involved 18 months of work to modify an AI supercomputer in Iowa. The modified GPT-4 model is designed to read files provided by its users but cannot access the open Internet. “This is the first time we’ve ever had an isolated version—when isolated means it’s not connected to the Internet—and it’s on a special network that’s only accessible by the US government,” Chappell told Bloomberg.

The new service was activated on Thursday and is now available to about 10,000 individuals in the intelligence community, ready for further testing by relevant agencies. It’s currently “answering questions,” according to Chappell.

One serious drawback of using GPT-4 to analyze important data is that it can potentially confabulate (make up) inaccurate summaries, draw inaccurate conclusions, or provide inaccurate information to its users. Since trained AI neural networks are not databases and operate on statistical probabilities, they make poor factual resources unless augmented with external access to information from another source using a technique such as retrieval augmented generation (RAG).
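
To make the RAG idea concrete, here is a minimal, self-contained Python sketch: it picks the most relevant passage from a small document store by naive word overlap and prepends it to the prompt so the model can answer from supplied text rather than from its trained-in statistics. The scoring method and documents are simplified stand-ins, not anything Microsoft’s system actually uses.

```python
# Minimal retrieval-augmented generation (RAG) sketch. Real systems use vector
# embeddings and a proper search index; word overlap is used here for brevity.

documents = {
    "doc1": "The cafeteria is open from 8 am to 3 pm on weekdays.",
    "doc2": "Badge access to the secure wing requires a level-2 clearance.",
    "doc3": "The VPN gateway is maintained by the network operations team.",
}

def retrieve(question: str) -> str:
    """Return the stored document sharing the most words with the question."""
    q_words = set(question.lower().split())
    scored = {
        name: len(q_words & set(text.lower().split()))
        for name, text in documents.items()
    }
    best = max(scored, key=scored.get)
    return documents[best]

def build_prompt(question: str) -> str:
    context = retrieve(question)
    return (
        "Answer using only the context below.\n"
        f"Context: {context}\n"
        f"Question: {question}\n"
    )

print(build_prompt("Who maintains the VPN gateway?"))
# The assembled prompt would then be sent to the language model.
```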

Given that limitation, it’s entirely possible that GPT-4 could potentially misinform or mislead America’s intelligence agencies if not used properly. We don’t know what oversight the system will have, any limitations on how it can or will be used, or how it can be audited for accuracy. We have reached out to Microsoft for comment.

AI in space: Karpathy suggests AI chatbots as interstellar messengers to alien civilizations

The new golden record —

Andrej Karpathy muses about sending an LLM binary that could “wake up” and answer questions.

Close shot of Cosmonaut astronaut dressed in a gold jumpsuit and helmet, illuminated by blue and red lights, holding a laptop, looking up.

On Thursday, renowned AI researcher Andrej Karpathy, formerly of OpenAI and Tesla, tweeted a lighthearted proposal that large language models (LLMs) like the one that runs ChatGPT could one day be modified to operate in or be transmitted to space, potentially to communicate with extraterrestrial life. He said the idea was “just for fun,” but with his influential profile in the field, the idea may inspire others in the future.

Karpathy’s bona fides in AI almost speak for themselves: He received a PhD from Stanford under computer scientist Dr. Fei-Fei Li in 2015, became one of the founding members of OpenAI as a research scientist, and served as senior director of AI at Tesla between 2017 and 2022. In 2023, Karpathy rejoined OpenAI for a year, leaving this past February. He’s posted several highly regarded tutorials covering AI concepts on YouTube, and whenever he talks about AI, people listen.

Most recently, Karpathy has been working on a project called “llm.c” that implements the training process for OpenAI’s 2019 GPT-2 LLM in pure C, dramatically speeding up the process and demonstrating that working with LLMs doesn’t necessarily require complex development environments. The project’s streamlined approach and concise codebase sparked Karpathy’s imagination.

“My library llm.c is written in pure C, a very well-known, low-level systems language where you have direct control over the program,” Karpathy told Ars. “This is in contrast to typical deep learning libraries for training these models, which are written in large, complex code bases. So it is an advantage of llm.c that it is very small and simple, and hence much easier to certify as Space-safe.”

Our AI ambassador

In his playful thought experiment (titled “Clearly LLMs must one day run in Space”), Karpathy suggested a two-step plan where, initially, the code for LLMs would be adapted to meet rigorous safety standards, akin to “The Power of 10 Rules” adopted by NASA for space-bound software.

This first part he deemed serious: “We harden llm.c to pass the NASA code standards and style guides, certifying that the code is super safe, safe enough to run in Space,” he wrote in his X post. “LLM training/inference in principle should be super safe – it is just one fixed array of floats, and a single, bounded, well-defined loop of dynamics over it. There is no need for memory to grow or shrink in undefined ways, for recursion, or anything like that.”

That’s important because when software is sent into space, it must operate under strict safety and reliability standards. Karpathy suggests that his code, llm.c, likely meets these requirements because it is designed with simplicity and predictability at its core.
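
The property Karpathy is describing is about the shape of the computation rather than any particular language: inference can be written as a fixed, preallocated array of parameters plus a loop with a known bound, with no recursion and no memory growth. The NumPy sketch below illustrates that shape on a toy recurrent update; it is not llm.c, which is written in C.

```python
import numpy as np

# All memory is allocated up front and never grows: one fixed array of floats
# for the "weights," one for the state, and one scratch buffer.
rng = np.random.default_rng(42)
STATE_SIZE = 8
MAX_STEPS = 16  # a single, bounded, well-defined loop

weights = rng.normal(scale=0.3, size=(STATE_SIZE, STATE_SIZE)).astype(np.float32)
state = np.ones(STATE_SIZE, dtype=np.float32)
scratch = np.empty(STATE_SIZE, dtype=np.float32)

for _ in range(MAX_STEPS):                  # no recursion, no unbounded growth
    np.matmul(weights, state, out=scratch)  # reuse the preallocated buffer
    np.tanh(scratch, out=state)             # update the state in place

print("final state:", np.round(state, 3))
```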

In step 2, once this LLM was deemed safe for space conditions, it could theoretically be used as our AI ambassador in space, similar to historic initiatives like the Arecibo message (a radio message sent from Earth to the Messier 13 globular cluster in 1974) and Voyager’s Golden Record (two identical gold records sent on the two Voyager spacecraft in 1977). The idea is to package the “weights” of an LLM—essentially the model’s learned parameters—into a binary file that could then “wake up” and interact with any potential alien technology that might decipher it.
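
The “binary file of weights” half of the idea is also easy to picture: a model’s learned parameters are ultimately a long list of floating-point numbers, so the payload could be a flat file with a small header describing how to interpret it. The Python sketch below writes and reads such a file; the header layout is invented for illustration and has no connection to any real interstellar message.

```python
import struct
import numpy as np

# Pretend these are a model's learned parameters.
weights = np.arange(10, dtype=np.float32) * 0.1

# Invented header: 4-byte magic string, version number, and parameter count.
header = struct.pack("<4sII", b"LLM0", 1, weights.size)

with open("payload.bin", "wb") as f:
    f.write(header)
    f.write(weights.tobytes())

# A recipient who works out the layout can recover the numbers exactly.
with open("payload.bin", "rb") as f:
    magic, version, count = struct.unpack("<4sII", f.read(12))
    recovered = np.frombuffer(f.read(4 * count), dtype=np.float32)

print(magic, version, count, recovered)
```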

“I envision it as a sci-fi possibility and something interesting to think about,” he told Ars. “The idea that it is not us that might travel to stars but our AI representatives. Or that the same could be true of other species.”

These dangerous scammers don’t even bother to hide their crimes

brazenly out in the open —

Cybercriminals openly run dozens of scams across social media and messaging apps.

Benjamin Franklin’s portrait on a one-hundred-dollar bill peers out from behind torn brown craft paper.

Most scammers and cybercriminals operate in the digital shadows and don’t want you to know how they make money. But that’s not the case for the Yahoo Boys, a loose collective of young men in West Africa who are some of the web’s most prolific—and increasingly dangerous—scammers.

Thousands of people are members of dozens of Yahoo Boy groups operating across Facebook, WhatsApp, and Telegram, a WIRED analysis has found. The scammers, who deal in types of fraud that total hundreds of millions of dollars each year, also have dozens of accounts on TikTok, YouTube, and the document-sharing service Scribd that are getting thousands of views.

Inside the groups, there’s a hive of fraudulent activity, with the cybercriminals often showing their faces and sharing scamming techniques with other members. They openly distribute scripts detailing how to blackmail people and how to run sextortion scams (schemes that have driven some victims to take their own lives), sell albums with hundreds of photographs, and advertise fake social media accounts. Among the scams, they’re also using AI to create fake “nude” images of people and to run real-time deepfake video calls.

The Yahoo Boys don’t disguise their activity. Many groups use “Yahoo Boys” in their name as well as other related terms. WIRED’s analysis found 16 Yahoo Boys Facebook groups with almost 200,000 total members, a dozen WhatsApp channels, around 10 Telegram channels, 20 TikTok accounts, a dozen YouTube accounts, and more than 80 scripts on Scribd. And that’s just the tip of the iceberg.

Broadly, the companies do not allow content on their platforms that encourages or promotes criminal behavior. The majority of the Yahoo Boys accounts and groups WIRED identified were removed after we contacted the companies about the groups’ overt existence. Despite these removals, dozens more Yahoo Boys groups and accounts remain online.

“They’re not hiding under different names,” says Kathy Waters, the co-founder and executive director of the nonprofit Advocating Against Romance Scammers, which has tracked the Yahoo Boys for years. Waters says the social media companies are essentially providing the Yahoo Boys with “free office space” to organize and conduct their activities. “They’re selling scripts, selling photos, identifications of people, all online, all on the social media platforms,” she says. “Why these accounts still remain is beyond me.”

The Yahoo Boys aren’t a single, organized group. Instead, they’re a collection of thousands of scammers who work individually or in clusters, often based in Nigeria. Their name comes from their earlier targeting of users of Yahoo services, with links back to the Nigerian Prince email scams of old. Groups in West Africa are often organized into various confraternities, which are cultish gangs.

“Yahoo is a set of knowledge that allows you to conduct scams,” says Gary Warner, the director of intelligence at DarkTower and director of the University of Alabama at Birmingham’s Computer Forensics Research Laboratory. While there are different levels of sophistication of Yahoo Boys, Warner says, many simply operate from their phones. “Most of these threat actors are only using one device,” he says.

The Yahoo Boys run dozens of scams—from romance fraud to business email compromise. When making contact with potential victims, they’ll often “bomb” people by sending hundreds of messages to dating app accounts or Facebook profiles. “They will say anything they can in order to get the next dime in their pocket,” Waters says.

Searching for the Yahoo Boys on Facebook brings up two warnings: Both say the results may be linked to fraudulent activity, which isn’t allowed on the website. Clicking through the warnings reveals Yahoo Boy groups with thousands of members—one had more than 70,000.

Within the groups—alongside posts selling SIM cards and albums with hundreds of pictures—many of the scammers push people toward other messaging platforms such as Meta’s WhatsApp or Telegram. Here, the Yahoo Boys are at their most bold. Some groups and channels on the two platforms receive hundreds of posts per day and are part of their wider web of operations.

After WIRED asked Facebook about the 16 groups we identified, the company removed them, and some WhatsApp groups were deactivated. “Scammers use every platform available to them to defraud people and constantly adapt to avoid getting caught,” says Al Tolan, a Meta spokesperson. Tolan did not directly address the removed accounts or why they were so easy to find. “Purposefully exploiting others for money is against our policies, and we take action when we become aware of it,” Tolan says. “We continue to invest in technology and cooperate with law enforcement so they can prosecute scammers. We also actively share tips on how people can protect themselves, their accounts, and avoid scams.”

Groups on Telegram were removed after WIRED messaged the company’s press office; however, the platform did not respond about why it had removed them.

Across all types of social media, Yahoo Boys scammers share “scripts” that they use to socially manipulate people—these can run to thousands of words and can be copied and pasted to different victims. Many have been online for years. “I’ve seen some scripts that are 30 and 60 layers deep, before the scammer actually would have to go and think of something else to say,” says Ronnie Tokazowski, the chief fraud fighter at Intelligence for Good, which works with cybercrime victims. “It’s 100 percent how they’ll manipulate the people,” Tokazowski says.

Among the many scams, they pretend to be military officers, people offering “hookups,” the FBI, doctors, and people looking for love. One “good morning” script includes around a dozen messages the scammers can send to their targets. “In a world full of deceit and lies, I feel lucky when see the love in your eyes. Good morning,” one says. But things get much darker.

Microsoft plans to lock down Windows DNS like never before. Here’s how.

Translating human-readable domain names into numerical IP addresses has long been fraught with gaping security risks. After all, lookups are rarely end-to-end encrypted. The servers providing domain name lookups provide translations for virtually any IP address—even when they’re known to be malicious. And many end-user devices can easily be configured to stop using authorized lookup servers and instead use malicious ones.

Microsoft on Friday provided a peek at a comprehensive framework that aims to sort out the Domain Name System (DNS) mess so that it’s better locked down inside Windows networks. It’s called ZTDNS (zero trust DNS). Its two main features are (1) encrypted and cryptographically authenticated connections between end-user clients and DNS servers and (2) the ability for administrators to tightly restrict the domains these servers will resolve.

Clearing the minefield

One of the reasons DNS has been such a security minefield is that these two features can be mutually exclusive. Adding cryptographic authentication and encryption to DNS often obscures the visibility admins need to prevent user devices from connecting to malicious domains or detect anomalous behavior inside a network. As a result, DNS traffic is either sent in clear text or it’s encrypted in a way that allows admins to decrypt it in transit through what is essentially an adversary-in-the-middle attack.

Admins are left to choose between equally unappealing options: (1) route DNS traffic in clear text with no means for the server and client device to authenticate each other so malicious domains can be blocked and network monitoring is possible, or (2) encrypt and authenticate DNS traffic and do away with the domain control and network visibility.

ZTDNS aims to solve this decades-old problem by integrating the Windows DNS engine with the Windows Filtering Platform—the core component of the Windows Firewall—directly into client devices.

Jake Williams, VP of research and development at consultancy Hunter Strategies, said the union of these previously disparate engines would allow updates to be made to the Windows firewall on a per-domain name basis. The result, he said, is a mechanism that allows organizations to, in essence, tell clients “only use our DNS server, that uses TLS, and will only resolve certain domains.” Microsoft calls this DNS server or servers the “protective DNS server.”

By default, the firewall will deny resolutions to all domains except those enumerated in allow lists. A separate allow list will contain IP address subnets that clients need to run authorized software. A key challenge will be making this work at scale inside an organization with rapidly changing needs. Networking security expert Royce Williams (no relation to Jake Williams) called this a “sort of a bidirectional API for the firewall layer, so you can both trigger firewall actions (by input to the firewall) and trigger external actions based on firewall state (output from the firewall). So instead of having to reinvent the firewall wheel if you are an AV vendor or whatever, you just hook into WFP.”
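
ZTDNS itself lives in the Windows DNS client and the Windows Filtering Platform, but the allow-list policy it enforces (refuse to resolve anything not explicitly approved) is straightforward to illustrate. The Python sketch below wraps an ordinary resolver call in such a check; it is a conceptual stand-in rather than Microsoft’s implementation, and the allow-listed domains are placeholders.

```python
import socket

# Placeholder allow list; ZTDNS would push this policy down to the client.
ALLOWED_DOMAINS = {"example.com", "www.example.com", "login.example.com"}

class ResolutionBlocked(Exception):
    pass

def resolve(hostname: str) -> list[str]:
    """Resolve a name only if it (or a parent domain) is on the allow list."""
    labels = hostname.lower().rstrip(".").split(".")
    candidates = {".".join(labels[i:]) for i in range(len(labels))}
    if not candidates & ALLOWED_DOMAINS:
        raise ResolutionBlocked(f"{hostname} is not on the allow list")
    infos = socket.getaddrinfo(hostname, None, proto=socket.IPPROTO_TCP)
    return sorted({info[4][0] for info in infos})

if __name__ == "__main__":
    print(resolve("www.example.com"))   # permitted
    try:
        resolve("malicious.invalid")    # denied before any lookup happens
    except ResolutionBlocked as err:
        print("blocked:", err)
```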

Counterfeit Cisco gear ended up in US military bases, used in combat operations

Cisno —

“One of the largest counterfeit-trafficking operations ever.”

Cisco Systems headquarters in San Jose, California, US, on Monday, Aug. 14, 2023.

A Florida resident was sentenced to 78 months for running a counterfeit scam that generated $100 million in revenue from fake networking gear and put the US military’s security at risk, the US Department of Justice (DOJ) announced Thursday.

Onur Aksoy, aka Ron Aksoy and Dave Durden, pleaded guilty on June 5, 2023, to two counts of an indictment charging him with conspiring with others to traffic in counterfeit goods, to commit mail fraud, and to commit wire fraud. His sentence, handed down on May 1, also includes an order to pay $100 million in restitution to Cisco, a $40,000 fine, and three years of supervised release. Aksoy will also have to pay his victims a sum that a court will determine at an unspecified future date, the DOJ said.

According to the indictment [PDF], Aksoy began plotting the scam around August 2013, and the operation ran until at least April 2022. Aksoy used at least 19 companies and about 15 Amazon storefronts, 10 eBay ones, and direct sales—known collectively as Pro Network Entities—to sell tens of thousands of computer networking devices. He imported the products from China and Hong Kong and used fake Cisco packaging, labels, and documents to sell them as new and real. Legitimate versions of the products would’ve sold for over $1 billion, per the indictment.

The DOJ’s announcement this week said the devices had an estimated retail value of “hundreds of millions of dollars” and that Aksoy personally received millions of dollars.

Fake Cisco tech used in Air Force, Army, and Navy applications

The US military used gear purchased from Aksoy’s scheme, which jeopardized sensitive applications, including support platforms for US fighter jets and other types of military aircraft, per government officials.

In a statement this week, Bryan Denny, special agent in charge of the US Department of Defense (DoD) Office of Inspector General, Defense Criminal Investigative Service in the Western Field Office, said that Aksoy “knowingly defrauded the Department of Defense by introducing counterfeit products into its supply chain that routinely failed or did not work at all.” He added:

In doing so, he sold counterfeit Cisco products to the DoD that were found on numerous military bases and in various systems, including but not limited to US Air Force F-15 and US Navy P-8 aircraft flight simulators.

The DOJ’s announcement said that Aksoy’s counterfeit devices ended up “used in highly sensitive military and governmental applications—including classified information systems—some involving combat and non-combat operations of the US Navy, US Air Force, and US Army, including platforms supporting the F-15, F-18, and F-22 fighter jets, AH-64 Apache attack helicopter, P-8 maritime patrol aircraft, and B-52 Stratofortress bomber aircraft.”

Devices purchased through the scam also wound up in hospitals and schools, the announcement said.

Microsoft ties executive pay to security following multiple failures and breaches

lock it down —

Microsoft has been criticized for “preventable” failures and poor communication.

A PC running Windows 11.

It’s been a bad couple of years for Microsoft’s security and privacy efforts. Misconfigured endpoints, rogue security certificates, and weak passwords have all caused or risked the exposure of sensitive data, and Microsoft has been criticized by security researchers, US lawmakers, and regulatory agencies for how it has responded to and disclosed these threats.

The most high-profile of these breaches involved a China-based hacking group named Storm-0558, which breached Microsoft’s Azure service and collected data for over a month in mid-2023 before being discovered and driven out. After months of ambiguity, Microsoft disclosed that a series of security failures gave Storm-0558 access to an engineer’s account, which allowed Storm-0558 to collect data from 25 of Microsoft’s Azure customers, including US federal agencies.

In January, Microsoft disclosed that it had been breached again, this time by Russian state-sponsored hacking group Midnight Blizzard. The group was able “to compromise a legacy non-production test tenant account” to gain access to Microsoft’s systems for “as long as two months.”

All of this culminated in a report (PDF) from the US Cyber Safety Review Board, which castigated Microsoft for its “inadequate” security culture, its “inaccurate public statements,” and its response to “preventable” security breaches.

To attempt to turn things around, Microsoft announced something it called the “Secure Future Initiative” in November 2023. As part of that initiative, Microsoft today announced a series of plans and changes to its security practices, including a few changes that have already been made.

“We are making security our top priority at Microsoft, above all else—over all other features,” wrote Microsoft Security Executive Vice President Charlie Bell. “We’re expanding the scope of SFI, integrating the recent recommendations from the CSRB as well as our learnings from Midnight Blizzard to ensure that our cybersecurity approach remains robust and adaptive to the evolving threat landscape.”

As part of these changes, Microsoft will also make its Senior Leadership Team’s pay partially dependent on whether the company is “meeting our security plans and milestones,” though Bell didn’t specify how much executive pay would be dependent on meeting those security goals.

Microsoft’s post describes three security principles (“secure by design,” “secure by default,” and “secure operations”) and six “security pillars” meant to address different weaknesses in Microsoft’s systems and development practices. The company says it plans to secure 100 percent of all its user accounts with “securely managed, phishing-resistant multifactor authentication,” enforce least-privilege access across all applications and user accounts, improve network monitoring and isolation, and retain all system security logs for at least two years, among other promises. Microsoft is also planning to put new deputy Chief Information Security Officers on different engineering teams to track their progress and report back to the executive team and board of directors.

As for concrete fixes that Microsoft has already implemented, Bell writes that Microsoft has “implemented automatic enforcement of multifactor authentication by default across more than 1 million Microsoft Entra ID tenants within Microsoft,” removed 730,000 old and/or insecure apps “to date across production and corporate tenants,” expanded its security logging, and adopted the Common Weakness Enumeration (CWE) standard for its security disclosures.

In addition to Bell’s public security promises, The Verge has obtained and published an internal memo from Microsoft CEO Satya Nadella that re-emphasizes the company’s publicly stated commitment to security. Nadella also says that improving security should be prioritized over adding new features, something that may affect the constant stream of tweaks and changes that Microsoft releases for Windows 11 and other software.

“The recent findings by the Department of Homeland Security’s Cyber Safety Review Board (CSRB) regarding the Storm-0558 cyberattack, from summer 2023, underscore the severity of the threats facing our company and our customers, as well as our responsibility to defend against these increasingly sophisticated threat actors,” writes Nadella. “If you’re faced with the tradeoff between security and another priority, your answer is clear: Do security. In some cases, this will mean prioritizing security above other things we do, such as releasing new features or providing ongoing support for legacy systems.”

Maximum-severity GitLab flaw allowing account hijacking under active exploitation

A 10 OUT OF 10 —

The threat is potentially grave because it could be used in supply-chain attacks.

A maximum severity vulnerability that allows hackers to hijack GitLab accounts with no user interaction required is now under active exploitation, federal government officials warned as data showed that thousands of users had yet to install a patch released in January.

A change GitLab implemented in May 2023 made it possible for users to initiate password changes through links sent to secondary email addresses. The move was designed to permit resets when users didn’t have access to the email address used to establish the account. In January, GitLab disclosed that the feature allowed attackers to send reset emails to accounts they controlled and from there click on the embedded link and take over the account.
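
In other words, the reset flow would mail the link to whatever addresses it was handed rather than only to addresses already verified on the account. The simplified Python sketch below contrasts that pattern with the obvious fix; it is a schematic illustration of the bug class as described above, not GitLab’s actual code, and the account data and URLs are invented.

```python
import secrets

# Schematic illustration of the bug class described above; not GitLab's code.

ACCOUNTS = {
    "victim": {"verified_emails": {"victim@example.com"}},
}

def send_email(address: str, body: str) -> None:
    print(f"[mail to {address}] {body}")

def vulnerable_reset(username: str, requested_emails: list[str]) -> None:
    """Sends the reset link to addresses taken from the request itself."""
    token = secrets.token_urlsafe(16)
    for address in requested_emails:  # attacker controls this list
        send_email(address, f"reset link: https://git.example/reset/{token}")

def fixed_reset(username: str, requested_emails: list[str]) -> None:
    """Only ever mails addresses already verified for the account."""
    token = secrets.token_urlsafe(16)
    verified = ACCOUNTS[username]["verified_emails"]
    for address in set(requested_emails) & verified:
        send_email(address, f"reset link: https://git.example/reset/{token}")

# The attacker slips their own address into the reset request:
vulnerable_reset("victim", ["victim@example.com", "attacker@evil.example"])
fixed_reset("victim", ["victim@example.com", "attacker@evil.example"])
```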

While exploits require no user interaction, hijackings work only against accounts that aren’t configured to use multifactor authentication. Even with MFA, accounts remain vulnerable to password resets, but attackers are ultimately unable to access them, allowing the rightful owner to change the reset password. The vulnerability, tracked as CVE-2023-7028, carries a severity rating of 10 out of 10.

On Wednesday, the US Cybersecurity and Infrastructure Security Agency said it is aware of “evidence of active exploitation” and added the vulnerability to its list of known exploited vulnerabilities. CISA provided no details about the in-the-wild attacks. A GitLab representative declined to provide specifics about the active exploitation of the vulnerability.

The vulnerability, classified as an improper access control flaw, could pose a grave threat. GitLab software typically has access to multiple development environments belonging to users. With the ability to access them and surreptitiously introduce changes, attackers could sabotage projects or plant backdoors that could infect anyone using software built in the compromised environment. An example of a similar supply chain attack is the one that hit SolarWinds in 2020 and pushed malware to more than 18,000 of its customers, 100 of whom received follow-on hacks. Other recent examples of supply chain attacks are here, here, and here.

These sorts of attacks are powerful. By hacking a single, carefully selected target, attackers gain the means to infect thousands of downstream users, often without requiring them to take any action at all.

According to Internet scans performed by security organization Shadowserver, more than 2,100 IP addresses showed they were hosting one or more vulnerable GitLab instances.

The biggest concentration of IP addresses was in India, followed by the US, Indonesia, Algeria, and Thailand.

The number of IP addresses showing vulnerable instances has fallen over time. Shadowserver shows that there were more than 5,300 addresses on January 22, one week after GitLab issued the patch.

CISA has ordered all civilian federal agencies that have yet to patch the vulnerability to do so immediately. The agency made no mention of MFA, but any GitLab users who haven’t already done so should enable it, ideally with a form that complies with the FIDO industry standard.

GitLab users should also remember that patching does nothing to secure systems that have already been breached through exploits. GitLab has published incident response guidance here.

Hacker free-for-all fights for control of home and office routers everywhere

Rows of 1950s-style robots operate computer workstations.

Cybercriminals and spies working for nation-states are surreptitiously coexisting inside the same compromised name-brand routers as they use the devices to disguise attacks motivated both by financial gain and strategic espionage, researchers said.

In some cases, the coexistence is peaceful, as financially motivated hackers provide spies with access to already compromised routers in exchange for a fee, researchers from security firm Trend Micro reported Wednesday. In other cases, hackers working in nation-state-backed advanced persistent threat groups take control of devices previously hacked by the cybercrime groups. Sometimes the devices are independently compromised multiple times by different groups. The result is a free-for-all inside routers and, to a lesser extent, VPN devices and virtual private servers provided by hosting companies.

“Cybercriminals and Advanced Persistent Threat (APT) actors share a common interest in proxy anonymization layers and Virtual Private Network (VPN) nodes to hide traces of their presence and make detection of malicious activities more difficult,” Trend Micro researchers Feike Hacquebord and Fernando Merces wrote. “This shared interest results in malicious internet traffic blending financial and espionage motives.”

Pawn Storm, a spammer, and a proxy service

A good example is a network made up primarily of EdgeRouter devices sold by manufacturer Ubiquiti. After the FBI discovered the network had been infected by a Kremlin-backed group and used as a botnet to camouflage ongoing attacks targeting governments, militaries, and other organizations worldwide, the bureau commenced an operation in January to temporarily disinfect the devices.

The Russian hackers gained control after the devices were already infected with Moobot, which is botnet malware used by financially motivated threat actors not affiliated with the Russian government. These threat actors installed Moobot after first exploiting publicly known default administrator credentials that hadn’t been removed from the devices by the people who owned them. The Russian hackers—known by a variety of names including Pawn Storm, APT28, Forest Blizzard, Sofacy, and Sednit—then exploited a vulnerability in the Moobot malware and used it to install custom scripts and malware that turned the botnet into a global cyber espionage platform.

The Trend Micro researchers said that Pawn Storm was using the hijacked botnet to proxy (1) logins that used stolen account credentials and (2) attacks that exploited a critical zero-day vulnerability in Microsoft Exchange that went unfixed until March 2023. The zero-day exploits allowed Pawn Storm to obtain the cryptographic hash of users’ Outlook passwords simply by sending them a specially formatted email. Once in possession of the hash, Pawn Storm performed a so-called NTLMv2 hash relay attack that funneled logins to the user accounts through one of the botnet devices. Microsoft provided a diagram of the attack pictured below:

Trend Micro observed the same botnet being used to send spam with pharmaceutical themes that have the hallmarks of what’s known as the Canadian Pharmacy gang. Yet another group installed malware known as Ngioweb on botnet devices. Ngioweb was first found in 2019 running on routers from DLink, Netgear, and other manufacturers, as well as other devices running Linux on top of x86, ARM, and MIPS hardware. The purpose of Ngioweb is to provide proxies individuals can use to route their online activities through a series of regularly changing IP addresses, particularly those located in the US with reputations for trustworthiness. It’s not clear precisely who uses the Ngioweb-powered service.

The Trend Micro researchers wrote:

In the specific case of the compromised Ubiquiti EdgeRouters, we observed that a botnet operator has been installing backdoored SSH servers and a suite of scripts on the compromised devices for years without much attention from the security industry, allowing persistent access. Another threat actor installed the Ngioweb malware that runs only in memory to add the bots to a commercially available residential proxy botnet. Pawn Storm most likely easily brute forced the credentials of the backdoored SSH servers and thus gained access to a pool of EdgeRouter devices they could abuse for various purposes.

The researchers provided the following table, summarizing the botnet-sharing arrangement among Pawn Storm and the two other groups, tracked as Water Zmeu and Water Barghest:

It’s unclear if either of the groups was responsible for installing the previously mentioned Moobot malware that the FBI reported finding on the devices. If not, that would mean routers were independently infected by three financially motivated groups, in addition to Pawn Storm, further underscoring the ongoing rush by multiple threat groups to establish secret listening posts inside routers. Trend Micro researchers weren’t available to clarify.

The post went on to report that while the January operation by the FBI put a dent in the infrastructure Pawn Storm depended on, legal constraints prevented the operation from stopping reinfection. What’s more, the botnet also comprised virtual private servers and Raspberry Pi devices that weren’t affected by the FBI action.

“This means that despite the efforts of law enforcement, Pawn Storm still has access to many other compromised assets, including EdgeServers,” the Trend Micro report said. “For example, IP address 32[.]143[.]50[.]222 was used as an SMB reflector around February 8, 2024. The same IP address was used as a proxy in a credential phishing attack on February 6 2024 against various government officials around the world.”

Anthropic releases Claude AI chatbot iOS app

AI in your pocket —

Anthropic finally comes to mobile, launches plan for teams that includes 200K context window.

The Claude AI iOS app running on an iPhone.

On Wednesday, Anthropic announced the launch of an iOS mobile app for its Claude 3 AI language models, which are similar to OpenAI’s ChatGPT. It also introduced a new subscription tier designed for group collaboration. Before the app launch, Claude was only available through a website, an API, and third-party apps that integrated Claude through that API.

Like the ChatGPT app, Claude’s new mobile app serves as a gateway to chatbot interactions, and it also allows uploading photos for analysis. While it’s only available on Apple devices for now, Anthropic says that an Android app is coming soon.

Anthropic rolled out the Claude 3 large language model (LLM) family in March, featuring three different model sizes: Claude Opus, Claude Sonnet, and Claude Haiku. Currently, the app utilizes Sonnet for regular users and Opus for Pro users.
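
For developers who had been integrating Claude through the API, the building blocks look roughly like the snippet below. It assumes Anthropic’s official Python SDK (pip install anthropic), an ANTHROPIC_API_KEY environment variable, and the Claude 3 Sonnet model identifier current as of this writing; treat the details as a sketch rather than official documentation.

```python
import os
import anthropic  # Anthropic's official Python SDK: pip install anthropic

# The client can also read ANTHROPIC_API_KEY from the environment by default.
client = anthropic.Anthropic(api_key=os.environ["ANTHROPIC_API_KEY"])

message = client.messages.create(
    model="claude-3-sonnet-20240229",  # Sonnet; Pro/Team users can pick Opus
    max_tokens=256,
    messages=[
        {"role": "user", "content": "Summarize what a context window is."}
    ],
)

print(message.content[0].text)
```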

While Anthropic has been a key player in the AI field for several years, it’s entering the mobile space after many of its competitors have already established footprints on mobile platforms. OpenAI released its ChatGPT app for iOS in May 2023, with an Android version arriving two months later. Microsoft released a Copilot iOS app in January. Google Gemini is available through the Google app on iPhone.

Screenshots of the Claude AI iOS app running on an iPhone.

The app is freely available to all users of Claude, including those using the free version, subscribers paying $20 per month for Claude Pro, and members of the newly introduced Claude Team plan. Conversation history is saved and shared between the web app version of Claude and the mobile app version after logging in.

Speaking of that Team plan, it’s designed for groups of at least five and is priced at $30 per seat per month. It offers more chat queries (higher rate limits), access to all three Claude models, and a larger context window (200K tokens) for processing lengthy documents or maintaining detailed conversations. It also includes group admin tools and billing management, and users can easily switch between Pro and Team plans.
