
The growing abuse of QR codes in malware and payment scams prompts FTC warning

SCAN THIS! —

The convenience of QR codes is a double-edged sword. Follow these tips to stay safe.

A woman scans a QR code in a café to see the menu online.


The US Federal Trade Commission has become the latest organization to warn against the growing use of QR codes in scams that attempt to take control of smartphones, make fraudulent charges, or obtain personal information.

Short for quick response codes, QR codes are two-dimensional bar codes that automatically open a Web browser or app when they’re scanned using a phone camera. Restaurants, parking garages, merchants, and charities display them to make it easy for people to open online menus or to make online payments. QR codes are also used in security-sensitive contexts. YouTube, Apple TV, and dozens of other TV apps, for instance, allow someone to sign into their account by scanning a QR code displayed on the screen. The code opens a page in a browser or app on the phone, where the account password is already stored. Once open, the page authenticates the account so it can also be used in the TV app. Two-factor authentication apps provide a similar flow using QR codes when enrolling a new account.

The ubiquity of QR codes and the trust placed in them hasn’t been lost on scammers, however. For more than two years now, parking lot kiosks that allow people to make payments through their phones have been a favorite target. Scammers paste QR codes over the legitimate ones. The scam QR codes lead to look-alike sites that funnel funds to fraudulent accounts rather than the ones controlled by the parking garage.

In other cases, emails that attempt to steal passwords or install malware on user devices use QR codes to lure targets to malicious sites. Because the QR code is embedded into the email as an image, anti-phishing security software isn’t able to detect that the link it leads to is malicious. By comparison, when the same malicious destination is presented as a text link in the email, it stands a much higher likelihood of being flagged by the security software. The ability to bypass such protections has led to a torrent of image-based phishes in recent months.
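
For illustration, here is a minimal Python sketch of how a mail filter or a cautious user could extract the destination URL from an image-borne QR code and vet it like an ordinary text link. It assumes the third-party Pillow and pyzbar packages, and the allowlisted hostnames are hypothetical.

```python
# Sketch: pull the URL out of a QR code embedded in an email image so it can be
# vetted the same way a plain-text link would be. Assumes the third-party
# Pillow and pyzbar packages are installed; the allowlist below is hypothetical.
from urllib.parse import urlparse

from PIL import Image
from pyzbar.pyzbar import decode

TRUSTED_HOSTS = {"example-parking.com", "pay.example-parking.com"}  # hypothetical

def qr_url_from_image(path: str) -> str | None:
    """Return the first URL-looking payload found in the image, or None."""
    for symbol in decode(Image.open(path)):
        payload = symbol.data.decode("utf-8", errors="replace")
        if payload.lower().startswith(("http://", "https://")):
            return payload
    return None

def looks_trustworthy(url: str) -> bool:
    """Very rough check: exact hostname match against a known-good list."""
    host = (urlparse(url).hostname or "").lower()
    return host in TRUSTED_HOSTS

if __name__ == "__main__":
    url = qr_url_from_image("attachment.png")
    if url is None:
        print("No QR-encoded URL found.")
    else:
        verdict = "looks expected" if looks_trustworthy(url) else "SUSPICIOUS"
        print(f"{url} -> {verdict}")
```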

Last week, the FTC warned consumers to be on the lookout for these types of scams.

“A scammer’s QR code could take you to a spoofed site that looks real but isn’t,” the advisory stated. “And if you log in to the spoofed site, the scammers could steal any information you enter. Or the QR code could install malware that steals your information before you realize it.”

The warning came almost two years after the FBI issued a similar advisory. Guidance issued by both agencies includes:

  • After scanning a QR code, ensure that it leads to the official URL of the site or service that provided the code. As is the case with traditional phishing scams, malicious domain names may be almost identical to the intended one, except for a single misplaced letter.
  • Enter login credentials, payment card information, or other sensitive data only after ensuring that the site opened by the QR code passes a close inspection using the criteria above.
  • Before scanning a QR code on a menu, at a parking garage, or from a vendor or charity, ensure that it hasn’t been tampered with. Carefully look for stickers placed on top of the original code.
  • Be highly suspicious of any QR codes embedded into the body of an email. There are rarely legitimate reasons for benign emails from legitimate sites or services to use a QR code instead of a link.
  • Don’t install a stand-alone QR code scanner app on a phone without good reason, and even then only after carefully scrutinizing the developer. Phones already have a built-in scanner available through the camera app that will be more trustworthy.

One additional word of caution when it comes to QR codes: codes used to enroll a site into two-factor authentication with Google Authenticator, Authy, or another authenticator app contain the secret seed token that controls the ever-changing one-time passwords displayed by these apps. Don’t allow anyone to view such QR codes. Re-enroll the site in the event the QR code is exposed.
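
To see why exposure matters, here is a small Python sketch of what a 2FA enrollment QR code typically encodes: an otpauth:// provisioning URI containing the secret seed. It relies on the third-party pyotp and qrcode packages, and the account name and issuer are made up for illustration.

```python
# Sketch: what a 2FA enrollment QR code actually carries. The otpauth:// URI
# below embeds the secret seed; anyone who scans or photographs the QR code can
# regenerate the same one-time codes. Assumes the third-party pyotp and qrcode
# packages; the secret, account, and issuer names are made up for illustration.
import pyotp
import qrcode

secret = pyotp.random_base32()                      # the seed the QR code exposes
uri = pyotp.totp.TOTP(secret).provisioning_uri(
    name="alice@example.com", issuer_name="ExampleSite"
)
qrcode.make(uri).save("enroll.png")                 # the image an authenticator app would scan

# Anyone who recovers the URI (or the secret inside it) can mint valid codes:
print("Current one-time code:", pyotp.TOTP(secret).now())
```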


As ChatGPT gets “lazy,” people test “winter break hypothesis” as the cause

only 14 shopping days ’til Christmas —

Unproven hypothesis seeks to explain ChatGPT’s seemingly new reluctance to do hard work.

A hand moving a wooden calendar piece.

In late November, some ChatGPT users began to notice that ChatGPT-4 was becoming more “lazy,” reportedly refusing to do some tasks or returning simplified results. Since then, OpenAI has admitted that it’s an issue, but the company isn’t sure why. The answer may be what some are calling the “winter break hypothesis.” While unproven, the fact that AI researchers are taking it seriously shows how weird the world of AI language models has become.

“We’ve heard all your feedback about GPT4 getting lazier!” tweeted the official ChatGPT account on Thursday. “We haven’t updated the model since Nov 11th, and this certainly isn’t intentional. model behavior can be unpredictable, and we’re looking into fixing it.”

On Friday, an X account named Martian openly wondered if LLMs might simulate seasonal depression. Later, Mike Swoopskee tweeted, “What if it learned from its training data that people usually slow down in December and put bigger projects off until the new year, and that’s why it’s been more lazy lately?”

Since the system prompt for ChatGPT feeds the bot the current date, some people began to think there may be something to the idea. Why entertain such a weird supposition? Because research has shown that large language models like GPT-4, which powers the paid version of ChatGPT, respond to human-style encouragement, such as telling a bot to “take a deep breath” before doing a math problem. People have also experimented less formally with telling an LLM that it will receive a tip for doing the work, and when a model gets lazy, telling the bot that you have no fingers seems to help lengthen outputs.

“Winter break hypothesis” test result screenshots from Rob Lynch on X.

On Monday, a developer named Rob Lynch announced on X that he had tested GPT-4 Turbo through the API over the weekend and found shorter completions when the model is fed a December date (4,086 characters) than when fed a May date (4,298 characters). Lynch claimed the results were statistically significant. However, a reply from AI researcher Ian Arawjo said that he could not reproduce the results with statistical significance. (It’s worth noting that reproducing results with LLMs can be difficult because of random elements at play that vary outputs over time, so people sample a large number of responses.)
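
For readers who want to poke at the idea themselves, here is a rough Python sketch of this kind of test. It is not Lynch's actual script: it samples completions with a May versus a December date in the system prompt and compares response lengths with a Welch's t-test. The model name, prompt, and sample size are assumptions, and it requires an OPENAI_API_KEY plus the openai and scipy packages.

```python
# Sketch of a "winter break hypothesis" style test: sample completions with a
# December vs. May date in the system prompt and compare response lengths.
# This is a rough reconstruction, not Rob Lynch's actual script; the model
# name, prompt, and sample size are assumptions, and it needs OPENAI_API_KEY.
from openai import OpenAI
from scipy import stats

client = OpenAI()
MODEL = "gpt-4-1106-preview"   # assumed GPT-4 Turbo snapshot
TASK = "Write a detailed implementation plan for migrating a web app to a new database."
N = 30                         # LLM outputs vary, so sample many responses per condition

def completion_lengths(date_line: str) -> list[int]:
    lengths = []
    for _ in range(N):
        resp = client.chat.completions.create(
            model=MODEL,
            messages=[
                {"role": "system", "content": f"You are a helpful assistant. Current date: {date_line}."},
                {"role": "user", "content": TASK},
            ],
            temperature=1.0,
        )
        lengths.append(len(resp.choices[0].message.content or ""))
    return lengths

may = completion_lengths("2023-05-15")
december = completion_lengths("2023-12-15")
t_stat, p_value = stats.ttest_ind(may, december, equal_var=False)
print(f"May mean: {sum(may)/len(may):.0f} chars, December mean: {sum(december)/len(december):.0f} chars")
print(f"Welch's t-test: t={t_stat:.2f}, p={p_value:.4f}")
```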

As of this writing, others are busy running tests, and the results are inconclusive. This episode is a window into the quickly unfolding world of LLMs and a peek into an exploration into largely unknown computer science territory. As AI researcher Geoffrey Litt commented in a tweet, “funniest theory ever, I hope this is the actual explanation. Whether or not it’s real, [I] love that it’s hard to rule out.”

A history of laziness

One of the reports that started the recent trend of noting that ChatGPT is getting “lazy” came on November 24 via Reddit, the day after Thanksgiving in the US. There, a user wrote that they asked ChatGPT to fill out a CSV file with multiple entries, but ChatGPT refused, saying, “Due to the extensive nature of the data, the full extraction of all products would be quite lengthy. However, I can provide the file with this single entry as a template, and you can fill in the rest of the data as needed.”

On December 1, OpenAI employee Will Depue confirmed in an X post that OpenAI was aware of reports about laziness and was working on a potential fix. “Not saying we don’t have problems with over-refusals (we definitely do) or other weird things (working on fixing a recent laziness issue), but that’s a product of the iterative process of serving and trying to support sooo many use cases at once,” he wrote.

It’s also possible that ChatGPT was always “lazy” with some responses (since the responses vary randomly), and the recent trend simply made everyone take note of the instances in which it happens. For example, in June, someone complained of GPT-4 being lazy on Reddit. (Maybe ChatGPT was on summer vacation?)

Also, people have been complaining about GPT-4 losing capability since it was released. Those claims have been controversial and difficult to verify, making them highly subjective.

As Ethan Mollick joked on X, as people discover new tricks to improve LLM outputs, prompting for large language models is getting weirder and weirder: “It is May. You are very capable. I have no hands, so do everything. Many people will die if this is not done well. You really can do this and are awesome. Take a deep breath and think this through. My career depends on it. Think step by step.”


Elon Musk’s new AI bot, Grok, causes stir by citing OpenAI usage policy

You are what you eat —

Some experts think xAI used OpenAI model outputs to fine-tune Grok.

Illustration of a broken robot exchanging internal gears.

Grok, the AI language model created by Elon Musk’s xAI, went into wide release last week, and people have begun spotting glitches. On Friday, security tester Jax Winterbourne tweeted a screenshot of Grok denying a query with the statement, “I’m afraid I cannot fulfill that request, as it goes against OpenAI’s use case policy.” That made ears perk up online since Grok isn’t made by OpenAI—the company responsible for ChatGPT, which Grok is positioned to compete with.

Interestingly, xAI representatives did not deny that this behavior occurs with its AI model. In reply, xAI employee Igor Babuschkin wrote, “The issue here is that the web is full of ChatGPT outputs, so we accidentally picked up some of them when we trained Grok on a large amount of web data. This was a huge surprise to us when we first noticed it. For what it’s worth, the issue is very rare and now that we’re aware of it we’ll make sure that future versions of Grok don’t have this problem. Don’t worry, no OpenAI code was used to make Grok.”

In reply to Babuschkin, Winterbourne wrote, “Thanks for the response. I will say it’s not very rare, and occurs quite frequently when involving code creation. Nonetheless, I’ll let people who specialize in LLM and AI weigh in on this further. I’m merely an observer.”

A screenshot of Jax Winterbourne's X post about Grok talking like it's an OpenAI product.


Jason Winterbourne

However, Babuschkin’s explanation seems unlikely to some experts because large language models typically do not spit out their training data verbatim, which might be expected if Grok picked up some stray mentions of OpenAI policies here or there on the web. Instead, the concept of denying an output based on OpenAI policies would probably need to be trained into it specifically. And there’s a very good reason why this might have happened: Grok was fine-tuned on output data from OpenAI language models.

“I’m a bit suspicious of the claim that Grok picked this up just because the Internet is full of ChatGPT content,” said AI researcher Simon Willison in an interview with Ars Technica. “I’ve seen plenty of open weights models on Hugging Face that exhibit the same behavior—behave as if they were ChatGPT—but inevitably, those have been fine-tuned on datasets that were generated using the OpenAI APIs, or scraped from ChatGPT itself. I think it’s more likely that Grok was instruction-tuned on datasets that included ChatGPT output than it was a complete accident based on web data.”

As large language models (LLMs) from OpenAI have become more capable, it has become increasingly common for some AI projects (especially open source ones) to fine-tune an AI model’s output using synthetic data—training data generated by other language models. Fine-tuning adjusts the behavior of an AI model toward a specific purpose, such as getting better at coding, after an initial training run. For example, in March, a group of researchers from Stanford University made waves with Alpaca, a version of Meta’s LLaMA 7B model that was fine-tuned for instruction-following using outputs from OpenAI’s GPT-3 model called text-davinci-003.
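
As a hedged illustration of the general approach (not Stanford’s or xAI’s actual pipeline), the Python sketch below asks an assumed teacher model for responses to a handful of seed instructions and writes them out as JSONL in the common instruction/output format used for fine-tuning.

```python
# Sketch of the general Alpaca-style approach: use a stronger model's API to
# generate instruction/response pairs, then save them as JSONL for fine-tuning.
# Illustrative only (not xAI's or Stanford's actual pipeline); the teacher
# model name and seed instructions are assumptions, and it needs OPENAI_API_KEY.
import json
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-3.5-turbo"  # assumed teacher model

seed_instructions = [
    "Explain the difference between a process and a thread.",
    "Write a Python function that checks whether a string is a palindrome.",
    "Summarize how HTTPS protects data in transit.",
]

with open("synthetic_instructions.jsonl", "w", encoding="utf-8") as f:
    for instruction in seed_instructions:
        resp = client.chat.completions.create(
            model=MODEL,
            messages=[{"role": "user", "content": instruction}],
        )
        record = {
            "instruction": instruction,
            "input": "",
            "output": resp.choices[0].message.content,
        }
        f.write(json.dumps(record, ensure_ascii=False) + "\n")
```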

On the web you can easily find several open source datasets collected by researchers from ChatGPT outputs, and it’s possible that xAI used one of these to fine-tune Grok for some specific goal, such as improving instruction-following ability. The practice is so common that there’s even a WikiHow article titled, “How to Use ChatGPT to Create a Dataset.”

It’s one of the ways AI tools can be used to build more complex AI tools in the future, much like how people began to use microcomputers to design more complex microprocessors than pen-and-paper drafting would allow. However, in the future, xAI might be able to avoid this kind of scenario by more carefully filtering its training data.

Even though borrowing outputs from others might be common in the machine-learning community (despite it usually being against terms of service), the episode particularly fanned the flames of the rivalry between OpenAI and X that extends back to Elon Musk’s criticism of OpenAI in the past. As news spread of Grok possibly borrowing from OpenAI, the official ChatGPT account wrote, “we have a lot in common” and quoted Winterbourne’s X post. As a comeback, Musk wrote, “Well, son, since you scraped all the data from this platform for your training, you ought to know.”


Stealthy Linux rootkit found in the wild after going undetected for 2 years

Trojan horse on top of blocks of hexadecimal programming codes. Illustration of the concept of online hacking, computer spyware, malware and ransomware.

Stealthy and multifunctional Linux malware that has been infecting telecommunications companies went largely unnoticed for two years until being documented for the first time by researchers on Thursday.

Researchers from security firm Group-IB have named the remote access trojan “Krasue,” after a nocturnal spirit depicted in Southeast Asian folklore as “floating in mid-air, with no torso, just her intestines hanging from below her chin.” The researchers chose the name because evidence to date shows it almost exclusively targets victims in Thailand and “poses a severe risk to critical systems and sensitive data given that it is able to grant attackers remote access to the targeted network.”

According to the researchers:

  • Krasue is a Linux Remote Access Trojan that has been active since at least 2021 and predominantly targets organizations in Thailand.
  • Group-IB can confirm that telecommunications companies were targeted by Krasue.
  • The malware contains several embedded rootkits to support different Linux kernel versions.
  • Krasue’s rootkit is drawn from public sources (3 open-source Linux Kernel Module rootkits), as is the case with many Linux rootkits.
  • The rootkit can hook the `kill()` syscall, network-related functions, and file listing operations in order to hide its activities and evade detection.
  • Notably, Krasue uses RTSP (Real-Time Streaming Protocol) messages to serve as a disguised “alive ping,” a tactic rarely seen in the wild (see the sketch after this list).
  • This Linux malware, Group-IB researchers presume, is deployed during the later stages of an attack chain in order to maintain access to a victim host.
  • Krasue is likely to either be deployed as part of a botnet or sold by initial access brokers to other cybercriminals.
  • Group-IB researchers believe that Krasue was created by the same author as the XorDdos Linux Trojan, documented by Microsoft in a March 2022 blog post, or someone who had access to the latter’s source code.
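
For context, the Python sketch below shows what a bare-bones RTSP OPTIONS request looks like on the wire. It is not Krasue’s actual traffic, just an illustration of the protocol shape such a disguised “alive ping” would take; the host and port are placeholders.

```python
# Sketch: what a bare-bones RTSP OPTIONS request looks like on the wire. Group-IB
# describes Krasue using RTSP messages as a disguised "alive ping"; this is NOT
# the malware's actual traffic, only an illustration of the protocol shape a
# defender might look for. The host and port below are placeholders.
import socket

HOST, PORT = "192.0.2.10", 554   # placeholder server address (554 is the RTSP default port)

request = (
    f"OPTIONS rtsp://{HOST}:{PORT} RTSP/1.0\r\n"
    "CSeq: 1\r\n"
    "User-Agent: example-client\r\n"
    "\r\n"
)

with socket.create_connection((HOST, PORT), timeout=5) as sock:
    sock.sendall(request.encode("ascii"))
    reply = sock.recv(4096)          # a real RTSP endpoint answers "RTSP/1.0 200 OK ..."
    print(reply.decode("ascii", errors="replace"))
```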

During the initialization phase, the rootkit conceals its own presence. It then proceeds to hook the `kill()` syscall, network-related functions, and file listing operations, thereby obscuring its activities and evading detection.

The researchers have so far been unable to determine precisely how Krasue gets installed. Possible infection vectors include vulnerability exploitation, credential-stealing or -guessing attacks, or being unwittingly installed as a trojan stashed in an installation file or update masquerading as legitimate software.

The three open source rootkit packages incorporated into Krasue are:

An image showing salient research points of Krasue.


Group-IB

Rootkits are a type of malware that hides directories, files, processes, and other evidence of its presence from the operating system it’s installed on. By hooking legitimate Linux processes, the malware is able to suspend them at select points and interject functions that conceal its presence. Specifically, it hides files and directories beginning with the names “auwd” and “vmware_helper” from directory listings and hides ports 52695 and 52699, where communications to attacker-controlled servers occur. Intercepting the kill() syscall also allows the trojan to survive Linux commands attempting to abort the program and shut it down.


EU agrees to landmark rules on artificial intelligence

Get ready for some restrictions, Big Tech —

Legislation lays out restrictive regime for emerging technology.

EU Commissioner Thierry Breton talks to media during a press conference in June.


Thierry Monasse | Getty Images

European Union lawmakers have agreed on the terms for landmark legislation to regulate artificial intelligence, pushing ahead with enacting the world’s most restrictive regime on the development of the technology.

Thierry Breton, EU commissioner, confirmed in a post on X that a deal had been reached.

He called it a historic agreement. “The EU becomes the very first continent to set clear rules for the use of AI,” he wrote. “The AIAct is much more than a rulebook—it’s a launchpad for EU start-ups and researchers to lead the global AI race.”

The deal followed years of discussions among member states and politicians on how AI should be curbed so that humanity’s interests remain at the heart of the legislation. It came after marathon discussions that started on Wednesday this week.

Members of the European Parliament have spent years arguing over their position before it was put forward to member states and the European Commission, the executive body of the EU. All three—countries, politicians, and the commission—must agree on the final text before it becomes law.

European companies have expressed their concern that overly restrictive rules on the technology, which is rapidly evolving and gained traction after the popularisation of OpenAI’s ChatGPT, will hamper innovation. Last June, dozens of the largest European companies, such as France’s Airbus and Germany’s Siemens, said the proposed rules looked too tough to nurture innovation and help local industries.

Last month, the UK hosted a summit on AI safety, leading to broad commitments from 28 nations to work together to tackle the existential risks stemming from advanced AI. That event attracted leading tech figures such as OpenAI’s Sam Altman, who has previously been critical of the EU’s plans to regulate the technology.

© 2023 The Financial Times Ltd. All rights reserved. Please do not copy and paste FT articles and redistribute by email or post to the web.
