Biz & IT


SoftBank plans to cancel out angry customer voices using AI

our fake future —

Real-time voice modification tech seeks to reduce stress in call center staff.


Japanese telecommunications giant SoftBank recently announced that it has been developing “emotion-canceling” technology powered by AI that will alter the voices of angry customers to sound calmer during phone calls with customer service representatives. The project aims to reduce the psychological burden on operators suffering from harassment and has been in development for three years. SoftBank plans to launch it by March 2026, but the idea is receiving mixed reactions online.

According to a report from the Japanese news site The Asahi Shimbun, SoftBank’s project relies on an AI model to alter the tone and pitch of a customer’s voice in real-time during a phone call. SoftBank’s developers, led by employee Toshiyuki Nakatani, trained the system using a dataset of over 10,000 voice samples, which were performed by 10 Japanese actors expressing more than 100 phrases with various emotions, including yelling and accusatory tones.

Voice cloning and synthesis technology has made massive strides in the past three years. We’ve previously covered technology from Microsoft that can clone a voice with a three-second audio sample and audio-processing technology from Adobe that cleans up audio by re-synthesizing a person’s voice, so SoftBank’s technology is well within the realm of plausibility.

By analyzing the voice samples, SoftBank’s AI model has reportedly learned to recognize and modify the vocal characteristics associated with anger and hostility. When a customer speaks to a call center operator, the model processes the incoming audio and adjusts the pitch and inflection of the customer’s voice to make it sound calmer and less threatening.

For example, a high-pitched, resonant voice may be lowered in tone, while a deep male voice may be raised to a higher pitch. The technology reportedly does not alter the content or wording of the customer’s speech, and it retains a slight element of audible anger to ensure that the operator can still gauge the customer’s emotional state. The AI model also monitors the length and content of the conversation, sending a warning message if it determines that the interaction is too long or abusive.
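SoftBank hasn’t released implementation details, but the core operation (shifting a voice’s pitch without touching its words) is well-trodden ground in audio processing. Here is a minimal offline sketch in Python using the librosa library; the file names and the three-semitone shift are our own illustrative choices, and a real-time system like SoftBank’s would be far more sophisticated.

# Minimal offline sketch: lower a recording's pitch without changing its words.
# This is an illustration, not SoftBank's system; "angry_call.wav" and the
# -3 semitone shift are arbitrary choices for the example.
import librosa
import soundfile as sf

audio, sr = librosa.load("angry_call.wav", sr=None)  # load at native sample rate
calmer = librosa.effects.pitch_shift(audio, sr=sr, n_steps=-3)  # shift down 3 semitones
sf.write("calmer_call.wav", calmer, sr)  # same words, lower and less piercing tone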

The tech has been developed through SoftBank’s in-house program called “SoftBank Innoventure” in conjunction with The Institute for AI and Beyond, which is a joint AI research institute established by The University of Tokyo.

Harassment a persistent problem

According to SoftBank, Japan’s service sector is grappling with the issue of “kasu-hara,” or customer harassment, where workers face aggressive behavior or unreasonable requests from customers. In response, the Japanese government and businesses are reportedly exploring ways to protect employees from the abuse.

The problem isn’t unique to Japan. In a Reddit thread on SoftBank’s AI plans, call center operators from other regions related many stories about the stress of dealing with customer harassment. “I’ve worked in a call center for a long time. People need to realize that screaming at call center agents will get you nowhere,” wrote one person.

A 2021 ProPublica report tells horror stories from call center operators who are trained not to hang up no matter how abusive or emotionally degrading a call gets. The publication quoted Skype customer service contractor Christine Stewart as saying, “One person called me the C-word. I’d call my supervisor. They’d say, ‘Calm them down.’ … They’d always try to push me to stay on the call and calm the customer down myself. I wasn’t getting paid enough to do that. When you have a customer sitting there and saying you’re worthless… you’re supposed to ‘de-escalate.'”

But verbally de-escalating an angry customer is difficult, according to Reddit poster BenCelotil, who wrote, “As someone who has worked in several call centers, let me just point out that there is no way faster to escalate a call than to try and calm the person down. If the angry person on the other end of the call thinks you’re just trying to placate and push them off somewhere else, they’re only getting more pissed.”

Ignoring reality using AI

Harassment of call center workers is a very real problem, but with AI now being pitched as a solution, some people wonder whether it’s a good idea to filter emotional reality on demand through voice synthesis. As some social media commenters note, the technology may be treating the symptom rather than the root cause of the anger.

“This is like the worst possible solution to the problem,” wrote one Redditor in the thread mentioned above. “Reminds me of when all the workers at Apple’s China factory started jumping out of windows due to working conditions, so the ‘solution’ was to put nets around the building.”

SoftBank expects to introduce its emotion-canceling solution within fiscal year 2025, which ends on March 31, 2026. By reducing the psychological burden on call center operators, SoftBank says it hopes to create a safer work environment that enables employees to provide even better services to customers.

Even so, ignoring customer anger could backfire in the long run when the anger is sometimes a legitimate response to poor business practices. As one Redditor wrote, “If you have so many angry customers that it is affecting the mental health of your call center operators, then maybe address the reasons you have so many irate customers instead of just pretending that they’re not angry.”


High-severity vulnerabilities affect a wide range of Asus router models

IT’S PATCH TIME ONCE AGAIN —

Many models receive patches; others will need to be replaced.


Hardware manufacturer Asus has released updates patching multiple critical vulnerabilities that allow hackers to remotely take control of a range of router models with no authentication or interaction required of end users.

The most critical vulnerability, tracked as CVE-2024-3080, is an authentication bypass flaw that allows remote attackers to log into a device without authentication. The vulnerability, according to the Taiwan Computer Emergency Response Team / Coordination Center (TWCERT/CC), carries a severity rating of 9.8 out of 10. Asus said the vulnerability affects the following routers:

A favorite haven for hackers

A second vulnerability tracked as CVE-2024-3079 affects the same router models. It stems from a buffer overflow flaw and allows remote hackers who have already obtained administrative access to an affected router to execute commands.

TWCERT/CC is warning of a third vulnerability affecting various Asus router models. It’s tracked as CVE-2024-3912 and can allow remote hackers to execute commands with no user authentication required. The vulnerability, carrying a severity rating of 9.8, affects:

Security patches, which have been available since January, are available for those models at the links provided in the table above. CVE-2024-3912 also affects Asus router models that are no longer supported by the manufacturer. Those models include:

  • DSL-N10_C1
  • DSL-N10_D1
  • DSL-N10P_C1
  • DSL-N12E_C1
  • DSL-N16P
  • DSL-N16U
  • DSL-AC52
  • DSL-AC55

TWCERT/CC advises owners of these devices to replace them.

Asus has advised all router owners to regularly check their devices to ensure they’re running the latest available firmware. The company also recommended that users set separate passwords for the wireless network and the router-administration page. Additionally, passwords should be strong, meaning 11 or more characters that are unique and randomly generated. Asus also recommended users disable any services that can be reached from the Internet, including remote access from the WAN, port forwarding, DDNS, VPN server, DMZ, and port trigger. The company provided FAQs here and here.
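Meeting the “11 or more characters, unique and randomly generated” recommendation takes a few lines with Python’s standard secrets module; the 16-character length here is our choice, comfortably above Asus’s minimum.

# Generate a strong, random password per Asus's guidance (11+ characters,
# randomly generated). The length of 16 is an arbitrary choice above that bar.
import secrets
import string

alphabet = string.ascii_letters + string.digits + string.punctuation
password = "".join(secrets.choice(alphabet) for _ in range(16))
print(password)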

There are no known reports of any of the vulnerabilities being actively exploited in the wild. That said, routers have become a favorite haven for hackers, who often use them to hide the origins of their attacks. In recent months, both nation-state espionage groups and financially motivated threat actors have been found camping out in routers, sometimes simultaneously. Hackers backed by the Russian and Chinese governments regularly wage attacks on critical infrastructure from routers that are connected to IP addresses with reputations for trustworthiness. Most of the hijackings are made possible by exploiting unpatched vulnerabilities or weak passwords.


Proton is taking its privacy-first apps to a nonprofit foundation model

Proton going nonprofit —

Because of Swiss laws, there are no shareholders, and only one mission.


Proton, the secure-minded email and productivity suite, is becoming a nonprofit foundation, but it doesn’t want you to think about it in the way you think about other notable privacy and web foundations.

“We believe that if we want to bring about large-scale change, Proton can’t be billionaire-subsidized (like Signal), Google-subsidized (like Mozilla), government-subsidized (like Tor), donation-subsidized (like Wikipedia), or even speculation-subsidized (like the plethora of crypto “foundations”),” Proton CEO Andy Yen wrote in a blog post announcing the transition. “Instead, Proton must have a profitable and healthy business at its core.”

The announcement comes exactly 10 years to the day after a crowdfunding campaign saw 10,000 people give more than $500,000 to launch Proton Mail. To make it happen, Yen, along with co-founder Jason Stockman and first employee Dingchao Lu, endowed the Proton Foundation with some of their shares. The Proton Foundation is now the primary shareholder of the business Proton, which Yen states will “make irrevocable our wish that Proton remains in perpetuity an organization that places people ahead of profits.” Among other members of the Foundation’s board is Sir Tim Berners-Lee, inventor of HTML, HTTP, and almost everything else about the web.

Of particular importance is where Proton and the Proton Foundation are located: Switzerland. As Yen noted, Swiss foundations do not have shareholders and are instead obligated to act “in accordance with the purpose for which they were established.” While the for-profit entity Proton AG can still do things like offer stock options to recruits and even raise its own capital on private markets, the Foundation serves as a backstop against moving too far from Proton’s founding mission, Yen wrote.

There’s a lot more Proton to protect these days

Proton has gone from a single email offering to a wide range of services, many of which specifically target the often invasive offerings of other companies (read, mostly: Google). You can now take your cloud files, passwords, and calendars over to Proton and use its VPN services, most of which offer end-to-end encryption and open source core software hosted in Switzerland, with its notably strong privacy laws.

None of that guarantees that a Swiss court can’t compel some forms of compliance from Proton, as happened in 2021. But compared to most service providers, Proton offers a far clearer and easier-to-grasp privacy model: It can’t see your stuff, and it only makes money from subscriptions.

Of course, foundations are only as strong as the people who guide them, and seemingly firewalled profit/non-profit models can be changed. Time will tell if Proton’s new model can keep up with changing markets—and people.


Ransomware attackers quickly weaponize PHP vulnerability with 9.8 severity rating

FILES LOCKED —

TellYouThePass group opportunistically infects servers that have yet to update.


Ransomware criminals have quickly weaponized an easy-to-exploit vulnerability in the PHP programming language that executes malicious code on web servers, security researchers said.

As of Thursday, Internet scans performed by security firm Censys had detected 1,000 servers infected by a ransomware strain known as TellYouThePass, down from 1,800 detected on Monday. The servers, primarily located in China, no longer display their usual content; instead, many list the site’s file directory, which shows all files have been given a .locked extension, indicating they have been encrypted. An accompanying ransom note demands roughly $6,500 in exchange for the decryption key.

The output of PHP servers infected by TellYouThePass ransomware. (Credit: Censys)

The accompanying ransom note. (Credit: Censys)

When opportunity knocks

The vulnerability, tracked as CVE-2024-4577 and carrying a severity rating of 9.8 out of 10, stems from errors in the way PHP converts Unicode characters into ASCII. A feature built into Windows known as Best Fit allows attackers to use a technique known as argument injection to convert user-supplied input into characters that pass malicious commands to the main PHP application. Exploits allow attackers to bypass CVE-2012-1823, a critical code execution vulnerability patched in PHP in 2012.

CVE-2024-4577 affects PHP only when it runs in a mode known as CGI, in which a web server parses HTTP requests and passes them to a PHP script for processing. Even when PHP isn’t set to CGI mode, however, the vulnerability may still be exploitable when PHP executables such as php.exe and php-cgi.exe are in directories that are accessible by the web server. This configuration is extremely rare, with the exception of the XAMPP platform, which uses it by default. An additional requirement appears to be that the Windows locale—used to personalize the OS to the local language of the user—must be set to either Chinese or Japanese.

The critical vulnerability was published on June 6, along with a security patch. Within 24 hours, threat actors were exploiting it to install TellYouThePass, researchers from security firm Imperva reported Monday. The exploits executed code that used the mshta.exe Windows binary to run an HTML application file hosted on an attacker-controlled server. Use of the binary indicated an approach known as living off the land, in which attackers use native OS functionalities and tools in an attempt to blend in with normal, non-malicious activity.

In a post published Friday, Censys researchers said that exploitation by the TellYouThePass gang started on June 7 and mirrored past incidents in which the group opportunistically mass scans the Internet for vulnerable systems in the wake of a high-profile vulnerability and indiscriminately targets any accessible server. The vast majority of the infected servers have IP addresses geolocated to China, Taiwan, Hong Kong, or Japan, likely stemming from the fact that Chinese and Japanese locales are the only ones confirmed to be vulnerable, Censys researchers said in an email.

Since then, the number of infected sites—detected by observing the public-facing HTTP response serving an open directory listing showing the server’s filesystem, along with the distinctive file-naming convention of the ransom note—has fluctuated from a low of 670 on June 8 to a high of 1,800 on Monday.
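Censys hasn’t published its exact fingerprint, but the heuristic described above, an open directory listing full of .locked files, is easy to approximate. A rough Python sketch using the requests library (our approximation, not Censys’s query; the host is hypothetical):

# Rough approximation of the detection heuristic described above: fetch a
# host's root page and look for an open directory listing of ".locked" files.
# This is our sketch, not Censys's actual fingerprint.
import requests

def looks_infected(host: str) -> bool:
    try:
        resp = requests.get(f"http://{host}/", timeout=5)
    except requests.RequestException:
        return False
    # "Index of /" marks an open directory listing; ".locked" marks encrypted files.
    return "Index of /" in resp.text and ".locked" in resp.text

print(looks_infected("example.com"))  # hypothetical host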

Image tracking day-to-day compromises of PHP servers and their geolocation. (Credit: Censys)

Censys researchers said in an email that they’re not entirely sure what’s causing the changing numbers.

“From our perspective, many of the compromised hosts appear to remain online, but the port running the PHP-CGI or XAMPP service stops responding—hence the drop in detected infections,” they wrote. “Another point to consider is that there are currently no observed ransom payments to the only Bitcoin address listed in the ransom notes (source). Based on these facts, our intuition is that this is likely the result of those services being decommissioned or going offline in some other manner.”

XAMPP used in production, really?

The researchers went on to say that roughly half of the compromises observed show clear signs of running XAMPP, but that estimate is likely an undercount since not all services explicitly show what software they use.

“Given that XAMPP is vulnerable by default, it’s reasonable to guess that most of the infected systems are running XAMPP,” the researchers said. This Censys query lists the infections that are explicitly affecting the platform. The researchers aren’t aware of any specific platforms other than XAMPP that have been compromised.

The discovery of compromised XAMPP servers took Will Dormann, a senior vulnerability analyst at security firm Analygence, by surprise because XAMPP maintainers explicitly say their software isn’t suitable for production systems.

“People choosing to run not-for-production software have to deal with the consequences of that decision,” he wrote in an online interview.

While XAMPP is the only platform confirmed to be vulnerable, people running PHP on any Windows system should install the update as soon as possible. The Imperva post linked above provides IP addresses, file names, and file hashes that administrators can use to determine whether they have been targeted in the attacks.
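Comparing a server’s files against published hashes is routine. A minimal Python sketch (the hash and web-root path below are placeholders; substitute the real indicators from the Imperva post):

# Compare files against published indicator-of-compromise (IoC) hashes.
# The hash below is a placeholder, not a real indicator; take the actual
# values from the Imperva report.
import hashlib
from pathlib import Path

KNOWN_BAD_SHA256 = {
    "0000000000000000000000000000000000000000000000000000000000000000",  # placeholder
}

def sha256_of(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

for p in Path("C:/webroot").rglob("*"):  # hypothetical web root
    if p.is_file() and sha256_of(p) in KNOWN_BAD_SHA256:
        print("possible IoC match:", p)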


Retired engineer discovers 55-year-old bug in Lunar Lander computer game code

The world’s oldest feature —

A physics simulation flaw in the text-based 1969 computer game went unnoticed until today.

Illustration of the Apollo lunar lander Eagle over the Moon.

On Friday, a retired software engineer named Martin C. Martin announced that he recently discovered a bug in the original Lunar Lander computer game’s physics code while tinkering with the software. Created by a 17-year-old high school student named Jim Storer in 1969, this primordial game rendered the action only as text status updates on a teletype, but it set the stage for future versions to come.

The legendary game—which Storer developed on a PDP-8 minicomputer in a programming language called FOCAL just months after Neil Armstrong and Buzz Aldrin made their historic moonwalks—allows players to control a lunar module’s descent onto the Moon’s surface. Players must carefully manage their fuel usage to achieve a gentle landing, making critical decisions every ten seconds to burn the right amount of fuel.

In 2009, just short of the 40th anniversary of the first Moon landing, I set out to find the author of the original Lunar Lander game, which was then primarily known as a graphical game, thanks to the graphical version from 1974 and a 1979 Atari arcade title. When I discovered that Storer created the oldest known version as a teletype game, I interviewed him and wrote up a history of the game. Storer later released the source code to the original game, written in FOCAL, on his website.

A scan of printed teletype output from the original Lunar Lander game, provided by Jim Storer. (Credit: Jim Storer)

Fast forward to 2024, when Martin—an AI expert, game developer, and former postdoctoral associate at MIT—stumbled upon a bug in Storer’s high school code while exploring what he believed was the optimal strategy for landing the module with maximum fuel efficiency—a technique known among Kerbal Space Program enthusiasts as the “suicide burn.” This method involves falling freely to build up speed and then igniting the engines at the last possible moment to slow down just enough to touch down safely. He also tried another approach—a more gentle landing.

“I recently explored the optimal fuel burn schedule to land as gently as possible and with maximum remaining fuel,” Martin wrote on his blog. “Surprisingly, the theoretical best strategy didn’t work. The game falsely thinks the lander doesn’t touch down on the surface when in fact it does. Digging in, I was amazed by the sophisticated physics and numerical computing in the game. Eventually I found a bug: a missing ‘divide by two’ that had seemingly gone unnoticed for nearly 55 years.”

A matter of division

Diagram of launch escape system on top of the Apollo capsule. (Credit: NASA)

Despite applying what should have been a textbook landing strategy, Martin found that the game inconsistently reported that the lander had missed the Moon’s surface entirely. Intrigued by the anomaly, Martin dug into the game’s source code and discovered that the landing algorithm was based on highly sophisticated physics for its time, including the Tsiolkovsky rocket equation and a Taylor series expansion.

As mentioned in the quote above, the root of the problem was a simple computational oversight—a missing division by two in the formula used to calculate the lander’s trajectory. This seemingly minor error had big consequences, causing the simulation to underestimate the time until the lander reached its lowest trajectory point and miscalculate the landing.
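To see why a missing “divide by two” matters, recall that displacement under constant acceleration is s = vt + ½at². A toy Python illustration of this class of bug (our simplification, not Storer’s FOCAL code) shows how dropping the ½ can make a descending lander appear never to reach the surface:

# Toy illustration (not Storer's code): with constant acceleration,
# displacement is s = v*t + 0.5*a*t**2. Dropping the 0.5 overstates the
# upward contribution of the burn, so the simulated lander seems never
# to descend to its true lowest point.
v = -5.0  # downward velocity, m/s (hypothetical numbers)
a = 2.0   # net upward acceleration from the burn, m/s^2
t = 2.5   # time of the trajectory's lowest point, when v + a*t = 0

correct = v * t + 0.5 * a * t**2  # -6.25 m: the lander descends 6.25 m
buggy = v * t + a * t**2          #  0.0 m: the lander appears not to descend at all
print(correct, buggy)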

Despite the bug, Martin was impressed that Storer, then a high school senior, managed to incorporate advanced mathematical concepts into his game, a feat that remains impressive even by today’s standards. Martin reached out to Storer himself, and the Lunar Lander author told Martin that his father was a physicist who helped him derive the equations used in the game simulation.

While people played and enjoyed Storer’s game for years with the bug in place, it goes to show that realism isn’t always the most important part of a compelling interactive experience. And thankfully for Aldrin and Armstrong, the real Apollo lunar landing experience didn’t suffer from the same issue.

You can read more about Martin’s exciting debugging adventure over on his blog.


Report: Apple isn’t paying OpenAI for ChatGPT integration into OSes

in the pocket —

Apple thinks pushing OpenAI’s brand to hundreds of millions is worth more than money.


On Monday, Apple announced it would be integrating OpenAI’s ChatGPT AI assistant into upcoming versions of its iPhone, iPad, and Mac operating systems. It paves the way for future third-party AI model integrations, but given Google’s multi-billion-dollar deal with Apple for preferential web search, the OpenAI announcement inspired speculation about who is paying whom. According to a Bloomberg report published Wednesday, Apple considers ChatGPT’s placement on its devices as compensation enough.

“Apple isn’t paying OpenAI as part of the partnership,” writes Bloomberg reporter Mark Gurman, citing people familiar with the matter who wish to remain anonymous. “Instead, Apple believes pushing OpenAI’s brand and technology to hundreds of millions of its devices is of equal or greater value than monetary payments.”

The Bloomberg report states that neither company expects the agreement to generate meaningful revenue in the short term, and in fact, the partnership could burn extra money for OpenAI, because it pays Microsoft to host ChatGPT’s capabilities on its Azure cloud. However, OpenAI could benefit by converting free users to paid subscriptions, and Apple potentially benefits by providing easy, built-in access to ChatGPT during a time when its own in-house LLMs are still catching up.

And there’s another angle at play. Currently, OpenAI offers subscriptions (ChatGPT Plus, Enterprise, Team) that unlock additional features. If users subscribe to OpenAI through the ChatGPT app on an Apple device, the process will reportedly use Apple’s payment platform, which may give Apple a significant cut of the revenue. According to the report, Apple hopes to negotiate additional revenue-sharing deals with AI vendors in the future.

Why OpenAI

The rise of ChatGPT in the public eye over the past 18 months has made OpenAI a power player in the tech industry, allowing it to strike deals with publishers for AI training content—and ensure continued support from Microsoft in the form of investments that trade vital funding and compute for access to OpenAI’s large language model (LLM) technology like GPT-4.

Still, Apple’s choice of ChatGPT as its first external AI integration has led to widespread misunderstanding, especially since Apple buried the lede about its own in-house LLM technology that powers its new “Apple Intelligence” platform.

On Apple’s part, CEO Tim Cook told The Washington Post that it chose OpenAI as its first third-party AI partner because he thinks the company controls the leading LLM technology at the moment: “I think they’re a pioneer in the area, and today they have the best model,” he said. “We’re integrating with other people as well. But they’re first, and I think today it’s because they’re best.”

Apple’s choice also brings risk. OpenAI’s record isn’t spotless, racking up a string of public controversies over the past month that include an accusation from actress Scarlett Johansson that the company intentionally imitated her voice, resignations from a key scientist and safety personnel, the revelation of a restrictive NDA for ex-employees that prevented public criticism, and accusations against OpenAI CEO Sam Altman of “psychological abuse” related by a former member of the OpenAI board.

Meanwhile, critics concerned about privacy issues related to gathering data for training AI models—including OpenAI foe Elon Musk, who took to X on Monday to spread misconceptions about how the ChatGPT integration might work—also worried that the Apple-OpenAI deal might expose personal data to the AI company, although both companies strongly deny that will be the case.

Looking ahead, Apple’s deal with OpenAI is not exclusive, and the company is already in talks to offer Google’s Gemini chatbot as an additional option later this year. Apple has also reportedly held talks with Anthropic (maker of Claude 3) as a potential chatbot partner, signaling its intention to provide users with a range of AI services, much like how the company offers various search engine options in Safari.


Turkish student creates custom AI device for cheating university exam, gets arrested

spy hard —

Elaborate scheme involved hidden camera and an earpiece to hear answers.

A photo illustration of what a shirt-button camera could look like. (Credit: Aurich Lawson | Getty Images)

On Saturday, Turkish police arrested and detained a prospective university student who is accused of developing an elaborate scheme to use AI and hidden devices to help him cheat on an important entrance exam, report Reuters and The Daily Mail.

The unnamed student is reportedly jailed pending trial after the incident, which took place in the southwestern province of Isparta, where the student was caught behaving suspiciously during the TYT. The TYT is a nationally held university aptitude exam that determines a person’s eligibility to attend a university in Turkey—and cheating on the high-stakes exam is a serious offense.

According to police reports, the student used a camera disguised as a shirt button, connected to AI software via a “router” (possibly a mistranslation of a cellular modem) hidden in the sole of his shoe. The system worked by scanning the exam questions using the button camera, which then relayed the information to an unnamed AI model. The software generated the correct answers and recited them to the student through an earpiece.

A video released by the Isparta police demonstrated how the cheating system functioned. In the video, a police officer scans a question, and the AI software provides the correct answer through the earpiece.

In addition to the student, Turkish police detained another individual for assisting the student during the exam. The police discovered a mobile phone that could allegedly relay spoken sounds to the other person, allowing for two-way communication.

A history of calling on computers for help

The recent arrest recalls other attempts to cheat using wireless communications and computers, such as the famous case of the Eudaemons in the late 1970s. The Eudaemons were a group of physics graduate students from the University of California, Santa Cruz, who developed a wearable computer device designed to predict the outcome of roulette spins in casinos.

The Eudaemons’ device consisted of a shoe with a computer built into it, connected to a timing device operated by the wearer’s big toe. The wearer would click the timer when the ball and the spinning roulette wheel were in a specific position, and the computer would calculate the most likely section of the wheel where the ball would land. This prediction would be transmitted to an earpiece worn by another team member, who would quickly place bets on the predicted section.
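The underlying math is plain circular motion: time one revolution to get the wheel’s period, then extrapolate which section will be under the ball when it drops. A heavily simplified Python sketch (all numbers are hypothetical, and this omits the ball-deceleration modeling a real predictor needs):

# Heavily simplified version of the Eudaemons' idea: time the wheel with two
# clicks, assume a fixed time until the ball drops, and predict which of 8
# sections will sit under the drop point. All numbers are hypothetical.
WHEEL_SECTIONS = 8

def predict_section(t_click1: float, t_click2: float, t_drop: float) -> int:
    period = t_click2 - t_click1           # seconds per wheel revolution
    revs = (t_drop - t_click2) / period    # revolutions until the ball drops
    fraction = revs % 1.0                  # fraction of a revolution past the mark
    return int(fraction * WHEEL_SECTIONS)  # section index, 0 through 7

print(predict_section(0.0, 2.0, 7.5))  # wheel at 0.5 rev/s; ball drops at t=7.5 s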

While the Eudaemons’ plan didn’t involve a university exam, it shows that the urge to call upon remote computational powers greater than oneself is apparently timeless.


Ridiculed Stable Diffusion 3 release excels at AI-generated body horror

unstable diffusion —

Users react to mangled SD3 generations and ask, “Is this release supposed to be a joke?”

An AI-generated image created using Stable Diffusion 3 of a girl lying in the grass.

On Wednesday, Stability AI released weights for Stable Diffusion 3 Medium, an AI image-synthesis model that turns text prompts into AI-generated images. Its arrival has been ridiculed online, however, because it generates images of humans in a way that seems like a step backward from other state-of-the-art image-synthesis models like Midjourney or DALL-E 3. As a result, it can churn out wild anatomically incorrect visual abominations with ease.

A thread on Reddit, titled, “Is this release supposed to be a joke? [SD3-2B],” details the spectacular failures of SD3 Medium at rendering humans, especially human limbs like hands and feet. Another thread, titled, “Why is SD3 so bad at generating girls lying on the grass?” shows similar issues, but for entire human bodies.

Hands have traditionally been a challenge for AI image generators due to a lack of good examples in early training data sets, but more recently, several image-synthesis models seem to have overcome the issue. In that sense, SD3 appears to be a huge step backward for the image-synthesis enthusiasts who gather on Reddit—especially compared to recent Stability releases like SD XL Turbo in November.

“It wasn’t too long ago that StableDiffusion was competing with Midjourney, now it just looks like a joke in comparison. At least our datasets are safe and ethical!” wrote one Reddit user.

A gallery of AI-generated SD3 Medium images shared by Reddit users, including several attempts at a girl lying in the grass, images of mangled hands, and generations from the prompts “woman wearing a dress on the beach” and “photograph of a person napping in a living room.”

AI image fans are so far blaming Stable Diffusion 3’s anatomy failures on Stability’s insistence on filtering adult content (often called “NSFW” content) out of the SD3 training data that teaches the model how to generate images. “Believe it or not, heavily censoring a model also gets rid of human anatomy, so… that’s what happened,” wrote one Reddit user in the thread.

Basically, any time a user prompt homes in on a concept that isn’t represented well in the AI model’s training dataset, the image-synthesis model will confabulate its best interpretation of what the user is asking for. And sometimes that can be completely terrifying.

The release of Stable Diffusion 2.0 in 2022 suffered from similar problems in depicting humans well, and AI researchers soon discovered that censoring adult content that contains nudity can severely hamper an AI model’s ability to generate accurate human anatomy. At the time, Stability AI reversed course with SD 2.1 and SD XL, regaining some abilities lost by strongly filtering NSFW content.

Another issue that can occur during model pre-training is that the NSFW filter researchers use to remove adult images from the dataset is sometimes too picky, accidentally removing images that might not be offensive and depriving the model of depictions of humans in certain situations. “[SD3] works fine as long as there are no humans in the picture, I think their improved nsfw filter for filtering training data decided anything humanoid is nsfw,” wrote one Redditor on the topic.

Using a free online demo of SD3 on Hugging Face, we ran prompts and saw similar results to those being reported by others. For example, the prompt “a man showing his hands” returned an image of a man holding up two giant-sized backward hands, although each hand at least had five fingers.

A further set of SD3 Medium examples we generated, using the prompts “A woman lying on the beach,” “A man showing his hands,” “A woman showing her hands,” “a muscular barbarian with weapons beside a CRT television set, cinematic, 8K, studio lighting,” and “A cat in a car holding a can of beer.” (Credit: Stability AI)
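For readers who want to reproduce these experiments locally rather than through the online demo, the weights run under Hugging Face’s diffusers library. A minimal sketch, assuming you have accepted the model license on Hugging Face and have a CUDA GPU with enough memory:

# Minimal local SD3 Medium sketch using Hugging Face's diffusers library.
# Assumes the model license has been accepted on huggingface.co and a CUDA
# GPU with sufficient VRAM is available.
import torch
from diffusers import StableDiffusion3Pipeline

pipe = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers",
    torch_dtype=torch.float16,
).to("cuda")

image = pipe(prompt="a woman lying on the grass", num_inference_steps=28).images[0]
image.save("sd3_test.png")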

Stability first announced Stable Diffusion 3 in February, and the company plans to make it available in a variety of model sizes. Today’s release is for the “Medium” version, which is a 2 billion-parameter model. In addition to the weights being available on Hugging Face, they are also available for experimentation through the company’s Stability Platform. The weights are available for download and use for free under a non-commercial license only.

Soon after its February announcement, delays in releasing the SD3 model weights inspired rumors that the release was being held back due to technical issues or mismanagement. Stability AI as a company fell into a tailspin recently with the resignation of its founder and CEO, Emad Mostaque, in March and then a series of layoffs. Just prior to that, three key engineers—Robin Rombach, Andreas Blattmann, and Dominik Lorenz—left the company. And its troubles go back even farther, with news of the company’s dire financial position lingering since 2023.

To some Stable Diffusion fans, the failures with Stable Diffusion 3 Medium are a visual manifestation of the company’s mismanagement—and an obvious sign of things falling apart. Although the company has not filed for bankruptcy, some users made dark jokes about the possibility after seeing SD3 Medium:

“I guess now they can go bankrupt in a safe and ethically [sic] way, after all.”


One of the major sellers of detailed driver behavioral data is shutting down

Products driving products —

Selling “hard braking event” data seems less lucrative after public outcry.


One of the major data brokers engaged in the deeply alienating practice of selling detailed driver behavior data to insurers has shut down that business.

Verisk, which had collected data from cars made by General Motors, Honda, and Hyundai, has stopped receiving that data, according to The Record, a news site run by security firm Recorded Future. According to a statement provided to Privacy4Cars, and reported by The Record, Verisk will no longer provide a “Driving Behavior Data History Report” to insurers.

Skeptics have long assumed that car companies had at least some plan to monetize the rich telematics data regularly sent from cars back to their manufacturers. The New York Times’ Kashmir Hill reported a concrete example of this, in which drivers of GM vehicles were finding insurance more expensive, or impossible to acquire, because of the kinds of reports sent along the chain from GM to data brokers to insurers. Those who requested their collected data from the brokers found details of every trip they took: times, distances, and every “hard acceleration” or “hard braking event,” among other data points.
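Brokers don’t publish how they define these events, but a “hard braking event” is typically just a threshold applied to consecutive speed samples. A purely illustrative Python sketch (the threshold and one-second sampling interval are our guesses, not any broker’s actual definition):

# Illustrative only: one plausible way a "hard braking event" could be
# derived from periodic speed samples. The 8 mph/s threshold is a guess;
# brokers do not publish their exact definitions.
HARD_BRAKE_MPH_PER_S = 8.0

def count_hard_braking(speeds_mph: list[float], interval_s: float = 1.0) -> int:
    events = 0
    for prev, cur in zip(speeds_mph, speeds_mph[1:]):
        if (prev - cur) / interval_s >= HARD_BRAKE_MPH_PER_S:
            events += 1
    return events

print(count_hard_braking([45, 44, 35, 25, 24]))  # 2: two drops of 9-10 mph in 1 s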

While the data was purportedly coming from an opt-in “Smart Driver” program in GM cars, many customers reported having no memory of opting in to the program or believing that dealership salespeople activated it themselves or rushed them through the process. The Mozilla Foundation considers cars to be “the worst product category we have ever reviewed for privacy,” given the overly broad privacy policies owners must agree to, extensive data gathering, and general lack of safeguards or privacy guarantees available for US car buyers.

GM quickly announced a halt to data sharing in late March, days after the Times’ reporting sparked considerable outcry. GM had been sending data to both Verisk and LexisNexis Risk Solutions, the latter of which is not signaling any kind of retreat from the telematics pipeline. LexisNexis’ telematics page shows logos for carmakers Kia, Mitsubishi, and Subaru.

Ars contacted LexisNexis for comment and will update this post with new information.

Disclosure of GM’s stealthily authorized data sharing has sparked numerous lawsuits, investigations from California and Texas agencies, and interest from Congress and the Federal Trade Commission.


China state hackers infected 20,000 Fortinet VPNs, Dutch spy service says

DISCLOSURE FUBAR —

Critical code-execution flaw was under exploitation 2 months before company disclosed it.


Hackers working for the Chinese government gained access to more than 20,000 VPN appliances sold by Fortinet using a critical vulnerability that the company failed to disclose for two weeks after fixing it, Netherlands government officials said.

The vulnerability, tracked as CVE-2022-42475, is a heap-based buffer overflow that allows hackers to remotely execute malicious code. It carries a severity rating of 9.8 out of 10. A maker of network security software, Fortinet silently fixed the vulnerability on November 28, 2022, but failed to mention the threat until December 12 of that year, when the company said it became aware of an “instance where this vulnerability was exploited in the wild.” On January 11, 2023—more than six weeks after the vulnerability was fixed—Fortinet warned a threat actor was exploiting it to infect government and government-related organizations with advanced custom-made malware.

Enter CoatHanger

The Netherlands officials first reported in February that Chinese state hackers had exploited CVE-2022-42475 to install an advanced and stealthy backdoor tracked as CoatHanger on Fortigate appliances inside the Dutch Ministry of Defense. Once installed, the never-before-seen malware, specifically designed for the underlying FortiOS operating system, was able to permanently reside on devices even when rebooted or receiving a firmware update. CoatHanger could also escape traditional detection measures, the officials warned. The damage resulting from the breach was limited, however, because infections were contained inside a segment reserved for non-classified uses.

On Monday, officials with the Military Intelligence and Security Service (MIVD) and the General Intelligence and Security Service in the Netherlands said that to date, Chinese state hackers have used the critical vulnerability to infect more than 20,000 FortiGate VPN appliances sold by Fortinet. Targets include dozens of Western government agencies, international organizations, and companies within the defense industry.

“Since then, the MIVD has conducted further investigation and has shown that the Chinese cyber espionage campaign appears to be much more extensive than previously known,” Netherlands officials with the National Cyber Security Center wrote. “The NCSC therefore calls for extra attention to this campaign and the abuse of vulnerabilities in edge devices.”

Monday’s report said that exploitation of the vulnerability started two months before Fortinet first disclosed it and that 14,000 servers were backdoored during this zero-day period. The officials warned that the Chinese threat group likely still has access to many victims because CoatHanger is so hard to detect and remove.

Netherlands government officials wrote in Monday’s report:

Since the publication in February, the MIVD has continued to investigate the broader Chinese cyber espionage campaign. This revealed that the state actor gained access to at least 20,000 FortiGate systems worldwide within a few months in both 2022 and 2023 through the vulnerability with the identifier CVE-2022-42475. Furthermore, research shows that the state actor behind this campaign was already aware of this vulnerability in FortiGate systems at least two months before Fortinet announced the vulnerability. During this so-called ‘zero-day’ period, the actor alone infected 14,000 devices. Targets include dozens of (Western) governments, international organizations and a large number of companies within the defense industry.

The state actor installed malware at relevant targets at a later date. This gave the state actor permanent access to the systems. Even if a victim installs security updates from FortiGate, the state actor continues to have this access.

It is not known how many victims actually have malware installed. The Dutch intelligence services and the NCSC consider it likely that the state actor could potentially expand its access to hundreds of victims worldwide and carry out additional actions such as stealing data.

Even with the technical report on the COATHANGER malware, infections from the actor are difficult to identify and remove. The NCSC and the Dutch intelligence services therefore state that it is likely that the state actor still has access to systems of a significant number of victims.

Fortinet’s failure to disclose in a timely manner is particularly acute given the severity of the vulnerability. Disclosures are crucial because they help users prioritize the installation of patches. When a new version fixes minor bugs, many organizations wait to install it. When it fixes a vulnerability with a 9.8 severity rating, they’re much more likely to expedite the update process. Given that the vulnerability was being exploited even before Fortinet fixed it, the disclosure likely wouldn’t have prevented all of the infections, but it stands to reason it could have stopped some.

Fortinet officials have never explained why they didn’t disclose the critical vulnerability when it was fixed. They have also declined to disclose what the company policy is for the disclosure of security vulnerabilities. Company representatives didn’t immediately respond to an email seeking comment for this post.


Apple and OpenAI currently have the most misunderstood partnership in tech

He isn’t using an iPhone, but some people talk to Siri like this.

On Monday, Apple premiered “Apple Intelligence” during a wide-ranging presentation at its annual Worldwide Developers Conference in Cupertino, California. However, the heart of its new tech, an array of Apple-developed AI models, was overshadowed by the announcement of ChatGPT integration into its device operating systems.

Since rumors of the partnership first emerged, we’ve seen confusion on social media about why Apple didn’t develop a cutting-edge GPT-4-like chatbot internally. Despite Apple’s year-long development of its own large language models (LLMs), many perceived the integration of ChatGPT (and opening the door for others, like Google Gemini) as a sign of Apple’s lack of innovation.

“This is really strange. Surely Apple could train a very good competing LLM if they wanted? They’ve had a year,” wrote AI developer Benjamin De Kraker on X. Elon Musk has also been grumbling about the OpenAI deal—and spreading misinformation about it—saying things like, “It’s patently absurd that Apple isn’t smart enough to make their own AI, yet is somehow capable of ensuring that OpenAI will protect your security & privacy!”

While Apple has developed many technologies internally, it has also never been shy about integrating outside tech when necessary in various ways, from acquisitions to built-in clients—in fact, Siri was initially developed by an outside company. But by making a deal with a company like OpenAI, which has been the source of a string of tech controversies recently, it’s understandable that some people question why Apple made the call—and what it might entail for the privacy of their on-device data.

“Our customers want something with world knowledge some of the time”

While Apple Intelligence largely utilizes its own Apple-developed LLMs, Apple also realized that there may be times when some users want to use what the company considers the current “best” existing LLM—OpenAI’s GPT-4 family. In an interview with The Washington Post, Apple CEO Tim Cook explained the decision to integrate OpenAI first:

“I think they’re a pioneer in the area, and today they have the best model,” he said. “And I think our customers want something with world knowledge some of the time. So we considered everything and everyone. And obviously we’re not stuck on one person forever or something. We’re integrating with other people as well. But they’re first, and I think today it’s because they’re best.”

The proposed benefit of Apple integrating ChatGPT into various experiences within iOS, iPadOS, and macOS is that it allows AI users to access ChatGPT’s capabilities without the need to switch between different apps—either through the Siri interface or through Apple’s integrated “Writing Tools.” Users will also have the option to connect their paid ChatGPT account to access extra features.

As an answer to privacy concerns, Apple says that before any data is sent to ChatGPT, the OS asks for the user’s permission, and the entire ChatGPT experience is optional. According to Apple, requests are not stored by OpenAI, and users’ IP addresses are hidden. Apparently, communication with OpenAI servers happens through API calls similar to using the ChatGPT app on iOS, and there is reportedly no deeper OS integration that might expose user data to OpenAI without the user’s permission.

We can only take Apple’s word for it at the moment, of course, and solid details about Apple’s AI privacy efforts will emerge once security experts get their hands on the new features later this year.
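Apple hasn’t shown what this flow looks like under the hood, but the described pattern (an explicit consent prompt gating a stateless API call, with on-device handling as the default) is simple to sketch. The following Python pseudocode is entirely hypothetical; none of these function names are Apple’s.

# Hypothetical sketch of the consent-gated flow Apple describes. None of
# these names are Apple's, and the real implementation is not public.
def ask_user_permission(prompt: str) -> bool:
    return input(f'Send "{prompt}" to ChatGPT? [y/N] ').strip().lower() == "y"

def local_model_answer(prompt: str) -> str:  # stand-in for Apple's on-device LLMs
    return "(handled on-device)"

def chatgpt_api_call(prompt: str) -> str:  # stand-in for the OpenAI API request
    return "(answer from ChatGPT)"  # per Apple: not stored by OpenAI, IP hidden

def handle_request(prompt: str) -> str:
    if ask_user_permission(prompt):
        return chatgpt_api_call(prompt)
    return local_model_answer(prompt)  # nothing leaves the device without consent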

Apple’s history of tech integration

So you’ve seen why Apple chose OpenAI. But why look to outside companies for tech? In some ways, Apple building an external LLM client into its operating systems isn’t too different from what it has previously done with streaming video (the YouTube app on the original iPhone), Internet search (Google search integration), and social media (integrated Twitter and Facebook sharing).

The press has positioned Apple’s recent AI moves as Apple “catching up” with competitors like Google and Microsoft in terms of chatbots and generative AI. But playing it slow and cool has long been part of Apple’s M.O.—not necessarily introducing the bleeding edge of technology but improving existing tech through refinement and giving it a better user interface.


Nasty bug with very simple exploit hits PHP just in time for the weekend

WORST FIT EVER —

With PoC code available and active Internet scans, speed is of the essence.


A critical vulnerability in the PHP programming language can be trivially exploited to execute malicious code on Windows devices, security researchers warned as they urged those affected to take action before the weekend starts.

Within 24 hours of the vulnerability and accompanying patch being published, researchers from the nonprofit security organization Shadowserver reported Internet scans designed to identify servers that are susceptible to attacks. That—combined with (1) the ease of exploitation, (2) the availability of proof-of-concept attack code, (3) the severity of remotely executing code on vulnerable machines, and (4) the widely used XAMPP platform being vulnerable by default—has prompted security practitioners to urge admins to check whether their PHP servers are affected before starting the weekend.

When “Best Fit” isn’t

“A nasty bug with a very simple exploit—perfect for a Friday afternoon,” researchers with security firm WatchTowr wrote.

CVE-2024-4577, as the vulnerability is tracked, stems from errors in the way PHP converts Unicode characters into ASCII. A feature built into Windows known as Best Fit allows attackers to use a technique known as argument injection to pass user-supplied input into commands executed by an application, in this case, PHP. Exploits allow attackers to bypass CVE-2012-1823, a critical code execution vulnerability patched in PHP in 2012.

“While implementing PHP, the team did not notice the Best-Fit feature of encoding conversion within the Windows operating system,” researchers with Devcore, the security firm that discovered CVE-2024-4577, wrote. “This oversight allows unauthenticated attackers to bypass the previous protection of CVE-2012-1823 by specific character sequences. Arbitrary code can be executed on remote PHP servers through the argument injection attack.”

CVE-2024-4577 affects PHP only when it runs in a mode known as CGI, in which a web server parses HTTP requests and passes them to a PHP script for processing. Even when PHP isn’t set to CGI mode, however, the vulnerability may still be exploitable when PHP executables such as php.exe and php-cgi.exe are in directories that are accessible by the web server. This configuration is set by default in XAMPP for Windows, making the platform vulnerable unless it has been modified.

One example, WatchTowr noted, occurs when queries are parsed and sent through a command line. The result: a harmless request such as http://host/cgi.php?foo=bar could be converted into php.exe cgi.php foo=bar, a command that would be executed by the main PHP engine.

No escape

Like many other languages, PHP converts certain types of user input to prevent it from being interpreted as a command for execution. This is a process known as escaping. For example, in HTML, the < and > characters are often escaped by converting them into their HTML entity equivalents &lt; and &gt; to prevent them from being interpreted as HTML tags by a browser.
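Python’s standard library performs the same kind of HTML escaping, which makes the concept easy to demonstrate:

# Escaping in one call: characters with syntactic meaning are replaced by
# inert equivalents before the text reaches an interpreter (here, HTML).
import html

print(html.escape("<script>alert(1)</script>"))
# prints: &lt;script&gt;alert(1)&lt;/script&gt;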

The WatchTowr researchers demonstrate how Best Fit fails to escape characters such as a soft hyphen (with Unicode value 0xAD) and instead converts it to an unescaped regular hyphen (0x2D), a character that’s instrumental in many code syntaxes.

The researchers went on to explain:

It turns out that, as part of unicode processing, PHP will apply what’s known as a ‘best fit’ mapping, and helpfully assume that, when the user entered a soft hyphen, they actually intended to type a real hyphen, and interpret it as such. Herein lies our vulnerability—if we supply a CGI handler with a soft hyphen (0xAD), the CGI handler won’t feel the need to escape it, and will pass it to PHP. PHP, however, will interpret it as if it were a real hyphen, which allows an attacker to sneak extra command line arguments, which begin with hyphens, into the PHP process.

This is remarkably similar to an older PHP bug (when in CGI mode), CVE-2012-1823, and so we can borrow some exploitation techniques developed for this older bug and adapt them to work with our new bug. A helpful writeup advises that, to translate our injection into RCE, we should aim to inject the following arguments:

-d allow_url_include=1 -d auto_prepend_file=php://input  

This will accept input from our HTTP request body, and process it using PHP. Straightforward enough – let’s try a version of this equipped with our 0xAD ‘soft hyphen’ instead of the usual hyphen. Maybe it’s enough to slip through the escaping?

POST /test.php?%ADd+allow_url_include%3d1+%ADd+auto_prepend_file%3dphp://input HTTP/1.1
Host: host
User-Agent: curl/8.3.0
Accept: */*
Content-Length: 23
Content-Type: application/x-www-form-urlencoded
Connection: keep-alive

Oh joy—we’re rewarded with a phpinfo page, showing us we have indeed achieved RCE.
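The Best Fit conversion at the heart of the bug can be observed directly on Windows through the WideCharToMultiByte API. A small, Windows-only Python sketch (code page 932 is the Japanese locale’s default; the exact call site inside PHP’s CGI pipeline differs, but the mapping is the point):

# Windows-only sketch: watch Best Fit turn a soft hyphen (U+00AD) into a
# real hyphen (0x2D) under code page 932 (Japanese). Passing dwFlags=0
# leaves Windows' default best-fit mapping enabled.
import ctypes

buf = ctypes.create_string_buffer(8)
n = ctypes.windll.kernel32.WideCharToMultiByte(
    932,       # code page: Japanese, one of the locales confirmed vulnerable
    0,         # flags: 0 keeps best-fit mappings enabled
    "\u00ad",  # input: a single soft hyphen
    1, buf, 8, None, None,
)
print(buf.raw[:n])  # b'-': the soft hyphen arrives as a plain hyphen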

The vulnerability was discovered by Devcore researcher Orange Tsai, who said: “The bug is incredibly simple, but that’s also what makes it interesting.”

The Devcore writeup said that the researchers have confirmed that XAMPP is vulnerable when Windows is configured to use the locales for Traditional Chinese, Simplified Chinese, or Japanese. In Windows, a locale is a set of user preference information related to the user’s language, environment, and/or cultural conventions. The researchers haven’t tested other locales and have urged people using them to perform a comprehensive asset assessment to test their usage scenarios.

CVE-2024-4577 affects all versions of PHP running on a Windows device. That includes version branches 8.3 prior to 8.3.8, 8.2 prior to 8.2.20, and 8.1 prior to 8.1.29.

The 8.0, 7, and 5 version branches are also vulnerable, but since they’re no longer supported, admins will have to follow mitigation advice since patches aren’t available. One option is to apply what are known as rewrite rules such as:

RewriteEngine On
RewriteCond %{QUERY_STRING} ^%ad [NC]
RewriteRule .? - [F,L]

The researchers caution these rules have been tested only for the three locales they have confirmed as vulnerable.

XAMPP for Windows had yet to release a fix at the time this post went live. Admins who don’t need PHP CGI can turn it off by editing the following Apache HTTP Server configuration file:

C:/xampp/apache/conf/extra/httpd-xampp.conf

Locate the corresponding line:

ScriptAlias /php-cgi/ "C:/xampp/php/"  

And comment it out:

# ScriptAlias /php-cgi/ "C:/xampp/php/"  

Additional analysis of the vulnerability is available here.
