US says AI models can’t hold patents

Robot inventors dismayed —

Inventors must be human, but AI can still officially help under certain conditions.

An illustrated concept of a digital brain, crossed out.

On Tuesday, the United States Patent and Trademark Office (USPTO) published guidance on inventorship for AI-assisted inventions, clarifying that while AI systems can play a role in the creative process, only natural persons (human beings) who make significant contributions to the conception of an invention can be named as inventors. It also rules out using AI models to churn out patent ideas without significant human input.

The USPTO says this position is supported by “the statutes, court decisions, and numerous policy considerations,” including the Executive Order on AI issued by President Biden. We’ve previously covered Dr. Stephen Thaler’s attempts, begun in 2019 and repeatedly rejected by US courts, to have an AI program called “DABUS” named as the inventor on a US patent.

This guidance follows themes previously set by the US Copyright Office (and agreed upon by a judge) that an AI model cannot own a copyright for a piece of media and that substantial human contributions are required for copyright protection.

Even though an AI model itself cannot be named an inventor or joint inventor on a patent, using AI assistance to create an invention does not necessarily disqualify a human from holding a patent, as the USPTO explains:

“While AI systems and other non-natural persons cannot be listed as inventors on patent applications or patents, the use of an AI system by a natural person(s) does not preclude a natural person(s) from qualifying as an inventor (or joint inventors) if the natural person(s) significantly contributed to the claimed invention.”

However, the USPTO says that significant human input is required for an invention to be patentable: “Maintaining ‘intellectual domination’ over an AI system does not, on its own, make a person an inventor of any inventions created through the use of the AI system.” So a person simply overseeing an AI system isn’t suddenly an inventor. The person must make a significant contribution to the conception of the invention.

If someone does use an AI model to help create an invention, the guidance describes how the application process would work. First, patent applications for AI-assisted inventions must name “the natural person(s) who significantly contributed to the invention as the inventor,” and additionally, applications must not list “any entity that is not a natural person as an inventor or joint inventor, even if an AI system may have been instrumental in the creation of the claimed invention.”

Reading between the lines, it seems the contributions made by AI systems are akin to contributions made by other tools that assist in the invention process. The document does not explicitly say that the use of AI is required to be disclosed during the application process.

Even with the published guidance, the USPTO is seeking public comment on the newly released guidelines and issues related to AI inventorship on its website.

Broadcom-owned VMware kills the free version of ESXi virtualization software

freesphere —

Software’s free version was a good fit for tinkerers and hobbyists.

Since Broadcom’s $61 billion acquisition of VMware closed in November 2023, Broadcom has been charging ahead with major changes to the company’s personnel and products. In December, Broadcom began laying off thousands of employees and stopped selling perpetually licensed versions of VMware products, pushing its customers toward more stable and lucrative software subscriptions instead. In January, it ended its partner programs, potentially disrupting sales and service for many users of its products.

This week, Broadcom is making a change that is smaller in scale but possibly more relevant for home users of its products: The free version of VMware’s vSphere Hypervisor, also known as ESXi, is being discontinued.

ESXi is what is known as a “bare-metal hypervisor,” lightweight software that runs directly on hardware without requiring a separate operating system layer in between. ESXi allows you to split a PC’s physical resources (CPUs and CPU cores, RAM, storage, networking components, and so on) among multiple virtual machines. ESXi also supports passthrough for PCI, SATA, and USB accessories, allowing guest operating systems direct access to components like graphics cards and hard drives.
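For a sense of what that looks like in practice, here’s a minimal sketch that connects to an ESXi host and lists each VM’s slice of the hardware using pyVmomi, VMware’s Python SDK. The hostname and credentials are placeholders, and skipping certificate verification is only sensible for a homelab box with a self-signed cert:

```python
# Minimal pyVmomi sketch: connect to an ESXi host and list each VM's share
# of the machine (vCPUs and RAM). Host and credentials are placeholders.
import ssl

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # homelab-only: skip cert verification
si = SmartConnect(host="esxi.local", user="root", pwd="changeme", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    for vm in view.view:
        cfg = vm.summary.config
        print(f"{cfg.name}: {cfg.numCpu} vCPU, {cfg.memorySizeMB} MB RAM")
    view.Destroy()
finally:
    Disconnect(si)
```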

The free version of ESXi had limits compared to the full, paid enterprise versions—it could only support up to two physical CPUs, didn’t come with any software support, and lacked automated load-balancing and management features. But it was still useful for enthusiasts and home users who wanted to run multipurpose home servers or to split a system’s time between Windows and one or more Linux distributions without the headaches of dual booting. It was also a useful tool for people who used the enterprise versions of the vSphere Hypervisor but wanted to test the software or learn its ins and outs without dealing with paid licensing.

For the latter group, a 60-day trial of the VMware vSphere 8 software is still available. Tinkerers will be better off migrating to an alternative product like Proxmox, XCP-ng, or even the Hyper-V capabilities built into the Pro versions of Windows 10 and 11.

OpenAI experiments with giving ChatGPT a long-term conversation memory

“I remember…the Alamo” —

AI chatbot “memory” will recall facts from previous conversations when enabled.

When ChatGPT looks things up, a pair of green pixelated hands look through paper records, much like this. Just kidding. (Benj Edwards / Getty Images)

On Tuesday, OpenAI announced that it is experimenting with adding a form of long-term memory to ChatGPT that will allow it to remember details between conversations. You can ask ChatGPT to remember something, see what it remembers, and ask it to forget. Currently, it’s only available to a small number of ChatGPT users for testing.

So far, large language models have typically used two types of memory: one baked into the AI model during the training process (before deployment) and an in-context memory (the conversation history) that persists for the duration of your session. Usually, ChatGPT forgets what you have told it during a conversation once you start a new session.

Various projects have experimented with giving LLMs a memory that persists beyond the context window. (The context window is the hard limit on the number of tokens the LLM can process at once.) The techniques include dynamically managing context history, compressing previous history through summarization, linking to vector databases that store information externally, or simply injecting information periodically into the system prompt (the instructions ChatGPT receives at the beginning of every chat).
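As a concrete example of the last technique, here’s a minimal sketch of session-spanning memory done as system-prompt injection, in the spirit of Custom Instructions. It assumes the openai Python package and an API key in the environment, and it stores “memories” in a local JSON file; OpenAI hasn’t said this is how ChatGPT’s feature actually works:

```python
# Toy persistent memory: save facts to disk, then inject them into the
# system prompt of every new session. Not OpenAI's implementation.
import json
from pathlib import Path

from openai import OpenAI  # pip install openai; needs OPENAI_API_KEY set

MEMORY_FILE = Path("memories.json")

def load_memories() -> list[str]:
    return json.loads(MEMORY_FILE.read_text()) if MEMORY_FILE.exists() else []

def remember(fact: str) -> None:
    # Appends on every run; a real system would deduplicate and curate.
    MEMORY_FILE.write_text(json.dumps(load_memories() + [fact]))

def chat(user_message: str) -> str:
    system = ("You are a helpful assistant. Known facts about the user:\n"
              + "\n".join(f"- {m}" for m in load_memories()))
    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # model name is illustrative
        messages=[{"role": "system", "content": system},
                  {"role": "user", "content": user_message}],
    )
    return resp.choices[0].message.content

remember("Runs a coffee shop")  # survives across sessions, unlike chat history
print(chat("Draft a short weekend promo post for my business."))
```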

A screenshot of ChatGPT memory controls provided by OpenAI.

OpenAI hasn’t explained which technique it uses here, but the implementation reminds us of Custom Instructions, a feature OpenAI introduced in July 2023 that lets users add custom additions to the ChatGPT system prompt to change its behavior.

Possible applications for the memory feature provided by OpenAI include explaining how you prefer your meeting notes to be formatted, telling it you run a coffee shop and having ChatGPT assume that’s what you’re talking about, keeping information about your toddler who loves jellyfish so it can generate relevant graphics, and remembering preferences for kindergarten lesson plan designs.

Also, OpenAI says that memories may help ChatGPT Enterprise and Team subscribers work together better since shared team memories could remember specific document formatting preferences or which programming frameworks your team uses. And OpenAI plans to bring memories to GPTs soon, with each GPT having its own siloed memory capabilities.

Memory control

Obviously, any tendency to remember information brings privacy implications. You should already know that sending information to OpenAI for processing on remote servers introduces the possibility of privacy leaks and that OpenAI trains AI models on user-provided information by default unless conversation history is disabled or you’re using an Enterprise or Team account.

Along those lines, OpenAI says that your saved memories are also subject to OpenAI training use unless you meet the criteria listed above. Still, the memory feature can be turned off completely. Additionally, the company says, “We’re taking steps to assess and mitigate biases, and steer ChatGPT away from proactively remembering sensitive information, like your health details—unless you explicitly ask it to.”

Users will also be able to control what ChatGPT remembers using a “Manage Memory” interface that lists memory items. “ChatGPT’s memories evolve with your interactions and aren’t linked to specific conversations,” OpenAI says. “Deleting a chat doesn’t erase its memories; you must delete the memory itself.”

ChatGPT’s memory features are not currently available to every ChatGPT account, so we have not been able to experiment with them yet. Access during this testing period appears to be random among ChatGPT accounts (free and paid) for now. “We are rolling out to a small portion of ChatGPT free and Plus users this week to learn how useful it is,” OpenAI writes. “We will share plans for broader roll out soon.”

The Super Bowl’s best and wackiest AI commercials

Superb Owl News —

It’s nothing like “crypto bowl” in 2022, but AI made a notable splash during the big game.

A still image from BodyArmor’s 2024 “Field of Fake” Super Bowl commercial. (BodyArmor)

Heavily hyped tech products have a history of appearing in Super Bowl commercials during football’s biggest game—including the Apple Macintosh in 1984, dot-com companies in 2000, and cryptocurrency firms in 2022. In 2024, the hot tech in town is artificial intelligence, and several companies showed AI-related ads at Super Bowl LVIII. Here’s a rundown of notable appearances that range from serious to wacky.

Microsoft Copilot

Microsoft Game Day Commercial | Copilot: Your everyday AI companion.

It’s been a year since Microsoft launched the AI assistant Microsoft Copilot (as “Bing Chat”), and Microsoft is leaning heavily into its AI-assistant technology, which is powered by large language models from OpenAI. In Copilot’s first-ever Super Bowl commercial, we see scenes of various people with defiant text overlaid on the screen: “They say I will never open my own business or get my degree. They say I will never make my movie or build something. They say I’m too old to learn something new. Too young to change the world. But I say watch me.”

Then the commercial shows Copilot creating solutions to some of these problems, with prompts like, “Generate storyboard images for the dragon scene in my script,” “Write code for my 3d open world game,” “Quiz me in organic chemistry,” and “Design a sign for my classic truck repair garage Mike’s.”

Of course, since generative AI is an unfinished technology, many of these solutions are more aspirational than practical at the moment. On Bluesky, writer Ed Zitron put Microsoft’s truck repair logo to the test and saw results that weren’t nearly as polished as those seen in the commercial. On X, others have criticized and poked fun at the “3d open world game” generation prompt, which is a complex task that would take far more than a single, simple prompt to produce useful code.

Google Pixel 8 “Guided Frame” feature

Javier in Frame | Google Pixel SB Commercial 2024.

Instead of focusing on generative aspects of AI, Google’s commercial showed off a feature called “Guided Frame” on the Pixel 8 phone that uses machine vision technology and a computer voice to help people with blindness or low vision take photos by centering the frame on a face or multiple faces. Guided Frame debuted in 2022 in conjunction with the Google Pixel 7.

The commercial tells the story of a person named Javier, who says, “For many people with blindness or low vision, there hasn’t always been an easy way to capture daily life.” We see a simulated blurry first-person view of Javier holding a smartphone and hear a computer-synthesized voice describing what the AI model sees, directing the person to center on a face to snap various photos and selfies.
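Google hasn’t detailed Guided Frame’s internals, but the basic loop (find a face, compare its position to the frame center, speak a correction) is easy to sketch with OpenCV’s bundled face detector. Everything here is illustrative, and a real app would route the hint strings to a text-to-speech engine:

```python
# Toy sketch of a Guided Frame-style hint loop using OpenCV's bundled
# Haar cascade face detector. Not Google's implementation.
import cv2  # pip install opencv-python

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def framing_hint(frame) -> str:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return "No face found; move the phone slowly"
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # largest face wins
    offset = (x + w / 2) - frame.shape[1] / 2  # face center vs. frame center
    if abs(offset) < frame.shape[1] * 0.1:
        return "Face centered; ready to take the photo"
    # Direction convention depends on which camera is in use.
    return "Pan left" if offset < 0 else "Pan right"

cap = cv2.VideoCapture(0)  # default camera
ok, frame = cap.read()
if ok:
    print(framing_hint(frame))  # a real app would speak this aloud
cap.release()
```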

Considering the controversies that generative AI currently generates (pun intended), it’s refreshing to see a positive application of AI technology used as an accessibility feature. Relatedly, an app called Be My Eyes (powered by OpenAI’s GPT-4V) also aims to help low-vision people interact with the world.

Despicable Me 4

Despicable Me 4 – Minion Intelligence (Big Game Spot).

So far, we’ve covered a couple of ads that cast AI-powered products in a positive light. Elsewhere in Super Bowl ads, companies weren’t as generous about the technology. In an ad for the film Despicable Me 4, we see two Minions creating a series of terribly disfigured AI-generated still images reminiscent of Stable Diffusion 1.4 from 2022. There are three-legged people doing yoga, a painting of Steve Carell and Will Ferrell as Elizabethan gentlemen, a handshake with too many fingers, people eating spaghetti in a weird way, and a pair of people riding dachshunds in a race.

The images are paired with an earnest voiceover that says, “Artificial intelligence is changing the way we see the world, showing us what we never thought possible, transforming the way we do business, and bringing family and friends closer together. With artificial intelligence, the future is in good hands.” When the voiceover ends, the camera pans out to show hundreds of Minions generating similarly twisted images on computers.

Speaking of image synthesis at the Super Bowl, people mistakenly assumed that a Christian commercial created by He Gets Us, LLC was AI-generated, likely due to its gaudy technicolor visuals. With the benefit of a YouTube replay and the ability to look at details, the “He washed feet” commercial doesn’t appear AI-generated to us, but it goes to show how the concept of image synthesis has begun to cast doubt on human-made creations.

Canada declares Flipper Zero public enemy No. 1 in car-theft crackdown

FLIPPING YOUR LID —

How do you ban a device built with open source hardware and software anyway?

A Flipper Zero device. (Credit: https://flipperzero.one/)

Canadian Prime Minister Justin Trudeau has identified an unlikely public enemy No. 1 in his new crackdown on car theft: the Flipper Zero, a $200 piece of open source hardware used to capture, analyze and interact with simple radio communications.

On Thursday, the Innovation, Science and Economic Development Canada agency said it will “pursue all avenues to ban devices used to steal vehicles by copying the wireless signals for remote keyless entry, such as the Flipper Zero, which would allow for the removal of those devices from the Canadian marketplace through collaboration with law enforcement agencies.” A social media post by François-Philippe Champagne, the minister of that agency, said that as part of the push “we are banning the importation, sale and use of consumer hacking devices, like flippers, used to commit these crimes.”

In remarks made the same day, Trudeau said the push will target similar tools that he said can be used to defeat anti-theft protections built into virtually all new cars.

“In reality, it has become too easy for criminals to obtain sophisticated electronic devices that make their jobs easier,” he said. “For example, to copy car keys. It is unacceptable that it is possible to buy tools that help car theft on major online shopping platforms.”

Presumably, such tools subject to the ban would include HackRF One and LimeSDR, which have become crucial for analyzing and testing the security of all kinds of electronic devices to find vulnerabilities before they’re exploited. None of the government officials identified any of these tools, but in an email, a representative of the Canadian government reiterated the use of the phrase “pursuing all avenues to ban devices used to steal vehicles by copying the wireless signals for remote keyless entry.”

A humble hobbyist device

The push to ban any of these tools has been met with fierce criticism from hobbyists and security professionals. Their case has only been strengthened by Trudeau’s focus on Flipper Zero. This slim, lightweight device bearing the logo of an adorable dolphin acts as a Swiss Army knife for sending, receiving, and analyzing all kinds of wireless communications. It can interact with radio signals, including RFID, NFC, Bluetooth, Wi-Fi, and standard radio. People can use it to change the channels of a TV at a bar covertly, clone simple hotel key cards, read the RFID chip implanted in pets, open and close some garage doors, and, until Apple issued a patch, send iPhones into a never-ending DoS loop.

The price and ease of use make Flipper Zero ideal for beginners and hobbyists who want to understand how increasingly ubiquitous communications protocols such as NFC and Wi-Fi work. It bundles various open source hardware and software into a portable form factor that sells for an affordable price. What seems lost on the Canadian government is that the device isn’t especially useful for stealing cars, because it lacks the more advanced capabilities required to bypass the anti-theft protections introduced over the past two decades.

One thing the Flipper Zero is exceedingly ill-equipped for is defeating modern antihack protections built into cars, smartcards, phones, and other electronic devices.

The most prevalent form of electronics-assisted car theft these days, for instance, uses what are known as signal amplification relay devices against keyless ignition and entry systems. This form of hack works by holding one device near a key fob and a second device near the vehicle the fob works with. In the most typical scenario, the fob is located on a shelf near a locked front door, and the car is several dozen feet away in a driveway. By placing one device near the front door and another one next to the car, the hack beams the radio signals necessary to unlock and start the vehicle.

London Underground is testing real-time AI surveillance tools to spot crime

tube tracking —

Computer vision system tried to detect crime, weapons, people falling, and fare dodgers.

Commuters wait on the platform as a Central Line tube train arrives at Liverpool Street London Transport Tube Station in 2023.

Thousands of people using the London Underground had their movements, behavior, and body language watched by AI surveillance software designed to see if they were committing crimes or were in unsafe situations, new documents obtained by WIRED reveal. The machine-learning software was combined with live CCTV footage to try to detect aggressive behavior and guns or knives being brandished, as well as looking for people falling onto Tube tracks or dodging fares.

From October 2022 until the end of September 2023, Transport for London (TfL), which operates the city’s Tube and bus network, tested 11 algorithms to monitor people passing through Willesden Green Tube station, in the northwest of the city. The proof-of-concept trial is the first time the transport body has combined AI and live video footage to generate alerts that are sent to frontline staff. More than 44,000 alerts were issued during the test, with 19,000 delivered to station staff in real time.

Documents sent to WIRED in response to a Freedom of Information Act request detail how TfL used a wide range of computer vision algorithms to track people’s behavior while they were at the station. It is the first time the full details of the trial have been reported, and it follows TfL saying, in December, that it will expand its use of AI to detect fare dodging to more stations across the British capital.

In the trial at Willesden Green—a station that had 25,000 visitors per day before the COVID-19 pandemic—the AI system was set up to detect potential safety incidents to allow staff to help people in need, but it also targeted criminal and antisocial behavior. Three documents provided to WIRED detail how AI models were used to detect wheelchairs, prams, vaping, people accessing unauthorized areas, or putting themselves in danger by getting close to the edge of the train platforms.
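TfL hasn’t published its models, but the alert pattern the documents describe (per-frame detections filtered by category and confidence, then pushed to staff) has a simple shape. A hypothetical sketch in Python, with a stand-in detector and invented label names:

```python
# Hypothetical detection-to-alert pipeline of the kind the TfL documents
# describe. detect_objects() stands in for the real (non-public) computer
# vision models; its output here is canned demo data.
ALERT_CLASSES = {"weapon", "person_near_platform_edge", "unauthorized_area"}
CONFIDENCE_THRESHOLD = 0.8

def detect_objects(frame):
    """Stand-in for a real detector; returns (label, confidence) pairs."""
    return [("pram", 0.62), ("person_near_platform_edge", 0.93)]

def notify_staff(label: str, confidence: float) -> None:
    # A real system would push this to station staff devices in real time.
    print(f"ALERT: {label} ({confidence:.0%})")

frame = object()  # stand-in for one CCTV frame
for label, confidence in detect_objects(frame):
    if label in ALERT_CLASSES and confidence >= CONFIDENCE_THRESHOLD:
        notify_staff(label, confidence)
```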

The documents, which are partially redacted, also show how the AI made errors during the trial, such as flagging children who were following their parents through ticket barriers as potential fare dodgers, or not being able to tell the difference between a folding bike and a non-folding bike. Police officers also assisted the trial by holding a machete and a gun in the view of CCTV cameras, while the station was closed, to help the system better detect weapons.

Privacy experts who reviewed the documents question the accuracy of object detection algorithms. They also say it is not clear how many people knew about the trial, and warn that such surveillance systems could easily be expanded in the future to include more sophisticated detection systems or face recognition software that attempts to identify specific individuals. “While this trial did not involve facial recognition, the use of AI in a public space to identify behaviors, analyze body language, and infer protected characteristics raises many of the same scientific, ethical, legal, and societal questions raised by facial recognition technologies,” says Michael Birtwistle, associate director at the independent research institute the Ada Lovelace Institute.

In response to WIRED’s Freedom of Information request, TfL says it used existing CCTV images, AI algorithms, and “numerous detection models” to detect patterns of behavior. “By providing station staff with insights and notifications on customer movement and behaviour they will hopefully be able to respond to any situations more quickly,” the response says. It also says the trial has provided insight into fare evasion that will “assist us in our future approaches and interventions,” and the data gathered is in line with its data policies.

In a statement sent after publication of this article, Mandy McGregor, TfL’s head of policy and community safety, says the trial results are continuing to be analyzed and adds, “there was no evidence of bias” in the data collected from the trial. During the trial, McGregor says, there were no signs in place at the station that mentioned the tests of AI surveillance tools.

“We are currently considering the design and scope of a second phase of the trial. No other decisions have been taken about expanding the use of this technology, either to further stations or adding capability,” McGregor says. “Any wider roll out of the technology beyond a pilot would be dependent on a full consultation with local communities and other relevant stakeholders, including experts in the field.”

Report: Sam Altman seeking trillions for AI chip fabrication from UAE, others

chips ahoy —

WSJ: Audacious $5-$7 trillion investment would aim to expand global AI chip supply.

OpenAI Chief Executive Officer Sam Altman walks on the House side of the US Capitol on January 11, 2024, in Washington, DC. (Photo by Kent Nishimura/Getty Images)

On Thursday, The Wall Street Journal reported that OpenAI CEO Sam Altman is in talks with investors to raise as much as $5 trillion to $7 trillion for AI chip manufacturing, according to people familiar with the matter. The funding seeks to address the scarcity of graphics processing units (GPUs) crucial for training and running large language models like those that power ChatGPT, Microsoft Copilot, and Google Gemini.

The high dollar amount reflects the huge amount of capital necessary to spin up new semiconductor manufacturing capability. “As part of the talks, Altman is pitching a partnership between OpenAI, various investors, chip makers and power providers, which together would put up money to build chip foundries that would then be run by existing chip makers,” writes the Wall Street Journal in its report. “OpenAI would agree to be a significant customer of the new factories.”

To hit these ambitious targets—which exceed the entire semiconductor industry’s current $527 billion in global sales—Altman has reportedly met with a range of potential investors worldwide, including sovereign wealth funds and government entities, notably the United Arab Emirates, SoftBank CEO Masayoshi Son, and representatives from Taiwan Semiconductor Manufacturing Co. (TSMC).

TSMC is the world’s largest dedicated independent semiconductor foundry. It’s a critical linchpin that companies such as Nvidia, Apple, Intel, and AMD rely on to fabricate SoCs, CPUs, and GPUs for various applications.

Altman reportedly seeks to expand the global capacity for semiconductor manufacturing significantly, funding the infrastructure necessary to support the growing demand for GPUs and other AI-specific chips. GPUs are excellent at parallel computation, which makes them ideal for running AI models that rely heavily on matrix multiplication. However, the technology sector currently faces a significant shortage of these important components, constraining the potential for AI advancements and applications.
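To see why GPUs are the choke point, consider that models like these spend most of their time on large matrix multiplications, where every output element is an independent dot product that can be computed in parallel. A small NumPy illustration with arbitrary shapes, not taken from any real model:

```python
import numpy as np

# One transformer-style projection: multiply a batch of token vectors by a
# layer's weight matrix. Each of the 512 * 4096 output cells is an
# independent dot product, so the work parallelizes across GPU cores.
tokens = np.random.rand(512, 4096).astype(np.float32)
weights = np.random.rand(4096, 4096).astype(np.float32)

out = tokens @ weights  # roughly 17 billion floating-point operations
print(out.shape)        # (512, 4096)
```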

In particular, the UAE’s involvement, led by Sheikh Tahnoun bin Zayed al Nahyan, a key security official and chair of numerous Abu Dhabi sovereign wealth vehicles, reflects global interest in AI’s potential and the strategic importance of semiconductor manufacturing. However, the prospect of substantial UAE investment in a key tech industry raises potential geopolitical concerns, particularly regarding the US government’s strategic priorities in semiconductor production and AI development.

The US has been cautious about allowing foreign control over the supply of microchips, given their importance to the digital economy and national security. Reflecting this, the Biden administration has undertaken efforts to bolster domestic chip manufacturing through subsidies and regulatory scrutiny of foreign investments in important technologies.

To put the $5 trillion to $7 trillion estimate in perspective, the White House just today announced a $5 billion investment in R&D to advance US-made semiconductor technologies. TSMC has already sunk $40 billion—one of the largest foreign investments in US history—into a US chip plant in Arizona. As of now, it’s unclear whether Altman has secured any commitments toward his fundraising goal.

Updated on February 9, 2024 at 8:45 PM Eastern with a quote from the WSJ that clarifies the proposed relationship between OpenAI and partners in the talks.

A password manager LastPass calls “fraudulent” booted from App Store

GREAT PRETENDER —

“LassPass” mimicked the name and logo of real LastPass password manager.

As Apple has stepped up its promotion of its App Store as a safer and more trustworthy source of apps, its operators scrambled Thursday to correct a major threat to that narrative: a listing that password manager maker LastPass said was a “fraudulent app impersonating” its brand.

At the time this article on Ars went live, Apple had removed the app—titled LassPass and bearing a logo strikingly similar to the one used by LastPass—from its App Store. At the same time, Apple allowed a separate app submitted by the same developer to remain. Apple provided no explanation for removing the former app or for allowing the latter to remain.

Apple warns of “new risks” from competition

The move comes as Apple has beefed up its efforts to promote the App Store as a safer alternative to competing sources of iOS apps mandated recently by the European Union. In an interview with App Store head Phil Schiller published this month by Fast Company, Schiller said the new app stores will “bring new risks”—including pornography, hate speech, and other forms of objectionable content—that Apple has long kept at bay.

“I have no qualms in saying that our goal is going to always be to make the App Store the safest, best place for users to get apps,” he told writer Michael Grothaus. “I think users—and the whole developer ecosystem—have benefited from that work that we’ve done together with them. And we’re going to keep doing that.”

Somehow, Apple’s app vetting process—long vaunted even though Apple has provided few specifics—failed to spot the LastPass lookalike. Apple removed LassPass Thursday morning, which LastPass said was two days after it flagged the app to Apple and one day after it warned its users that the app was fraudulent.

“We are raising this to our customers’ attention to avoid potential confusion and/or loss of personal data,” LastPass Senior Principal Intelligence Analyst Mike Kosak wrote.

There’s no denying that the logo and name were strikingly similar to the official ones. Below is a screenshot of how LassPass appeared, followed by the official LastPass listing:

The LassPass entry as it appeared in the App Store.

The official LastPass entry.

Here yesterday, gone today

Thomas Reed, director of Mac offerings at security firm Malwarebytes, noted that the LassPass entry in the App Store said the app’s privacy policy was available on bluneel[.]com, but that page was gone by Thursday, and the domain’s main page showed only a generic landing page. Whois records indicated the domain was registered five months ago.
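That kind of domain-age triage is easy to reproduce. A small sketch, assuming the standard whois command-line client is installed; the domain is the one defanged as bluneel[.]com above:

```python
import subprocess

domain = "bluneel.com"  # rendered as bluneel[.]com in the story above
result = subprocess.run(["whois", domain], capture_output=True, text=True)
for line in result.stdout.splitlines():
    if "Creation Date" in line:  # field name used in .com registry output
        print(line.strip())
```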

There’s no indication that LassPass collected users’ LastPass credentials or copied any of the data it stored. The app did, however, provide fields for users to enter a wealth of sensitive personal information, including passwords, email and physical addresses, and bank, credit, and debit card data. The app had an option for paid subscriptions.

A LastPass representative said the company learned of the app on Tuesday and focused its efforts on getting it removed rather than analyzing its behavior. Company officials don’t have information about precisely what LassPass did when it was installed or when it first appeared in the App Store.

The App Store continues to host a separate app from the same developer, who is listed simply as Parvati Patel. (A quick Internet search reveals many individuals with the same name; it wasn’t immediately possible to identify the specific one.) The separate app is named PRAJAPATI SAMAJ 42 Gor ABD-GNR, and a corresponding privacy policy (at psag42[.]in/policy.html) is dated December 2023. It’s described as an “application for Ahmedabad-Gandhinager Prajapati Samaj app” and further as a “platform for community.” The app was also recently listed on Google Play but was no longer available for download at the time of publication. Attempts to contact the developer were unsuccessful.

There’s no indication the separate app violates any App Store policy. Apple representatives didn’t respond to an email asking questions about the incident or its vetting process or policies.

Critical vulnerability affecting most Linux distros allows for bootkits

Linux developers are in the process of patching a high-severity vulnerability that, in certain cases, allows the installation of malware that runs at the firmware level, giving infections access to the deepest parts of a device where they’re hard to detect or remove.

The vulnerability resides in shim, which in the context of Linux is a small component that runs in the firmware early in the boot process before the operating system has started. More specifically, the shim accompanying virtually all Linux distributions plays a crucial role in secure boot, a protection built into most modern computing devices to ensure every link in the boot process comes from a verified, trusted supplier. Successful exploitation of the vulnerability allows attackers to neutralize this mechanism by executing malicious firmware at the earliest stages of the boot process before the Unified Extensible Firmware Interface firmware has loaded and handed off control to the operating system.
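To see the chain-of-trust idea in miniature, here’s a toy sketch. It substitutes hash comparisons for the cryptographic signature checks real firmware performs with platform keys; the point is the verify-before-handoff step that a compromised shim lets attackers skip:

```python
import hashlib

# Toy chain-of-trust: refuse to hand off control to a boot stage whose image
# doesn't match a trusted fingerprint. Real secure boot verifies signatures
# with platform keys rather than comparing hashes to an allow-list.
TRUSTED = {"bootloader": hashlib.sha256(b"known-good-image").hexdigest()}

def hand_off(stage: str, image: bytes) -> None:
    if hashlib.sha256(image).hexdigest() != TRUSTED.get(stage):
        raise RuntimeError(f"secure boot: unverified {stage}, refusing to run")
    print(f"{stage} verified, executing")

hand_off("bootloader", b"known-good-image")  # passes the check
try:
    hand_off("bootloader", b"tampered-image")  # refused
except RuntimeError as err:
    print(err)
```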

The vulnerability, tracked as CVE-2023-40547, is what’s known as a buffer overflow, a coding bug that allows attackers to execute code of their choice. It resides in a part of the shim that handles booting from a central server on a network using HTTP, the same protocol the web is built on. Attackers can exploit the code-execution vulnerability in various scenarios, virtually all following some form of successful compromise of either the targeted device or the server or network the device boots from.

“An attacker would need to be able to coerce a system into booting from HTTP if it’s not already doing so, and either be in a position to run the HTTP server in question or MITM traffic to it,” Matthew Garrett, a security developer and one of the original shim authors, wrote in an online interview. “An attacker (physically present or who has already compromised root on the system) could use this to subvert secure boot (add a new boot entry to a server they control, compromise shim, execute arbitrary code).”

Stated differently, these scenarios include:

  • Acquiring the ability to compromise a server or perform an adversary-in-the-middle impersonation of it to target a device that’s already configured to boot using HTTP
  • Already having physical access to a device or gaining administrative control by exploiting a separate vulnerability

While these hurdles are steep, they’re by no means impossible, particularly the ability to compromise or impersonate a server that communicates with devices over HTTP, which is unencrypted and requires no authentication. These particular scenarios could prove useful if an attacker has already gained some level of access inside a network and is looking to take control of connected end-user devices. These scenarios, however, are largely remedied if servers use HTTPS, the variant of HTTP that requires a server to authenticate itself. In that case, the attacker would first have to forge the digital certificate the server uses to prove it’s authorized to provide boot firmware to devices.
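That certificate requirement is the same one ordinary HTTPS clients enforce. As a small illustration of the mechanism (not boot firmware, and with a placeholder URL), Python’s requests library verifies the server’s certificate chain by default and fails loudly when an impersonator can’t present a valid one:

```python
import requests

# Certificate verification is on by default: an adversary-in-the-middle that
# can't present a valid cert for this hostname triggers an SSLError instead
# of silently serving its own payload. Plain HTTP offers no such check.
resp = requests.get("https://boot.example.internal/shim.efi", timeout=10)
resp.raise_for_status()
# requests.get(url, verify=False) would disable the check and reintroduce
# the impersonation risk described above.
```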

The ability to gain physical access to a device is also difficult and is widely regarded as grounds for considering it to be already compromised. And, of course, already obtaining administrative control through exploiting a separate vulnerability in the operating system is hard and allows attackers to achieve all kinds of malicious objectives.

As if two Ivanti vulnerabilities under exploit weren’t bad enough, now there are 3

CHAOS REIGNS —

Hackers looking to diversify began mass exploiting a new vulnerability over the weekend.

Mass exploitation began over the weekend for yet another critical vulnerability in widely used VPN software sold by Ivanti, as hackers already targeting two previous vulnerabilities diversified, researchers said Monday.

The new vulnerability, tracked as CVE-2024-21893, is what’s known as a server-side request forgery. Ivanti disclosed it on January 22, along with a separate vulnerability that so far has shown no signs of being exploited. Last Wednesday, nine days later, Ivanti said CVE-2024-21893 was under active exploitation, aggravating an already chaotic few weeks. All of the vulnerabilities affect Ivanti’s Connect Secure and Policy Secure VPN products.

A tarnished reputation and battered security professionals

The new vulnerability came to light as two other vulnerabilities were already under mass exploitation, mostly by a hacking group researchers have said is backed by the Chinese government. Ivanti provided mitigation guidance for the two vulnerabilities on January 11, and released a proper patch last week. The Cybersecurity and Infrastructure Security Agency, meanwhile, mandated all federal agencies under its authority disconnect Ivanti VPN products from the Internet until they are rebuilt from scratch and running the latest software version.

By Sunday, attacks targeting CVE-2024-21893 had mushroomed from hitting what Ivanti said was a “small number of customers” to a mass base of users, research from security organization Shadowserver showed. The steep line in the right-most part of the following graph tracks the vulnerability’s meteoric rise starting on Friday. At the time this Ars post went live, the exploitation volume of the vulnerability exceeded that of CVE-2023-46805 and CVE-2024-21887, the previous Ivanti vulnerabilities under active targeting.

[Graph: Shadowserver data tracking exploitation volume of the three Ivanti vulnerabilities over time]

Systems that had been inoculated against the two older vulnerabilities by following Ivanti’s mitigation process remained wide open to the newest vulnerability, a status that likely made it attractive to hackers. There’s something else that makes CVE-2024-21893 attractive to threat actors: because it resides in Ivanti’s implementation of the open-source Security Assertion Markup Language—which handles authentication and authorization between parties—people who exploit the bug can bypass normal authentication measures and gain access directly to the administrative controls of the underlying server.

Exploitation likely got a boost from proof-of-concept code released by security firm Rapid7 on Friday, but the exploit wasn’t the sole contributor. Shadowserver said it began seeing working exploits a few hours before the Rapid7 release. All of the different exploits work roughly the same way: authentication in Ivanti VPNs occurs through the doAuthCheck function in an HTTP web server binary located at /root/home/bin/web, but the endpoint /dana-ws/saml20.ws doesn’t require authentication, so exploits send their requests there to trigger the server-side request forgery. As this Ars post was going live, Shadowserver counted a little more than 22,000 exposed instances of Connect Secure and Policy Secure.
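For administrators wondering whether their own appliance exposes that endpoint, the check Shadowserver-style scanners perform amounts to a single unauthenticated request. A minimal sketch with a placeholder hostname; it tests exposure only, attempts no exploitation, and should be run only against systems you manage:

```python
import requests

host = "vpn.example.internal"  # placeholder; probe only hosts you administer
# Appliances commonly use self-signed certs, hence verify=False here.
resp = requests.get(f"https://{host}/dana-ws/saml20.ws",
                    timeout=10, verify=False)
# A normal response (rather than a connection failure) means the SAML
# endpoint answers without credentials from wherever you scanned.
print(resp.status_code)
```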

VPNs are an ideal target for hackers seeking access deep inside a network. The devices, which allow employees to log into work portals using an encrypted connection, sit at the very edge of the network, where they respond to requests from any device that knows the correct port configuration. Once attackers establish a beachhead on a VPN, they can often pivot to more sensitive parts of a network.

The three-week spree of non-stop exploitation has tarnished Ivanti’s reputation for security and battered security professionals as they have scrambled—often in vain—to stanch the flow of compromises. Compounding the problem was a slow patch time that missed Ivanti’s own January 24 deadline by a week. Making matters worse still: hackers figured out how to bypass the mitigation advice Ivanti provided for the first pair of vulnerabilities.

Given the false starts and high stakes, CISA’s Friday mandate of rebuilding all servers from scratch once they have installed the latest patch is prudent. The requirement doesn’t apply to non-government agencies, but given the chaos and difficulty securing the Ivanti VPNs in recent weeks, it’s a common-sense move that all users should have taken by now.

Microsoft in deal with Semafor to create news stories with aid of AI chatbot

a meeting-deadline helper —

Collaboration comes as tech giant faces multibillion-dollar lawsuit from The New York Times.

Cube with Microsoft logo on top of the company’s office building on 8th Avenue and 42nd Street near Times Square in New York City.

Microsoft is working with media startup Semafor to use its artificial intelligence chatbot to help develop news stories—part of a journalistic outreach that comes as the tech giant faces a multibillion-dollar lawsuit from the New York Times.

As part of the agreement, Microsoft is paying an undisclosed sum of money to Semafor to sponsor a breaking news feed called “Signals.” The companies would not share financial details, but the amount of money is “substantial” to Semafor’s business, said a person familiar with the matter.

Signals will offer a feed of breaking news and analysis on big stories, with about a dozen posts a day. The goal is to offer different points of view from across the globe—a key focus for Semafor since its launch in 2022.

Semafor co-founder Ben Smith emphasized that Signals will be written entirely by journalists, with artificial intelligence providing a research tool to inform posts.

Microsoft on Monday was also set to announce collaborations with journalist organizations including the Craig Newmark School of Journalism, the Online News Association, and the GroundTruth Project.

The partnerships come as media companies have become increasingly concerned over generative AI and its potential threat to their businesses. News publishers are grappling with how to use AI to improve their work and stay ahead of technology, while also fearing that they could lose traffic, and therefore revenue, to AI chatbots—which can churn out humanlike text and information in seconds.

The New York Times in December filed a lawsuit against Microsoft and OpenAI, alleging the tech companies have taken a “free ride” on millions of its articles to build their artificial intelligence chatbots, and seeking billions of dollars in damages.

Gina Chua, Semafor’s executive editor, has been involved in developing Semafor’s AI research tools, which are powered by ChatGPT and Microsoft’s Bing.

“Journalism has always used technology whether it’s carrier pigeons, the telegraph or anything else . . . this represents a real opportunity, a set of tools that are really a quantum leap above many of the other tools that have come along,” Chua said.

For a breaking news event, Semafor journalists will use AI tools to quickly search for reporting and commentary from other news sources across the globe in multiple languages. A Signals post might include perspectives from Chinese, Indian, or Russian media, for example, with Semafor’s reporters summarizing and contextualizing the different points of view, while citing its sources.
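Semafor hasn’t published its tooling, but the research step described here maps onto a single LLM call: hand over a foreign-language excerpt and ask for an attributed English summary. A rough sketch using OpenAI’s Python client, with an invented outlet name and placeholder excerpt:

```python
from openai import OpenAI  # pip install openai; needs OPENAI_API_KEY set

client = OpenAI()
excerpt = "这是一段来自外国媒体报道的示例摘录。"  # invented placeholder text
response = client.chat.completions.create(
    model="gpt-4o-mini",  # any chat model works; name is illustrative
    messages=[
        {"role": "system",
         "content": "Summarize the excerpt in two English sentences, "
                    "attributing the view to the named outlet."},
        {"role": "user", "content": f"Outlet: Example Daily\n\n{excerpt}"},
    ],
)
print(response.choices[0].message.content)
```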

Noreen Gillespie, a former Associated Press journalist, joined Microsoft three months ago to forge relationships with news companies. “Journalists need to adopt these tools in order to survive and thrive for another generation,” she said.

Semafor was founded by Ben Smith, the former BuzzFeed editor, and Justin Smith, the former chief executive of Bloomberg Media.

Semafor, which is free to read, is funded by wealthy individuals, including 3G Capital founder Jorge Paulo Lemann and KKR co-founder Henry Kravis. The company made more than $10 million in revenue in 2023 and has more than 500,000 subscriptions to its free newsletters. Justin Smith said Semafor was “very close to a profit” in the fourth quarter of 2023.

“What we’re trying to go after is this really weird space of breaking news on the Internet now, in which you have these really splintered, fragmented, rushed efforts to get the first sentence of a story out for search engines . . . and then never really make any effort to provide context,” Ben Smith said.

“We’re trying to go the other way. Here are the confirmed facts. Here are three or four pieces of really sophisticated, meaningful analysis.”

© 2024 The Financial Times Ltd. All rights reserved. Please do not copy and paste FT articles and redistribute by email or post to the web.
