
Microsoft in deal with Semafor to create news stories with aid of AI chatbot

a meeting-deadline helper —

Collaboration comes as tech giant faces multibillion-dollar lawsuit from The New York Times.

Cube with the Microsoft logo on top of the company's office building on 8th Avenue and 42nd Street near Times Square in New York City.

Microsoft is working with media startup Semafor to use its artificial intelligence chatbot to help develop news stories—part of a journalistic outreach that comes as the tech giant faces a multibillion-dollar lawsuit from the New York Times.

As part of the agreement, Microsoft is paying Semafor an undisclosed sum to sponsor a breaking news feed called “Signals.” The companies would not share financial details, but the amount is “substantial” to Semafor’s business, said a person familiar with the matter.

Signals will offer a feed of breaking news and analysis on big stories, with about a dozen posts a day. The goal is to offer different points of view from across the globe—a key focus for Semafor since its launch in 2022.

Semafor co-founder Ben Smith emphasized that Signals will be written entirely by journalists, with artificial intelligence providing a research tool to inform posts.

Microsoft on Monday was also set to announce collaborations with journalist organizations including the Craig Newmark School of Journalism, the Online News Association, and the GroundTruth Project.

The partnerships come as media companies have become increasingly concerned over generative AI and its potential threat to their businesses. News publishers are grappling with how to use AI to improve their work and stay ahead of technology, while also fearing that they could lose traffic, and therefore revenue, to AI chatbots—which can churn out humanlike text and information in seconds.

The New York Times in December filed a lawsuit against Microsoft and OpenAI, alleging the tech companies have taken a “free ride” on millions of its articles to build their artificial intelligence chatbots, and seeking billions of dollars in damages.

Gina Chua, Semafor’s executive editor, has been involved in developing Semafor’s AI research tools, which are powered by ChatGPT and Microsoft’s Bing.

“Journalism has always used technology whether it’s carrier pigeons, the telegraph or anything else . . . this represents a real opportunity, a set of tools that are really a quantum leap above many of the other tools that have come along,” Chua said.

For a breaking news event, Semafor journalists will use AI tools to quickly search for reporting and commentary from other news sources across the globe in multiple languages. A Signals post might include perspectives from Chinese, Indian, or Russian media, for example, with Semafor’s reporters summarizing and contextualizing the different points of view, while citing its sources.

Noreen Gillespie, a former Associated Press journalist, joined Microsoft three months ago to forge relationships with news companies. “Journalists need to adopt these tools in order to survive and thrive for another generation,” she said.

Semafor was founded by Ben Smith, the former BuzzFeed editor, and Justin Smith, the former chief executive of Bloomberg Media.

Semafor, which is free to read, is funded by wealthy individuals, including 3G Capital founder Jorge Paulo Lemann and KKR co-founder Henry Kravis. The company made more than $10 million in revenue in 2023 and has more than 500,000 subscriptions to its free newsletters. Justin Smith said Semafor was “very close to a profit” in the fourth quarter of 2023.

“What we’re trying to go after is this really weird space of breaking news on the Internet now, in which you have these really splintered, fragmented, rushed efforts to get the first sentence of a story out for search engines . . . and then never really make any effort to provide context,” Ben Smith said.

“We’re trying to go the other way. Here are the confirmed facts. Here are three or four pieces of really sophisticated, meaningful analysis.”

© 2024 The Financial Times Ltd. All rights reserved. Please do not copy and paste FT articles and redistribute by email or post to the web.


A startup allegedly “hacked the world.” Then came the censorship—and now the backlash.

hacker-for-hire —

Anti-censorship voices are working to highlight reports of one Indian company’s hacker past.

Hacker-for-hire firms like NSO Group and Hacking Team have become notorious for enabling their customers to spy on vulnerable members of civil society. But as far back as a decade ago in India, a startup called Appin Technology and its subsidiaries allegedly played a similar cyber-mercenary role while attracting far less attention. Over the past two years, a collection of people with direct and indirect links to that company have been working to keep it that way, using a campaign of legal threats to silence publishers and anyone else reporting on Appin Technology’s alleged hacking past. Now, a loose coalition of anti-censorship voices is working to make that strategy backfire.

For months, lawyers and executives with ties to Appin Technology and to a newer organization that shares part of its name, called the Association of Appin Training Centers, have used lawsuits and legal threats to carry out an aggressive censorship campaign across the globe. These efforts have demanded that more than a dozen publications amend or fully remove references to the original Appin Technology’s alleged illegal hacking or, in some cases, mentions of that company’s co-founder, Rajat Khare. Most prominently, a lawsuit against Reuters brought by the Association of Appin Training Centers resulted in a stunning order from a Delhi court: It demanded that Reuters take down its article based on a blockbuster investigation into Appin Technology that had detailed its alleged targeting and spying on opposition leaders, corporate competitors, lawyers, and wealthy individuals on behalf of customers worldwide. Reuters “temporarily” removed its article in compliance with that injunction and is fighting the order in Indian court.

As Appin Training Centers has sought to enforce that same order against a slew of other news outlets, however, resistance is building. Earlier this week, the digital rights group the Electronic Frontier Foundation (EFF) sent a response—published here—pushing back against Appin Training Centers’ legal threats on behalf of media organizations caught in this crossfire, including the tech blog Techdirt and the investigative news nonprofit MuckRock.

No media outlet has claimed that Appin Training Centers—a group that describes itself as an educational firm run in part by former franchisees of the original Appin Technology, which reportedly ceased its alleged hacking operations more than a decade ago—has been involved in any illegal hacking. In December, however, Appin Training Centers sent emails to Techdirt and MuckRock demanding they too take down all content related to allegations that Appin Technology previously engaged in widespread cyberspying operations, citing the court order against Reuters.

Techdirt, Appin Training Centers argued, fell under that injunction by writing about Reuters’ story and the takedown order targeting it. So had MuckRock, the plaintiffs claimed, which hosted some of the documents that Reuters had cited in its story and uploaded to MuckRock’s DocumentCloud service. In the response sent on their behalf, the EFF states that the two media organizations are refusing to comply, arguing that the Indian court’s injunction “is in no way the global takedown order your correspondence represents it to be.” It also cites an American law called the SPEECH Act that deems any foreign court’s libel ruling that violates the First Amendment unenforceable in the US.

“It’s not a good state for a free press when one company can, around the world, disappear news articles,” Michael Morisy, the CEO and co-founder of MuckRock, tells WIRED. “That’s something that fundamentally we need to push back against.”

Techdirt founder Mike Masnick says that, beyond defeating the censorship of the Appin Technology story, he hopes their public response to that censorship effort will ultimately bring even more attention to the group’s past. In fact, 19 years ago, Masnick coined the term “the Streisand effect” to describe a situation in which someone’s attempt to hide information results in its broader exposure—exactly the situation he hopes to help create in this case. “The suppression of accurate reporting is problematic,” says Masnick. “When it happens, it deserves to be called out, and there should be more attention paid to those trying to silence it.”

The anti-secrecy nonprofit Distributed Denial of Secrets (DDoSecrets) has also joined the effort to spark that Streisand Effect, “uncensoring” Reuters’ story on the original Appin Technology as part of a new initiative it calls the Greenhouse Project. DDoSecrets cofounder Emma Best says the name comes from its intention to foster a “warming effect”—the opposite of the “chilling effect” used to describe the self-censorship created by legal threats. “It sends a signal to would-be censors, telling them that their success may be fleeting and limited,” Best says. “And it assures other journalists that their work can survive.”

Neither Appin Training Centers nor Rajat Khare responded to WIRED’s request for comment, nor did Reuters.

The fight to expose the original Appin Technology’s alleged hacking history began to reach a head in November of 2022, when the Association of Appin Training Centers sued Reuters based only on its reporters’ unsolicited messages to Appin Training Centers’ employees and students. The company’s legal complaint, filed in India’s judicial system, accused Reuters not only of defamation, but of “mental harassment, stalking, sexual misconduct and trauma.”

Nearly a full year later, Reuters nonetheless published its article, “How an Indian Startup Hacked the World.” The judge in the case initially sided with Appin Training Centers, writing that the article could have a “devastating effect on the general students population of India.” He quickly issued an injunction under which Appin Training Centers could demand that Reuters take down its claims about Appin Technology.


Agencies using vulnerable Ivanti products have until Saturday to disconnect them

TOUGH MEDICINE —

Things were already bad with two critical zero-days. Then Ivanti disclosed a new one.

Photograph of a security scanner extracting a virus from a string of binary code.

Getty Images

Federal civilian agencies have until midnight Saturday morning to sever all network connections to Ivanti VPN software, which is currently under mass exploitation by multiple threat groups. The US Cybersecurity and Infrastructure Security Agency mandated the move on Wednesday, after Ivanti disclosed three critical vulnerabilities in recent weeks.

Three weeks ago, Ivanti disclosed two critical vulnerabilities that it said threat actors were already actively exploiting. The attacks, the company said, targeted “a limited number of customers” using the company’s Connect Secure and Policy Secure VPN products. Security firm Volexity said on the same day that the vulnerabilities had been under exploitation since early December. Ivanti didn’t have a patch available and instead advised customers to follow several steps to protect themselves against attacks. Among the steps was running an integrity checker the company released to detect any compromises.

Almost two weeks later, researchers said the zero-days were under mass exploitation in attacks that were backdooring customer networks around the globe. A day later, Ivanti failed to make good on an earlier pledge to begin rolling out a proper patch by January 24. The company didn’t start the process until Wednesday, a week after the deadline it set for itself.

And then, there were three

Ivanti disclosed two new critical vulnerabilities in Connect Secure on Wednesday, tracked as CVE-2024-21888 and CVE-2024-21893. The company said that CVE-2024-21893—an instance of a vulnerability class known as server-side request forgery—“appears to be targeted,” bringing the number of actively exploited vulnerabilities to three. German government officials said they had already seen successful exploits of the newest one. The officials also warned that exploits of the new vulnerabilities neutralized the mitigations Ivanti advised customers to implement.
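Server-side request forgery is easier to picture with a toy example. The Python sketch below is generic and invented for illustration, not Ivanti's code: a server endpoint that fetches whatever URL a client hands it can be aimed at internal services the client could never reach directly, and the crude blocklist shown is only a hint at mitigation.

# Toy SSRF illustration (invented, not Ivanti's code): the vulnerable helper
# fetches any URL it is given, so a client can point it at internal endpoints.
import urllib.request
from urllib.parse import urlparse

def fetch_preview(url: str) -> bytes:
    # Vulnerable: no validation, happily fetches http://127.0.0.1/admin etc.
    return urllib.request.urlopen(url).read()

# Crude mitigation sketch; real defenses must also handle redirects and
# DNS rebinding, which this denylist does not.
BLOCKED_HOSTS = {"localhost", "127.0.0.1", "169.254.169.254"}

def fetch_preview_checked(url: str) -> bytes:
    host = urlparse(url).hostname or ""
    if host in BLOCKED_HOSTS:
        raise ValueError(f"refusing internal destination: {host}")
    return urllib.request.urlopen(url).read()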

Hours later, the Cybersecurity and Infrastructure Security Agency—typically abbreviated as CISA—ordered all federal agencies under its authority to “disconnect all instances of Ivanti Connect Secure and Ivanti Policy Secure solution products from agency networks” no later than 11:59 pm on Friday. Agency officials set the same deadline for the agencies to complete the Ivanti-recommended steps, which are designed to detect if their Ivanti VPNs have already been compromised in the ongoing attacks.

The steps include:

  • Identifying any additional systems connected or recently connected to the affected Ivanti device
  • Monitoring the authentication or identity management services that could be exposed
  • Isolating the systems from any enterprise resources to the greatest degree possible
  • Continuing to audit privilege-level access accounts

The directive went on to say that before agencies can bring their Ivanti products back online, they must follow a long series of steps that include factory resetting their systems, rebuilding them following Ivanti’s previously issued instructions, and installing the Ivanti patches.

“Agencies running the affected products must assume domain accounts associated with the affected products have been compromised,” Wednesday’s directive said. Officials went on to mandate that by March 1, agencies must reset passwords “twice” for on-premises accounts, revoke Kerberos-enabled authentication tickets, and revoke tokens for cloud accounts in hybrid deployments.

Steven Adair, the president of Volexity, the security firm that discovered the initial two vulnerabilities, said its most recent scans indicate that at least 2,200 customers of the affected products have been compromised to date. He applauded CISA’s Wednesday directive.

“This is effectively the best way to alleviate any concern that a device might still be compromised,” Adair said in an email. “We saw that attackers were actively looking for ways to circumvent detection from the integrity checker tools. With the previous and new vulnerabilities, this course of action around a completely fresh and patched system might be the best way to go for organizations to not have to wonder if their device is actively compromised.”

The directive is binding only on agencies under CISA’s authority. Any user of the vulnerable products, however, should follow the same steps immediately if they haven’t already.


Chinese malware removed from SOHO routers after FBI issues covert commands

REBOOT OR, BETTER YET, REPLACE YOUR OLD ROUTERS! —

Routers were being used to conceal attacks on critical infrastructure.

A wireless router with an Ethernet cable hooked into it.

The US Justice Department said Wednesday that the FBI surreptitiously sent commands to hundreds of infected small office and home office routers to remove malware that Chinese state-sponsored hackers were using to wage attacks on critical infrastructure.

The routers—mainly Cisco and Netgear devices that had reached their end of life—were infected with what’s known as KV Botnet malware, Justice Department officials said. Chinese hackers from a group tracked as Volt Typhoon used the malware to wrangle the routers into a network they could control. Traffic passing between the hackers and the compromised devices was encrypted using a VPN module KV Botnet installed. From there, the campaign operators connected to the networks of US critical infrastructure organizations to establish posts that could be used in future cyberattacks. The arrangement caused traffic to appear to originate from US IP addresses with trustworthy reputations rather than suspicious regions in China.

Seizing infected devices

Before the takedown could be conducted legally, FBI agents had to receive authority—technically for what’s called a seizure of infected routers or “target devices”—from a federal judge. An initial affidavit seeking authority was filed in US federal court in Houston in December. Subsequent requests have been filed since then.

“To effect these seizures, the FBI will issue a command to each Target Device to stop it from running the KV Botnet VPN process,” an agency special agent wrote in an affidavit dated January 9. “This command will also stop the Target Device from operating as a VPN node, thereby preventing the hackers from further accessing Target Devices through any established VPN tunnel. This command will not affect the Target Device if the VPN process is not running, and will not otherwise affect the Target Device, including any legitimate VPN process installed by the owner of the Target Device.”

Wednesday’s Justice Department statement said authorities had followed through on the takedown, which disinfected “hundreds” of infected routers and removed them from the botnet. To prevent the devices from being reinfected, the takedown operators issued additional commands that the affidavit said would “interfere with the hackers’ control over the instrumentalities of their crimes (the Target Devices), including by preventing the hackers from easily re-infecting the Target Devices.”

The affidavit said elsewhere that the prevention measures would be neutralized if the routers were restarted. These devices would then be once again vulnerable to infection.

Redactions in the affidavit make the precise means used to prevent re-infections unclear. Portions that weren’t censored, however, indicated the technique involved a loop-back mechanism that prevented the devices from communicating with anyone trying to hack them.

Portions of the affidavit explained:

22. To effect these seizures, the FBI will simultaneously issue commands that will interfere with the hackers’ control over the instrumentalities of their crimes (the Target Devices), including by preventing the hackers from easily re-infecting the Target Devices with KV Botnet malware.

  a. When the FBI deletes the KV Botnet malware from the Target Devices [redacted]. To seize the Target Devices and interfere with the hackers’ control over them, the FBI [redacted]. This [redacted] will have no effect except to protect the Target Device from reinfection by the KV Botnet [redacted] The effect of [redacted] can be undone by restarting the Target Device [redacted] make the Target Device vulnerable to re-infection.
  b. [redacted] the FBI will seize each such Target Device by causing the malware on it to communicate with only itself. This method of seizure will interfere with the ability of the hackers to control these Target Devices. This communications loopback will, like the malware itself, not survive a restart of a Target Device.
  c. To seize Target Devices, the FBI will [redacted] block incoming traffic [redacted] used exclusively by the KV Botnet malware on Target Devices, to block outbound traffic to [redacted] the Target Devices’ parent and command-and-control nodes, and to allow a Target Device to communicate with itself [redacted] are not normally used by the router, and so the router’s legitimate functionality is not affected. The effect of [redacted] to prevent other parts of the botnet from contacting the victim router, undoing the FBI’s commands, and reconnecting it to the botnet. The effect of these commands is undone by restarting the Target Devices.

23. To effect these seizures, the FBI will issue a command to each Target Device to stop it from running the KV Botnet VPN process. This command will also stop the Target Device from operating as a VPN node, thereby preventing the hackers from further accessing Target Devices through any established VPN tunnel. This command will not affect the Target Device if the VPN process is not running, and will not otherwise affect the Target Device, including any legitimate VPN process installed by the owner of the Target Device.


ChatGPT’s new @-mentions bring multiple personalities into your AI convo

team of rivals —

Bring different AI roles into the same chatbot conversation history.

Illustration of a man juggling @ symbols. With so many choices, selecting the perfect GPT can be confusing.

On Tuesday, OpenAI announced a new feature in ChatGPT that allows users to pull custom personalities called “GPTs” into any ChatGPT conversation with the @ symbol. It allows a level of quasi-teamwork among expert roles that was previously impractical, bringing collaboration with a team of AI agents within OpenAI’s platform one step closer to reality.

“You can now bring GPTs into any conversation in ChatGPT – simply type @ and select the GPT,” wrote OpenAI on the social media network X. “This allows you to add relevant GPTs with the full context of the conversation.”

OpenAI introduced GPTs in November as a way to create custom personalities or roles for ChatGPT to play. For example, users can build their own GPTs to focus on certain topics or certain skills. Paid ChatGPT subscribers can also freely download a host of GPTs developed by other ChatGPT users through the GPT Store.

Previously, if you wanted to share information between GPT profiles, you had to copy the text, select a new chat with the GPT, paste it, and explain the context of what the information means or what you want to do with it. Now, ChatGPT users can stay in the default ChatGPT window and bring in GPTs as needed without losing the history of the conversation.

For example, we created a “Wellness Guide” GPT that is crafted as an expert in human health conditions (of course, this being ChatGPT, always consult a human doctor if you’re having medical problems), and we created a “Canine Health Advisor” for dog-related health questions.

A screenshot of ChatGPT where we @-mentioned a human wellness advisor, then a dog advisor in the same conversation history.

Benj Edwards

We started in a default ChatGPT chat, hit the @ symbol, then typed the first few letters of “Wellness” and selected it from a list. It filled out the rest. We asked a question about food poisoning in humans, and then we switched to the canine advisor in the same way with an @ symbol and asked about the dog.

Using this feature, you could alternatively consult, say, an “ad copywriter” GPT and an “editor” GPT—ask the copywriter to write some text, then rope in the editor GPT to check it, looking at it from a different angle. Different system prompts (the instructions that define a GPT’s personality) make for significant behavior differences.
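To make the system-prompt idea concrete, here is a minimal sketch using OpenAI's chat completions API. The prompts, model choice, and helper name are invented for illustration; GPTs built in the GPT Store layer extra instructions, files, and tools on top of this basic mechanism.

# Minimal sketch: same model, two invented system prompts, two behaviors.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(system_prompt: str, question: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content

draft = ask("You are an ad copywriter. Write punchy, short copy.",
            "Pitch a mechanical keyboard.")
print(ask("You are a strict copy editor. Flag weak claims and cliches.", draft))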

We also tried swapping between GPT profiles that write software and others designed to consult on historical tech subjects. Interestingly, ChatGPT does not differentiate between GPTs as distinct personalities as you switch between them. It will still say, “I did this earlier” when a different GPT is talking about a previous GPT’s output in the same conversation history. From its point of view, it’s just ChatGPT and not multiple agents.

From our vantage point, this feature seems to represent baby steps toward a future where GPTs, as independent agents, could work together as a team to fulfill more complex tasks directed by the user. Similar experiments have been done outside of OpenAI in the past (using API access), but OpenAI has so far resisted a more agentic model for ChatGPT. As we’ve seen (first with GPTs and now with this), OpenAI seems to be slowly angling toward that goal itself, but only time will tell if or when we see true agentic teamwork in a shipping service.


Ars Technica used in malware campaign with never-before-seen obfuscation

WHEN USERS ATTACK —

Vimeo also used by legitimate user who posted booby-trapped content.


Getty Images

Ars Technica was recently used to serve second-stage malware in a campaign that used a never-before-seen attack chain to cleverly cover its tracks, researchers from security firm Mandiant reported Tuesday.

A benign image of a pizza was uploaded to a third-party website and was then linked with a URL pasted into the “about” page of a registered Ars user. Buried in that URL was a string of characters that appeared to be random—but were actually a payload. The campaign also targeted the video-sharing site Vimeo, where a benign video was uploaded and a malicious string was included in the video description. The string was generated using a technique known as Base 64 encoding. Base 64 converts text into a printable ASCII string format to represent binary data. Devices already infected with the first-stage malware used in the campaign automatically retrieved these strings and installed the second stage.
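A short Python example shows why Base 64 suits this kind of stashing: arbitrary bytes become innocuous printable text that can ride along in a URL or a video description, and an infected machine can reverse the encoding losslessly. The payload bytes below are meaningless stand-ins.

# Base 64 round trip: binary data in, printable ASCII out, and back again.
import base64

payload = b"\x90\x90\xeb\xfe"                  # stand-in bytes, not real malware
encoded = base64.b64encode(payload).decode()   # 'kJDr/g==', easy to hide in a URL
print(encoded)
print(base64.b64decode(encoded))               # original bytes recovered intact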

Not typically seen

“This is a different and novel way we’re seeing abuse that can be pretty hard to detect,” Mandiant researcher Yash Gupta said in an interview. “This is something in malware we have not typically seen. It’s pretty interesting for us and something we wanted to call out.”

The image posted on Ars appeared in the about profile of a user who created an account on November 23. An Ars representative said the photo, showing a pizza and captioned “I love pizza,” was removed by Ars staff on December 16 after being tipped off by email from an unknown party. The Ars profile used an embedded URL that pointed to the image, which was automatically populated into the about page. The malicious base 64 encoding appeared immediately following the legitimate part of the URL. The string didn’t generate any errors or prevent the page from loading.

Pizza image posted by user.

Malicious string in URL.

Mandiant researchers said there were no consequences for people who may have viewed the image, either as displayed on the Ars page or on the website that hosted it. It’s also not clear that any Ars users visited the about page.

Devices that were infected by the first stage automatically accessed the malicious string at the end of the URL. From there, they were infected with a second stage.

The video on Vimeo worked similarly, except that the string was included in the video description.

Ars representatives had nothing further to add. Vimeo representatives didn’t immediately respond to an email.

The campaign came from a threat actor Mandiant tracks as UNC4990, which has been active since at least 2020 and bears the hallmarks of being motivated by financial gain. The group has already used a separate novel technique to fly under the radar. That technique spread the second stage using a text file that browsers and normal text editors showed to be blank.

Opening the same file in a hex editor—a tool for analyzing and forensically investigating binary files—showed that a combination of tabs, spaces, and new lines were arranged in a way that encoded executable code. Like the technique involving Ars and Vimeo, the use of such a file is something the Mandiant researchers had never seen before. Previously, UNC4990 used GitHub and GitLab.

The initial stage of the malware was transmitted by infected USB drives. The drives installed a payload Mandiant has dubbed explorerps1. Infected devices then automatically reached out either to the malicious text file or to the URL posted on Ars or the video posted to Vimeo. The base 64 strings in the image URL or video description, in turn, caused the malware to contact a site hosting the second stage. The second stage of the malware, tracked as Emptyspace, continuously polled a command-and-control server that, when instructed, would download and execute a third stage.

Mandiant

Mandiant has observed the installation of this third stage in only one case. This malware acts as a backdoor the researchers track as Quietboard. The backdoor, in that case, went on to install a cryptocurrency miner.

Anyone who is concerned they may have been infected by any of the malware covered by Mandiant can check the indicators of compromise section in Tuesday’s post.


Rhyming AI-powered clock sometimes lies about the time, makes up words

Confabulation time —

Poem/1 Kickstarter seeks $103K for fun ChatGPT-fed clock that may hallucinate the time.

A CAD render of the Poem/1 sitting on a bookshelf.

On Tuesday, product developer Matt Webb launched a Kickstarter funding project for a whimsical e-paper clock called the “Poem/1” that tells the current time using AI and rhyming poetry. It’s powered by the ChatGPT API, and Webb says that sometimes ChatGPT will lie about the time or make up words to make the rhymes work.

“Hey so I made a clock. It tells the time with a brand new poem every minute, composed by ChatGPT. It’s sometimes profound, and sometimes weird, and occasionally it fibs about what the actual time is to make a rhyme work,” Webb writes on his Kickstarter page.

The $126 clock is the product of Webb’s company, Acts Not Facts. Despite the net-connected service aspect of the clock, Webb says it will not require a subscription to function.

A labeled CAD rendering of the Poem/1 clock, representing its final shipping configuration.

There are 1,440 minutes in a day, so Poem/1 needs to display 1,440 unique poems to work. The clock features a monochrome e-paper screen and pulls its poetry rhymes via Wi-Fi from a central server run by Webb’s company. To save money, that server pulls poems from ChatGPT’s API and will share them out to many Poem/1 clocks at once. This prevents costly API fees that would add up if your clock were querying OpenAI’s servers 1,440 times a day, non-stop, forever. “I’m reserving a % of the retail price from each clock in a bank account to cover AI and server costs for 5 years,” Webb writes.
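The economics Webb describes imply a simple fan-out design: ask the ChatGPT API for one poem per minute and serve that same poem to every clock that polls. Webb has not published his implementation, so the sketch below is only a guess at its shape, with invented names throughout.

# Hypothetical poem relay: at most one upstream API call per minute, shared
# by every clock that asks. Not Webb's actual code.
import time
from openai import OpenAI

client = OpenAI()
_cache = {"minute": None, "poem": ""}

def poem_for_now() -> str:
    minute = time.strftime("%H:%M")
    if _cache["minute"] != minute:  # first request this minute pays the API cost
        resp = client.chat.completions.create(
            model="gpt-4",
            messages=[{
                "role": "user",
                "content": f"Write a two-line rhyming poem that tells the time {minute}.",
            }],
        )
        _cache.update(minute=minute, poem=resp.choices[0].message.content)
    return _cache["poem"]  # every other clock gets the cached poem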

For hackers, Webb says that you’ll be able to change the back-end server URL of the Poem/1 from the default to whatever you want, so it can display custom text every minute of the day. Webb says he will document and publish the API when Poem/1 ships.

Hallucination time

A photo of a Poem/1 prototype with a hallucinated time, according to Webb.

Given the Poem/1’s large language model pedigree, it’s perhaps not surprising that Poem/1 may sometimes make up things (also called “hallucination” or “confabulation” in the AI field) to fulfill its task. The LLM that powers ChatGPT is always searching for the most likely next word in a sequence, and sometimes factuality comes second to fulfilling that mission.
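A toy example captures the failure mode: when continuations are ranked by likelihood alone, a line that rhymes can outscore a line that is true. The candidate lines and probabilities below are invented for illustration.

# Invented scores: the rhyming completion beats the truthful one.
candidates = {
    "As the clock strikes eleven forty two": 0.61,     # sets up the rhyme
    "As the clock strikes eleven thirty eight": 0.07,  # the actual time, no rhyme
}
print(max(candidates, key=candidates.get))  # the rhyme wins, and the clock fibs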

Further down on the Kickstarter page, Webb provides a photo of his prototype Poem/1 where the screen reads, “As the clock strikes eleven forty two, / I rhyme the time, as I always do.” Just below, Webb warns, “Poem/1 fibs occasionally. I don’t believe it was actually 11.42 when this photo was taken. The AI hallucinated the time in order to make the poem work. What we do for art…”

In other clocks, the tendency to unreliably tell the time might be a fatal flaw. But judging by his humorous angle on the Kickstarter page, Webb apparently sees the clock as more of a fun art project than a precision timekeeping instrument. “Don’t rely on this clock in situations where timekeeping is vital,” Webb writes, “such as if you work in air traffic control or rocket launches or the finish line of athletics competitions.”

Poem/1 also sometimes takes poetic license with vocabulary to tell the time. During a humorous moment in the Kickstarter promotional video, Webb looks at his clock prototype and reads the rhyme, “A clock that defies all rhyme and reason / 4:30 PM, a temporal teason.” Then he says, “I had to look ‘teason’ up. It doesn’t mean anything, so it’s a made-up word.”


Raspberry Pi is planning a London IPO, but its CEO expects “no change” in focus

Just enough RAM to move markets —

Eben Upton says hobbyists remain “incredibly important” while he’s involved.

Updated

Raspberry Pi 5 with Active Cooler installed on a wood desktop. Is it not a strange fate that we should suffer so much fear and doubt for so small a thing? So small a thing!

Andrew Cunningham

The business arm of Raspberry Pi is preparing to make an initial public offering (IPO) in London. CEO Eben Upton tells Ars that should the IPO happen, it will let Raspberry Pi’s not-for-profit side expand by “at least a factor of 2X.” And while it’s “an understandable thing” that Raspberry Pi enthusiasts could be concerned, “while I’m involved in running the thing, I don’t expect people to see any change in how we do things.”

Upton confirmed in an interview with Bloomberg News that Raspberry Pi had appointed bankers at London firms Peel Hunt and Jefferies to prepare for “when the IPO market reopens.”

Raspberry Pi previously raised money from Sony and the semiconductor and software design firm ARM, and it has sought public investment before. Upton denied or didn’t quite deny IPO rumors in 2021, and Bloomberg reported Raspberry Pi was considering an IPO in early 2022. After ARM took a minority stake in the company in November 2023, Raspberry Pi was valued at roughly 400 million pounds, or just over $500 million.

Given the company’s gradual recovery from pandemic supply chain shortages, and the success of the Raspberry Pi 5 launch, the company’s IPO valuation will likely jump above that level, even with a listing in the UK rather than the more typical US IPO. Upton told The Register that “the business is in a much better place than it was last time we looked at it [an IPO]. We partly stopped because the markets got bad. And we partly stopped because our business became unpredictable.”

News of the potential transformation of Raspberry Pi Ltd from the private arm of the education-minded Raspberry Pi Foundation into a publicly traded company, obligated to generate profits for shareholders, reverberated about the way you’d expect on Reddit, Hacker News, and elsewhere. Many pointed with concern to the company’s decision to prioritize small business customers requiring Pi boards for their businesses as a portent of what investors might prioritize. Many expressed confusion over the commercial entity’s relationship to the foundation and what an IPO meant for that arrangement.

Seeing comments after the Bloomberg story, Upton said he understood concerns about a potential shift in mission or a change in the pricing structure. “It’s a good thing, in that people care about us,” Upton said in a phone interview. But he noted that Raspberry Pi’s business arm has had both strategic and private investors in its history, along with a majority shareholder in its Foundation (which in 2016 owned 75 percent of shares), and that he doesn’t see changes to what Pi has built.

“What Raspberry Pi [builds] are the products we want to buy, and then we sell them to people like us,” Upton said. “Certainly, while I’m involved in it, I can’t imagine an environment in which the hobbyists are not going to be incredibly important.”

The IPO is “about the foundation,” Upton said, with that charitable arm selling some of its majority stake in the business entity to raise funds and expand. (“We’ve not cooked up some new way for a not-for-profit to do an IPO, no,” he noted.) The foundation was previously funded by dividends from the business side, Upton said. “We do this transaction, and the proceeds of that transaction allow the foundation to train teachers, run clubs, expand programs, and… do those things at, at least, a factor of 2X. That’s what I’m most excited about.”

Asked about concerns that Raspberry Pi could focus its attention on higher-volume customers after public investors are involved, Upton said there would be “no change” to the kinds of products Pi makes, and that makers are “culturally important to us.” Upton noted that Raspberry Pi, apart from a single retail store, doesn’t sell Pis directly but through resellers. Margin structures at Raspberry Pi have “stayed the same all the way through,” Upton said and should remain so after the IPO.

Raspberry Pi’s lower-cost products, like the Zero 2 W and Pico, are fulfilling the educational and tinkering missions of the project, now at far better capability and lower price points than the original Pi products, Upton said. “If people think that an IPO means we’re going to … push prices up, push the margins up, push down the feature sets, the only answer we can give is, watch us. Keep watching,” he said. “Let’s look at it in 15, 20 years’ time.”

This post was updated at 2:30 pm ET on January 30 to include an Ars interview with Raspberry Pi CEO Eben Upton.


ChatGPT is leaking passwords from private conversations of its users, Ars reader says

OPENAI SPRINGS A LEAK —

Names of unpublished research papers, presentations, and PHP scripts also leaked.

OpenAI logo displayed on a phone screen and ChatGPT website displayed on a laptop screen.

Getty Images

ChatGPT is leaking private conversations that include login credentials and other personal details of unrelated users, screenshots submitted by an Ars reader on Monday indicated.

Two of the seven screenshots the reader submitted stood out in particular. Both contained multiple pairs of usernames and passwords that appeared to be connected to a support system used by employees of a pharmacy prescription drug portal. An employee using the AI chatbot seemed to be troubleshooting problems they encountered while using the portal.

“Horrible, horrible, horrible”

“THIS is so f-ing insane, horrible, horrible, horrible, i cannot believe how poorly this was built in the first place, and the obstruction that is being put in front of me that prevents it from getting better,” the user wrote. “I would fire [redacted name of software] just for this absurdity if it was my choice. This is wrong.”

Besides the candid language and the credentials, the leaked conversation includes the name of the app the employee is troubleshooting and the store number where the problem occurred.

The entire conversation goes well beyond what’s shown in the redacted screenshot above. A link Ars reader Chase Whiteside included showed the chat conversation in its entirety. The URL disclosed additional credential pairs.

The results appeared Monday morning shortly after reader Whiteside had used ChatGPT for an unrelated query.

“I went to make a query (in this case, help coming up with clever names for colors in a palette) and when I returned to access moments later, I noticed the additional conversations,” Whiteside wrote in an email. “They weren’t there when I used ChatGPT just last night (I’m a pretty heavy user). No queries were made—they just appeared in my history, and most certainly aren’t from me (and I don’t think they’re from the same user either).”

Other conversations leaked to Whiteside include the name of a presentation someone was working on, details of an unpublished research proposal, and a script using the PHP programming language. The users for each leaked conversation appeared to be different and unrelated to each other. The conversation involving the prescription portal included the year 2020. Dates didn’t appear in the other conversations.

The episode, and others like it, underscore the wisdom of stripping out personal details from queries made to ChatGPT and other AI services whenever possible. Last March, ChatGPT maker OpenAI took the AI chatbot offline after a bug caused the site to show titles from one active user’s chat history to unrelated users.

In November, researchers published a paper reporting how they used queries to prompt ChatGPT into divulging email addresses, phone and fax numbers, physical addresses, and other private data that was included in material used to train the ChatGPT large language model.

Concerned about the possibility of proprietary or private data leakage, companies, including Apple, have restricted their employees’ use of ChatGPT and similar sites.

As mentioned in an article from December, when multiple people found that Ubiquiti’s UniFi devices broadcast private video belonging to unrelated users, these sorts of experiences are as old as the Internet. As explained in the article:

The precise root causes of this type of system error vary from incident to incident, but they often involve “middlebox” devices, which sit between the front- and back-end devices. To improve performance, middleboxes cache certain data, including the credentials of users who have recently logged in. When mismatches occur, credentials for one account can be mapped to a different account.
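A toy version of that failure mode, with invented names and no relation to OpenAI's actual architecture, shows how a cache keyed on the wrong identifier can hand one user another user's session:

# Buggy middlebox sketch: caching a session by client IP instead of by session
# token means two users behind the same IP collide.
session_cache = {}

def handle_request(client_ip, user, credentials=None):
    if credentials is not None:
        session_cache[client_ip] = user    # bug: key should be per-session
    return session_cache.get(client_ip)

handle_request("203.0.113.7", "alice", credentials="alice-password")
print(handle_request("203.0.113.7", "bob"))  # prints 'alice': Bob sees Alice's account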

An OpenAI representative said the company was investigating the report.


OpenAI and Common Sense Media partner to protect teens from AI harms and misuse

Adventures in chatbusting —

Site gave ChatGPT 3 stars and 48% privacy score: “Best used for creativity, not facts.”

Boy in Living Room Wearing Robot Mask

On Monday, OpenAI announced a partnership with the nonprofit Common Sense Media to create AI guidelines and educational materials targeted at parents, educators, and teens. It includes the curation of family-friendly GPTs in OpenAI’s GPT store. The collaboration aims to address concerns about the impacts of AI on children and teenagers.

Known for its reviews of films and TV shows aimed at parents seeking appropriate media for their kids to watch, Common Sense Media recently branched out into AI and has been reviewing AI assistants on its site.

“AI isn’t going anywhere, so it’s important that we help kids understand how to use it responsibly,” Common Sense Media wrote on X. “That’s why we’ve partnered with @OpenAI to help teens and families safely harness the potential of AI.”

OpenAI CEO Sam Altman and Common Sense Media CEO James Steyer announced the partnership onstage in San Francisco at the Common Sense Summit for America’s Kids and Families, an event that was well-covered by media members on the social media site X.

For his part, Altman offered a canned statement in the press release, saying, “AI offers incredible benefits for families and teens, and our partnership with Common Sense will further strengthen our safety work, ensuring that families and teens can use our tools with confidence.”

The announcement feels slightly non-specific in the official news release, with Steyer offering, “Our guides and curation will be designed to educate families and educators about safe, responsible use of ChatGPT, so that we can collectively avoid any unintended consequences of this emerging technology.”

The partnership seems aimed mostly at bringing a patina of family-friendliness to OpenAI’s GPT store, with the most solid reveal being the aforementioned fact that Common Sense Media will help with the “curation of family-friendly GPTs in the GPT Store based on Common Sense ratings and standards.”

Common Sense AI reviews

As mentioned above, Common Sense Media began reviewing AI assistants on its site late last year. This puts Common Sense Media in an interesting position with potential conflicts of interest regarding the new partnership with OpenAI. However, it doesn’t seem to be offering any favoritism to OpenAI so far.

For example, Common Sense Media’s review of ChatGPT calls the AI assistant “A powerful, at times risky chatbot for people 13+ that is best used for creativity, not facts.” It labels ChatGPT as being suitable for ages 13 and up (which is consistent with OpenAI’s Terms of Service) and gives the OpenAI assistant three out of five stars. ChatGPT also scores a 48 percent privacy rating (which is oddly shown as 55 percent on another page that goes into privacy details). The review we cited was last updated on October 13, 2023, as of this writing.

For reference, Google Bard gets a three-star overall rating and a 75 percent privacy rating in its Common Sense Media review. Stable Diffusion, the image synthesis model, nets a one-star rating with the description, “Powerful image generator can unleash creativity, but is wildly unsafe and perpetuates harm.” OpenAI’s DALL-E gets two stars and a 48 percent privacy rating.

The information that Common Sense Media includes about each AI model appears relatively accurate and detailed (and the organization cited an Ars Technica article as a reference in one explanation), so they feel fair, even in the face of the OpenAI partnership. Given the low scores, it seems that most AI models aren’t off to a great start, but that may change. It’s still early days in generative AI.


The life and times of Cozy Bear, the Russian hackers who just hit Microsoft and HPE

FROM RUSSIA WITH ROOT —

Hacks by Kremlin-backed group continue to hit hard.


Getty Images

Hewlett Packard Enterprise (HPE) said Wednesday that Kremlin-backed actors hacked into the email accounts of its security personnel and other employees last May—and maintained surreptitious access until December. The disclosure was the second revelation of a major corporate network breach by the hacking group in five days.

The hacking group that hit HPE is the same one that Microsoft said Friday broke into its corporate network in November and monitored email accounts of senior executives and security team members until being driven out earlier this month. Microsoft tracks the group as Midnight Blizzard. (Under the company’s recently retired threat actor naming convention, which was based on chemical elements, the group was known as Nobelium.) But it is perhaps better known by the name Cozy Bear—though researchers have also dubbed it APT29, the Dukes, Cloaked Ursa, and Dark Halo.

“On December 12, 2023, Hewlett Packard Enterprise was notified that a suspected nation-state actor, believed to be the threat actor Midnight Blizzard, the state-sponsored actor also known as Cozy Bear, had gained unauthorized access to HPE’s cloud-based email environment,” company lawyers wrote in a filing with the Securities and Exchange Commission. “The Company, with assistance from external cybersecurity experts, immediately activated our response process to investigate, contain, and remediate the incident, eradicating the activity. Based on our investigation, we now believe that the threat actor accessed and exfiltrated data beginning in May 2023 from a small percentage of HPE mailboxes belonging to individuals in our cybersecurity, go-to-market, business segments, and other functions.”

An HPE representative said in an email that Cozy Bear’s initial entry into the network was through “a compromised, internal HPE Office 365 email account [that] was leveraged to gain access.” The representative declined to elaborate. The representative also declined to say how HPE discovered the breach.

Cozy Bear hacking its way into the email systems of two of the world’s most powerful companies and monitoring top employees’ accounts for months aren’t the only similarities between the two events. Both breaches also involved compromising a single device on each corporate network, then escalating that toehold to the network itself. From there, Cozy Bear camped out undetected for months. The HPE intrusion was all the more impressive because Wednesday’s disclosure said that the hackers also gained access to SharePoint servers in May. Even after HPE detected and contained that breach a month later, it would take HPE another six months to discover the compromised email accounts.

The pair of disclosures, coming within five days of each other, may create the impression that there has been a recent flurry of hacking activity. But Cozy Bear has actually been one of the most active nation-state groups since at least 2010. In the intervening 14 years, it has waged an almost constant series of attacks, mostly on the networks of governmental organizations and the technology companies that supply them. Multiple intelligence services and private research companies have identified the hacking group as an arm of Russia’s Foreign Intelligence Service, also known as the SVR.

The life and times of Cozy Bear (so far)

In its earliest years, Cozy Bear operated in relative obscurity—precisely the domain it prefers—as it hacked mostly Western governmental agencies and related organizations such as political think tanks and governmental subcontractors. In 2013, researchers from security firm Kaspersky unearthed MiniDuke, a sophisticated piece of malware that had taken hold of 60 government agencies, think tanks, and other high-profile organizations in 23 countries, including the US, Hungary, Ukraine, Belgium, and Portugal.

MiniDuke was notable for its odd combination of advanced programming and the gratuitous references to literature found embedded into its code. (It contained strings that alluded to Dante Alighieri’s Divine Comedy and to 666, the Mark of the Beast discussed in a verse from the Book of Revelation.) Written in assembly, employing multiple levels of encryption, and relying on hijacked Twitter accounts and automated Google searches to maintain stealthy communications with command-and-control servers, MiniDuke was among the most advanced pieces of malware found at the time.

It wasn’t immediately clear who was behind the mysterious malware—another testament to the stealth of its creators. In 2015, however, researchers linked MiniDuke—and seven other pieces of previously unidentified malware—to Cozy Bear. After a half-decade of lurking, the shadowy group was suddenly brought into the light of day.

Cozy Bear once again came to prominence the following year when researchers discovered the group (along with Fancy Bear, a separate Russian-state hacking group) inside the servers of the Democratic National Committee, looking for intelligence such as opposition research into Donald Trump, the Republican nominee for president at the time. The hacking group resurfaced in the days following Trump’s election victory that year with a major spear-phishing blitz that targeted dozens of organizations in government, military, defense contracting, media, and other industries.

One of Cozy Bear’s crowning achievements came in late 2020 with the discovery of an extensive supply chain attack that targeted customers of SolarWinds, the Austin, Texas, maker of network management tools. After compromising SolarWinds’ software build system, the hacking group pushed infected updates to roughly 18,000 customers. The hackers then used the updates to compromise nine federal agencies and about 100 private companies, White House officials have said.

Cozy Bear has remained active, with multiple campaigns coming to light in 2021, including one that used zero-day vulnerabilities to infect fully updated iPhones. Last year, the group devoted much of its time to hacks of Ukraine.


In major gaffe, hacked Microsoft test account was assigned admin privileges


The hackers who recently broke into Microsoft’s network and monitored top executives’ email for two months did so by gaining access to an aging test account with administrative privileges, a major gaffe on the company’s part, a researcher said.

The new detail was provided in vaguely worded language included in a post Microsoft published on Thursday. It expanded on a disclosure Microsoft published late last Friday. Russian state hackers, Microsoft said, used a technique known as password spraying to exploit a weak credential for logging into a “legacy non-production test tenant account” that wasn’t protected by multifactor authentication. From there, they somehow acquired the ability to access email accounts that belonged to senior executives and employees working in security and legal teams.
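Password spraying works by trying a handful of common passwords across many accounts, staying under any single account's lockout threshold. One simple detection heuristic, sketched below with an invented log format and an arbitrary threshold, flags a source that fails logins against many distinct usernames.

# Toy spray detector: many distinct usernames failing from one source.
from collections import defaultdict

def flag_spray_sources(events, threshold=10):
    # events: iterable of (source_ip, username, succeeded) tuples
    failures = defaultdict(set)
    for ip, user, succeeded in events:
        if not succeeded:
            failures[ip].add(user)
    return [ip for ip, users in failures.items() if len(users) >= threshold]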

A “pretty big config error”

In Thursday’s post updating customers on findings from its ongoing investigation, Microsoft provided more details on how the hackers achieved this monumental escalation of access. The hackers, part of a group Microsoft tracks as Midnight Blizzard, gained persistent access to the privileged email accounts by abusing the OAuth authorization protocol, which is used industry-wide to allow an array of apps to access resources on a network. After compromising the test tenant, Midnight Blizzard used it to create a malicious app and assign it rights to access every email address on Microsoft’s Office 365 email service.

In Thursday’s update, Microsoft officials said as much, although in language that largely obscured the extent of the major blunder. They wrote:

Threat actors like Midnight Blizzard compromise user accounts to create, modify, and grant high permissions to OAuth applications that they can misuse to hide malicious activity. The misuse of OAuth also enables threat actors to maintain access to applications, even if they lose access to the initially compromised account. Midnight Blizzard leveraged their initial access to identify and compromise a legacy test OAuth application that had elevated access to the Microsoft corporate environment. The actor created additional malicious OAuth applications. They created a new user account to grant consent in the Microsoft corporate environment to the actor controlled malicious OAuth applications. The threat actor then used the legacy test OAuth application to grant them the Office 365 Exchange Online full_access_as_app role, which allows access to mailboxes. [Emphasis added.]

Kevin Beaumont—a researcher and security professional with decades of experience, including a stint working for Microsoft—pointed out on Mastodon that the only way for an account to assign the all-powerful full_access_as_app role to an OAuth app is for the account to have administrator privileges. “Somebody,” he said, “made a pretty big config error in production.”
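Beaumont's observation points at an audit defenders can run themselves. The sketch below is one way to look for principals granted Exchange Online's full_access_as_app role via Microsoft Graph; it is our illustration rather than Microsoft's guidance, it assumes a token with Application.Read.All permission, it skips result paging, and the appId shown is the well-known Office 365 Exchange Online identifier.

# Hedged sketch: list who holds full_access_as_app on Exchange Online.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
HEADERS = {"Authorization": "Bearer <token>"}  # placeholder access token

# 1. Find the Exchange Online service principal by its well-known appId.
sp = requests.get(
    f"{GRAPH}/servicePrincipals",
    params={"$filter": "appId eq '00000002-0000-0ff1-ce00-000000000000'"},
    headers=HEADERS,
).json()["value"][0]

# 2. Find the app role whose value is full_access_as_app.
role_id = next(r["id"] for r in sp["appRoles"] if r["value"] == "full_access_as_app")

# 3. Flag every principal assigned that role (paging omitted for brevity).
grants = requests.get(
    f"{GRAPH}/servicePrincipals/{sp['id']}/appRoleAssignedTo",
    headers=HEADERS,
).json()["value"]
for grant in grants:
    if grant["appRoleId"] == role_id:
        print("full_access_as_app granted to:", grant["principalDisplayName"])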
