Biz & IT

$40 billion worth of crypto crime enabled by stablecoins since 2022

Illustration of cryptocurrency breaking through a brick wall.

Anjali Nair; Getty Images

Stablecoins, cryptocurrencies pegged to a stable value like the US dollar, were created with the promise of bringing the frictionless, border-crossing fluidity of bitcoin to a form of digital money with far less volatility. That combination has proved to be wildly popular, rocketing the total value of stablecoin transactions since 2022 past even that of Bitcoin itself.

It turns out, however, that as stablecoins have become popular among legitimate users over the past two years, they have been even more popular among a different kind of user: those exploiting them for billions of dollars of international sanctions evasion and scams.

As part of its annual crime report, cryptocurrency-tracing firm Chainalysis today released new numbers on the disproportionate use of stablecoins for both of those massive categories of illicit crypto transactions over the last year. By analyzing blockchains, Chainalysis determined that stablecoins were used in fully 70 percent of crypto scam transactions in 2023, 83 percent of crypto payments to sanctioned countries like Iran and Russia, and 84 percent of crypto payments to specifically sanctioned individuals and companies. Those numbers far outstrip stablecoins’ growing overall use—including for legitimate purposes—which accounted for 59 percent of all cryptocurrency transaction volume in 2023.

In total, Chainalysis measured $40 billion in illicit stablecoin transactions in 2022 and 2023 combined. The largest single category of that stablecoin-enabled crime was sanctions evasion. In fact, across all cryptocurrencies, sanctions evasion accounted for more than half of the $24.2 billion in criminal transactions Chainalysis observed in 2023, with stablecoins representing the vast majority of those transactions.

The attraction of stablecoins for both sanctioned people and countries, argues Andrew Fierman, Chainalysis’ head of sanctions strategy, is that they allow targets of sanctions to circumvent any attempt to deny them access to a stable currency like the US dollar. “Whether it’s an individual located in Iran or a bad guy trying to launder money—either way, there’s a benefit to the stability of the US dollar that people are looking to obtain,” Fierman says. “If you’re in a jurisdiction where you don’t have access to the US dollar due to sanctions, stablecoins become an interesting play.”

As examples, Fierman points to Nobitex, the largest cryptocurrency exchange operating in the sanctioned country of Iran, as well as Garantex, a notorious exchange based in Russia that has been specifically sanctioned for its widespread criminal use. Stablecoin usage on Nobitex outstrips bitcoin by a 9:1 ratio, and on Garantex by a 5:1 ratio, Chainalysis found. That’s a stark difference from the roughly 1:1 ratio between stablecoins and bitcoins on a few nonsanctioned mainstream exchanges that Chainalysis checked for comparison.

Chainalysis’ chart showing the growth in stablecoins as a fraction of the value of total illicit crypto transactions over time.

Chainalysis

Convicted murderer, filesystem creator writes of regrets to Linux list

Pre-release notes —

“The man I am now would do things very differently,” Reiser says in long letter.

A portion of the cover letter attached to Hans Reiser’s response to Fredrick Brennan’s prompt about his filesystem’s obsolescence.

Fredrick Brennan

With ReiserFS recently declared obsolete and slated for removal from the Linux kernel entirely, Fredrick R. Brennan, font designer and (now regretful) founder of 8chan, wrote to the filesystem’s creator, Hans Reiser, asking if he wanted to reply to the discussion on the Linux Kernel Mailing List (LKML).

Reiser, 59, serving a potential life sentence in a California prison for the 2006 murder of his estranged wife, Nina Reiser, wrote back with more than 6,500 words, which Brennan then forwarded to the LKML. It’s not often, on a software mailing list, that you see somebody apologize for killing their wife, explain their coding decisions around balanced trees versus extensible hashing, and suggest that elementary schools offer the same kind of emotional intelligence curriculum that they’ve worked through in prison. It’s quite a document.

What follows is a brief summary of Reiser’s letter, dated November 26, 2023, which we first saw on the Phoronix blog, and which, by all appearances, is authentic (or would otherwise be an epic bit of minutely detailed fraud for no particular reason). It covers, broadly, why Reiser believes his filesystem failed to gain mindshare among Linux users, beyond the most obvious reason. This leads Reiser to detail the technical possibilities, his interpersonal and leadership failings and development, some lingering regrets about dealings with SUSE, Oracle, and the Linux community at large, and other topics, including modern Russian geopolitics.

“LKML and Slashdot.org seem like reasonable places to send it (as of 2006)”

In a cover letter, Reiser tells Brennan that he hopes Brennan can use OCR to import his lengthy letter and asks him to use his best judgment about where to send his reply. He also asks that, if he has time, Brennan send him information on “Reiser5, or any interesting papers on other Filesystems, compression (especially Deep Learning based compression), etc.”

Then Reiser addresses the kernel mailing list directly—very directly:

I was asked by a kind Fredrick Brennan for my comments that I might offer on the discussion of removing ReiserFS V3 from the kernel. I don’t post directly because I am in prison for killing my wife Nina in 2006.

I am very sorry for my crime–a proper apology would be off topic for this forum, but available to any who ask.

A detailed apology for how I interacted with the Linux kernel community, and some history of V3 and V4, are included, along with descriptions of what the technical issues were. I have been attending prison workshops, and working hard on improving my social skills to aid my becoming less of a danger to society. The man I am now would do things very differently from how I did things then.

ReiserFS V3 was “our first filesystem, and in doing it we made mistakes, because we didn’t know what we were doing,” Reiser writes. He worked through “years of dark depression” to get V3 up to the performance speeds of ext2, but regrets how he celebrated that milestone. “The man I was then presented papers with benchmarks showing that ReiserFS was faster than ext2. The man I am now would start his papers … crediting them for being faster than the filesystems of other operating systems, and thanking them for the years we used their filesystem to write ours.” It was “my first serious social mistake in the Linux community, and it was completely unnecessary.”

Reiser asks that a number of people who worked on ReiserFS be included in “one last release” of the README, and to “delete anything in there I might have said about why they were not credited.” He says prison has changed how he handles conflict resolution and his “tendency to see people in extremes.”

Reiser extensively praises Mikhail Gilula, the “brightest mind in his generation of computer scientists,” for his work on ReiserFS from Russia and for his ideas on rewriting everything the field knew about data structures. With their ideas on filesystems and namespaces combined, it would be “the most important refactoring of code ever.” His analogy at the time, Reiser wrote, was Adam Smith’s ideas of how roads, waterways, and free trade affected civilization development; ReiserFS’ ideas could similarly change “the expressive power of the operating system.”

Inventor of NTP protocol that keeps time on billions of devices dies at age 85

A legend in his own time —

Dave Mills created NTP, the protocol that holds the temporal Internet together, in 1985.

A photo of David L. Mills taken by David Woolley on April 27, 2005.

David Woolley / Benj Edwards / Getty Images

On Thursday, Internet pioneer Vint Cerf announced that Dr. David L. Mills, the inventor of Network Time Protocol (NTP), died peacefully at age 85 on January 17, 2024. The announcement came in a post on the Internet Society mailing list after Mills’ daughter, Leigh, informed Cerf of her father’s death.

“He was such an iconic element of the early Internet,” wrote Cerf.

Dr. Mills created the Network Time Protocol (NTP) in 1985 to address a crucial challenge in the online world: the synchronization of time across different computer systems and networks. In a digital environment where computers and servers are located all over the world, each with its own internal clock, there’s a significant need for a standardized and accurate timekeeping system.

NTP provides the solution by allowing clocks of computers over a network to synchronize to a common time source. This synchronization is vital for everything from data integrity to network security. For example, NTP keeps network financial transaction timestamps accurate, and it ensures accurate and synchronized timestamps for logging and monitoring network activities.
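Conceptually, the exchange is simple: a client sends a small UDP packet to port 123 and reads the server’s transmit timestamp out of the reply. Here is a minimal SNTP-style query in Python as a rough sketch, not Mills’ reference implementation; the server name is just an example, and a real client would also compute round-trip delay and clock offset:

```python
import socket
import struct
import time

NTP_EPOCH_OFFSET = 2208988800  # seconds between the NTP epoch (1900) and the Unix epoch (1970)

def query_ntp(server="pool.ntp.org", port=123, timeout=5.0):
    # 48-byte SNTP request: LI=0, version=3, mode=3 (client), rest zeroed.
    packet = b"\x1b" + 47 * b"\0"
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        sock.sendto(packet, (server, port))
        data, _ = sock.recvfrom(48)
    # The server's Transmit Timestamp lives in bytes 40-47:
    # 32 bits of seconds plus 32 bits of binary fraction.
    secs, frac = struct.unpack("!II", data[40:48])
    return secs - NTP_EPOCH_OFFSET + frac / 2**32

print(time.ctime(query_ntp()))  # current time according to the NTP server
```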

In the 1970s, during his tenure at COMSAT and involvement with ARPANET (the precursor to the Internet), Mills first identified the need for synchronized time across computer networks. His solution aligned computers to within tens of milliseconds. NTP now operates on billions of devices worldwide, coordinating time across every continent, and has become a cornerstone of modern digital infrastructure.

As detailed in an excellent 2022 New Yorker profile by Nate Hopper, Mills faced significant challenges in maintaining and evolving the protocol, especially as the Internet grew in scale and complexity. His work highlighted the often under-appreciated role of key open source software developers (a topic explored quite well in a 2020 xkcd comic). Mills was born with glaucoma and gradually lost his sight, eventually becoming completely blind. As his vision failed, he turned over control of the protocol to Harlan Stenn in the 2000s.

A screenshot of Dr. David L. Mills’ website at the University of Delaware captured on January 19, 2024.

Aside from his work on NTP, Mills also invented the first “Fuzzball router” for NSFNET (one of the first modern routers, based on the DEC PDP-11 computer), created one of the first implementations of FTP, inspired the creation of “ping,” and played a key role in Internet architecture as the first chairman of the Internet Architecture Task Force.

Mills was widely recognized for his work, becoming a Fellow of the Association for Computing Machinery in 1999 and the Institute of Electrical and Electronics Engineers in 2002, as well as receiving the IEEE Internet Award in 2013 for contributions to network protocols and timekeeping in the development of the Internet.

Mills received his PhD in Computer and Communication Sciences from the University of Michigan in 1971. At the time of his death, Mills was an emeritus professor at the University of Delaware, having retired in 2008 after teaching there for 22 years.

OpenAI opens the door for military uses but maintains AI weapons ban

Skynet deferred —

Despite new Pentagon collab, OpenAI won’t allow customers to “develop or use weapons” with its tools.

The OpenAI logo over a camouflage background.

On Tuesday, ChatGPT developer OpenAI revealed that it is collaborating with the United States Defense Department on cybersecurity projects and exploring ways to prevent veteran suicide, reports Bloomberg. OpenAI revealed the collaboration during an interview with the news outlet at the World Economic Forum in Davos. The AI company recently modified its policies, allowing for certain military applications of its technology, while maintaining prohibitions against using it to develop weapons.

According to Anna Makanju, OpenAI’s vice president of global affairs, “many people thought that [a previous blanket prohibition on military applications] would prohibit many of these use cases, which people think are very much aligned with what we want to see in the world.” OpenAI removed terms from its service agreement that previously blocked AI use in “military and warfare” situations, but the company still upholds a ban on its technology being used to develop weapons or to cause harm or property damage.

Under the “Universal Policies” section of OpenAI’s Usage Policies document, section 2 says, “Don’t use our service to harm yourself or others.” The prohibition includes using its AI products to “develop or use weapons.” Changes to the terms that removed the “military and warfare” prohibitions appear to have been made by OpenAI on January 10.

The shift in policy appears to align OpenAI more closely with the needs of various governmental departments, including the possibility of preventing veteran suicides. “We’ve been doing work with the Department of Defense on cybersecurity tools for open-source software that secures critical infrastructure,” Makanju said in the interview. “We’ve been exploring whether it can assist with (prevention of) veteran suicide.”

The efforts mark a significant change from OpenAI’s original stance on military partnerships, Bloomberg says. Meanwhile, Microsoft Corp., a large investor in OpenAI, already has an established relationship with the US military through various software contracts.

As 2024 election looms, OpenAI says it is taking steps to prevent AI abuse

Don’t Rock the vote —

ChatGPT maker plans transparency for gen AI content and improved access to voting info.

A pixelated photo of Donald Trump.

On Monday, ChatGPT maker OpenAI detailed its plans to prevent the misuse of its AI technologies during the upcoming elections in 2024, promising transparency around AI-generated content and enhanced access to reliable voting information. The AI developer says it is working on an approach that involves policy enforcement, collaboration with partners, and the development of new tools aimed at classifying AI-generated media.

“As we prepare for elections in 2024 across the world’s largest democracies, our approach is to continue our platform safety work by elevating accurate voting information, enforcing measured policies, and improving transparency,” writes OpenAI in its blog post. “Protecting the integrity of elections requires collaboration from every corner of the democratic process, and we want to make sure our technology is not used in a way that could undermine this process.”

Initiatives proposed by OpenAI include preventing abuse by means such as deepfakes or bots imitating candidates, refining usage policies, and launching a reporting system for the public to flag potential abuses. For example, OpenAI’s image generation tool, DALL-E 3, includes built-in filters that reject requests to create images of real people, including politicians. “For years, we’ve been iterating on tools to improve factual accuracy, reduce bias, and decline certain requests,” the company stated.

OpenAI says it regularly updates its Usage Policies for ChatGPT and its API products to prevent misuse, especially in the context of elections. The organization has implemented restrictions on using its technologies for political campaigning and lobbying until it better understands the potential for personalized persuasion. Also, OpenAI prohibits creating chatbots that impersonate real individuals or institutions and disallows the development of applications that could deter people from “participation in democratic processes.” Users can report GPTs that may violate the rules.

OpenAI claims to be proactively engaged in detailed strategies to safeguard its technologies against misuse. According to the company, this includes red-teaming new systems to anticipate challenges, engaging with users and partners for feedback, and implementing robust safety mitigations. OpenAI asserts that these efforts are integral to its mission of continually refining AI tools for improved accuracy, reduced biases, and responsible handling of sensitive requests.

Regarding transparency, OpenAI says it is advancing its efforts in classifying image provenance. The company plans to embed digital credentials, using cryptographic techniques, into images produced by DALL-E 3 as part of its adoption of standards by the Coalition for Content Provenance and Authenticity. Additionally, OpenAI says it is testing a tool designed to identify DALL-E-generated images.

In an effort to connect users with authoritative information, particularly concerning voting procedures, OpenAI says it has partnered with the National Association of Secretaries of State (NASS) in the United States. ChatGPT will direct users to CanIVote.org for verified US voting information.

“We want to make sure that our AI systems are built, deployed, and used safely,” writes OpenAI. “Like any new technology, these tools come with benefits and challenges. They are also unprecedented, and we will keep evolving our approach as we learn more about how our tools are used.”

Famous xkcd comic comes full circle with AI bird-identifying binoculars

Who watches the bird watchers —

Swarovski AX Visio, billed as first “smart binoculars,” names species and tracks location.

The Swarovski Optik Visio binoculars, with an excerpt of a 2014 xkcd comic strip called “Tasks” in the corner.

xkcd / Swarovski

Last week, Austria-based Swarovski Optik introduced the AX Visio 10×32 binoculars, which the company says can identify over 9,000 species of birds and mammals using image recognition technology. The company is calling the product the world’s first “smart binoculars,” and they come with a hefty price tag—$4,799.

“The AX Visio are the world’s first AI-supported binoculars,” the company says in the product’s press release. “At the touch of a button, they assist with the identification of birds and other creatures, allow discoveries to be shared, and offer a wide range of practical extra functions.”

The binoculars, aimed mostly at bird watchers, gain their ability to identify birds from the Merlin Bird ID project, created by the Cornell Lab of Ornithology. As confirmed by a hands-on demo conducted by The Verge, the user looks at an animal through the binoculars and presses a button. A red progress circle fills in while the binoculars process the image, then the identified animal’s name pops up on the built-in binocular HUD screen within about five seconds.

In 2014, a famous xkcd comic strip titled Tasks depicted someone asking a developer to create an app that, when a user takes a photo, will check whether the user is in a national park (deemed easy due to GPS) and check whether the photo is of a bird (to which the developer says, “I’ll need a research team and five years”). The caption below reads, “In CS, it can be hard to explain the difference between the easy and the virtually impossible.”

The xkcd comic titled “Tasks” from September 24, 2014.

It’s been just over nine years since the comic was published, and while identifying the presence of a bird in a photo was solved some time ago, these binoculars arguably go further by identifying the species of the bird in the photo (they also keep track of location via GPS). While apps to identify bird species already exist, this feature is now packed into a handheld pair of binoculars.

According to Swarovski, the development of the AX Visio took approximately five years, involving around 390 “hardware parts.” The binoculars incorporate a neural processing unit (NPU) for object recognition processing. The company claims that the device will have a long product life cycle, with ongoing updates and improvements. The company also mentions “an open programming interface” in its press release, potentially allowing industrious users (or handy hackers) to expand the unit’s features over time.

The Swarovski Optik Visio binoculars.

Swarovski Optik

The binoculars, which feature industrial design from Marc Newson, include a built-in digital camera, compass, GPS, and discovery-sharing features that can “immediately show your companion where you have seen an animal.” The Visio unit also wirelessly ties into the “SWAROVSKI OPTIK Outdoor App” that can run on a smartphone. The app manages sharing photos and videos captured through the binoculars. (As an aside, we’ve come a long way from computer-connected gadgets that required pesky serial cables in the late 1990s.)

Swarovski says the AX Visio will be available at select retailers and online starting February 1, 2024. While this tech is at a premium price right now, given the speed of tech progress and market competition, we may see similar image-recognizing features built into much cheaper models in the years ahead.

Apple AirDrop leaks user data like a sieve. Chinese authorities say they’re scooping it up.

Aurich Lawson | Getty Images

Chinese authorities recently said they’re using an advanced encryption attack to de-anonymize users of AirDrop in an effort to crack down on citizens who use the Apple file-sharing feature to mass-distribute content that’s outlawed in that country.

According to a 2022 report from The New York Times, activists have used AirDrop to distribute scathing critiques of the Communist Party of China to nearby iPhone users in subway trains and stations and other public venues. A document one protester sent in October of that year called General Secretary Xi Jinping a “despotic traitor.” A few months later, with the release of iOS 16.1.1, AirDrop users in China found that the “everyone” configuration, the setting that makes files available to all other users nearby, automatically reset to the more restrictive contacts-only setting. Apple has yet to acknowledge the move. Critics continue to see it as a concession Apple CEO Tim Cook made to Chinese authorities.

The rainbow connection

On Monday, eight months after the half-measure was put in place, officials with the local government in Beijing said some people have continued mass-sending illegal content. As a result, the officials said, they were now using an advanced technique publicly disclosed in 2021 to fight back.

“Some people reported that their iPhones received a video with inappropriate remarks in the Beijing subway,” the officials wrote, according to translations. “After preliminary investigation, the police found that the suspect used the AirDrop function of the iPhone to anonymously spread the inappropriate information in public places. Due to the anonymity and difficulty of tracking AirDrop, some netizens have begun to imitate this behavior.”

In response, the authorities said they’ve implemented the technical measures to identify the people mass-distributing the content.

  • Screenshot showing log files containing the hashes to be extracted

  • Screenshot showing a dedicated tool converting extracted AirDrop hashes.

The scant details, filtered through Internet-based translations, don’t explicitly describe the technique. All the translations, however, say it involves the use of what are known as rainbow tables to defeat the technical measures AirDrop uses to obfuscate users’ phone numbers and email addresses.

Rainbow tables were first proposed in 1980 as a means of vastly reducing what at the time was the astronomical amount of computing resources required to crack, at scale, the one-way cryptographic hashes used to conceal passwords and other types of sensitive data. Additional refinements made in 2003 made rainbow tables more useful still.

When AirDrop is configured to distribute files only between people who know each other, Apple says, it relies heavily on hashes to conceal the real-world identities of each party until the service determines there’s a match. Specifically, AirDrop broadcasts Bluetooth advertisements that contain a partial cryptographic hash of the sender’s phone number and/or email address.

If any of the truncated hashes match those of a phone number or email address in the address book of the other device, or if the devices are set to send or receive from everyone, the two devices will engage in a mutual authentication handshake. When the hashes match, the devices exchange the full SHA-256 hashes of the owners’ phone numbers and email addresses. This technique falls under an umbrella term known as private set intersection, often abbreviated as PSI.

In 2021, researchers at Germany’s Technical University of Darmstadt reported that they had devised practical ways to crack what Apple calls the identity hashes used to conceal identities while AirDrop determines if a nearby person is in the contacts of another. One of the researchers’ attack methods relies on rainbow tables.
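The reason such attacks are practical is that phone numbers occupy a tiny space by cryptographic standards, so every plausible number can be hashed ahead of time; rainbow tables are a storage-optimized refinement of that brute-force precomputation. A toy illustration in Python, with a made-up truncation length and number range rather than AirDrop’s actual parameters:

```python
import hashlib

def identity_hash(identifier: str) -> bytes:
    # SHA-256 of a contact identifier, as in AirDrop-style hashing.
    return hashlib.sha256(identifier.encode()).digest()

TRUNC = 4  # bytes of the hash exposed in the advertisement (illustrative)

# Precompute truncated-hash -> phone-number lookups for one small slice
# of the numbering plan; a real attack covers all plausible numbers.
table = {
    identity_hash(f"+1415555{n:04d}")[:TRUNC]: f"+1415555{n:04d}"
    for n in range(10_000)
}

# Recover the sender's number from an observed (simulated) truncated hash:
observed = identity_hash("+14155551234")[:TRUNC]
print(table.get(observed))  # -> +14155551234
```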

At Senate AI hearing, news executives fight against “fair use” claims for AI training data

All’s fair in love and AI —

Media orgs want AI firms to license content for training, and Congress is sympathetic.

Danielle Coffey, president and CEO of News Media Alliance; Professor Jeff Jarvis, CUNY Graduate School of Journalism; Curtis LeGeyt, president and CEO of National Association of Broadcasters; and Roger Lynch, CEO of Condé Nast, are sworn in during a Senate Judiciary Subcommittee on Privacy, Technology, and the Law hearing on “Artificial Intelligence and The Future Of Journalism.”

Getty Images

On Wednesday, news industry executives urged Congress for legal clarification that using journalism to train AI assistants like ChatGPT is not fair use, as claimed by companies such as OpenAI. Instead, they would prefer a licensing regime for AI training content that would force Big Tech companies to pay for content in a method similar to rights clearinghouses for music.

The plea for action came during a US Senate Judiciary Committee hearing titled “Oversight of A.I.: The Future of Journalism,” chaired by Sen. Richard Blumenthal of Connecticut, with Sen. Josh Hawley of Missouri also playing a large role in the proceedings. Last year, the pair of senators introduced a bipartisan framework for AI legislation and held a series of hearings on the impact of AI.

Blumenthal described the situation as an “existential crisis” for the news industry and cited social media as a cautionary tale for legislative inaction about AI. “We need to move more quickly than we did on social media and learn from our mistakes in the delay there,” he said.

Companies like OpenAI have admitted that vast amounts of copyrighted material are necessary to train AI large language models, but they claim their use is transformative and covered under fair use precedents of US copyright law. Currently, OpenAI is negotiating licensing deals with some news providers, but the executives at the hearing said those efforts are not enough, pointing to newsrooms closing across the US and media revenues dropping while Big Tech’s profits soar.

“Gen AI cannot replace journalism,” said Condé Nast CEO Roger Lynch in his opening statement. (Condé Nast is the parent company of Ars Technica.) “Journalism is fundamentally a human pursuit, and it plays an essential and irreplaceable role in our society and our democracy.” Lynch said that generative AI has been built with “stolen goods,” referring to the use of AI training content from news outlets without authorization. “Gen AI companies copy and display our content without permission or compensation in order to build massive commercial businesses that directly compete with us.”

Roger Lynch, CEO of Condé Nast, testifies before the Senate Judiciary Subcommittee on Privacy, Technology, and the Law during a hearing on “Artificial Intelligence and The Future Of Journalism.”

Getty Images

In addition to Lynch, the hearing featured three other witnesses: Jeff Jarvis, a veteran journalism professor and pundit; Danielle Coffey, the president and CEO of News Media Alliance; and Curtis LeGeyt, president and CEO of the National Association of Broadcasters.

Coffey also shared concerns about generative AI using news material to create competitive products. “These outputs compete in the same market, with the same audience, and serve the same purpose as the original articles that feed the algorithms in the first place,” she said.

When Sen. Hawley asked Lynch what kind of legislation might be needed to fix the problem, Lynch replied, “I think quite simply, if Congress could clarify that the use of our content and other publisher content for training and output of AI models is not fair use, then the free market will take care of the rest.”

Lynch used the music industry as a model: “You think about millions of artists, millions of ultimate consumers consuming that content, there have been models that have been set up, ASCAP, BMI, SESAC, GMR, these collective rights organizations to simplify the content that’s being used.”

Curtis LeGeyt, CEO of the National Association of Broadcasters, said that TV broadcast journalists are also affected by generative AI. “The use of broadcasters’ news content in AI models without authorization diminishes our audience’s trust and our reinvestment in local news,” he said. “Broadcasters have already seen numerous examples where content created by our journalists has been ingested and regurgitated by AI bots with little or no attribution.”

OpenAI’s GPT Store lets ChatGPT users discover popular user-made chatbot roles

The bot of 1,000 faces —

Like an app store, people can find novel ChatGPT personalities—and some creators will get paid.

Two robots hold a gift box.

On Wednesday, OpenAI announced the launch of its GPT Store—a way for ChatGPT users to share and discover custom chatbot roles called “GPTs”—and ChatGPT Team, a collaborative ChatGPT workspace and subscription plan. OpenAI bills the new store as a way to “help you find useful and popular custom versions of ChatGPT” for members of Plus, Team, or Enterprise subscriptions.

“It’s been two months since we announced GPTs, and users have already created over 3 million custom versions of ChatGPT,” writes OpenAI in its promotional blog. “Many builders have shared their GPTs for others to use. Today, we’re starting to roll out the GPT Store to ChatGPT Plus, Team and Enterprise users so you can find useful and popular GPTs.”

OpenAI launched GPTs on November 6, 2023, as part of its DevDay event. Each GPT includes custom instructions and/or access to custom data or external APIs that can potentially make a custom GPT personality more useful than the vanilla ChatGPT-4 model. Before the GPT Store launch, paying ChatGPT users could create and share custom GPTs with others (by setting the GPT public and sharing a link to the GPT), but there was no central repository for browsing and discovering user-designed GPTs on the OpenAI website.

According to OpenAI, the GPT Store will feature new GPTs every week, and the company shared a list of six notable early GPTs that are available now: AllTrails for finding hiking trails, Consensus for searching 200 million academic papers, Code Tutor for learning coding with Khan Academy, Canva for designing presentations, Books for discovering reading material, and CK-12 Flexi for learning math and science.

A screenshot of the OpenAI GPT Store provided by OpenAI.

OpenAI

ChatGPT members can include their own GPTs in the GPT Store by setting them to be accessible to “Everyone” and then verifying a builder profile in ChatGPT settings. OpenAI plans to review GPTs to ensure they meet its policies and brand guidelines. Users can also report GPTs that violate the rules.

As promised by CEO Sam Altman during DevDay, OpenAI plans to share revenue with GPT creators. Unlike a smartphone app store, it appears that users will not sell their GPTs in the GPT Store, but instead, OpenAI will pay developers “based on user engagement with their GPTs.” The revenue program will launch in the first quarter of 2024, and OpenAI will provide more details on the criteria for receiving payments later.

“ChatGPT Team” is for teams who use ChatGPT

Also on Wednesday, OpenAI announced the cleverly named ChatGPT Team, a new group-based ChatGPT membership program akin to ChatGPT Enterprise, which the company launched last August. Unlike Enterprise, which is for large companies and does not have publicly listed prices, ChatGPT Team is a plan for “teams of all sizes” and costs US $25 a month per user (when billed annually) or US $30 a month per user (when billed monthly). By comparison, ChatGPT Plus costs $20 per month.

So what does ChatGPT Team offer above the usual ChatGPT Plus subscription? According to OpenAI, it “provides a secure, collaborative workspace to get the most out of ChatGPT at work.” Unlike Plus, OpenAI says it will not train AI models based on ChatGPT Team business data or conversations. It features an admin console for team management and the ability to share custom GPTs with your team. Like Plus, it also includes access to GPT-4 with the 32K context window, DALL-E 3, GPT-4 with Vision, Browsing, and Advanced Data Analysis—all with higher message caps.

Why would you want to use ChatGPT at work? OpenAI says it can help you generate better code, craft emails, analyze data, and more. Your mileage may vary, of course. As usual, our standard Ars warning about AI language models applies: “Bring your own data” for analysis, don’t rely on ChatGPT as a factual resource, and don’t rely on its outputs in ways you cannot personally confirm. OpenAI has provided more details about ChatGPT Team on its website.

Linux devices are under attack by a never-before-seen worm

NEW WORM ON THE BLOCK —

Based on Mirai malware, self-replicating NoaBot installs cryptomining app on infected devices.

For the past year, previously unknown self-replicating malware has been compromising Linux devices around the world and installing cryptomining malware that takes unusual steps to conceal its inner workings, researchers said.

The worm is a customized version of Mirai, the botnet malware that infects Linux-based servers, routers, web cameras, and other so-called Internet of Things devices. Mirai came to light in 2016 when it was used to deliver record-setting distributed denial-of-service attacks that paralyzed key parts of the Internet that year. The creators soon released the underlying source code, a move that allowed a wide array of crime groups from around the world to incorporate Mirai into their own attack campaigns. Once it takes hold of a Linux device, Mirai uses it as a platform to infect other vulnerable devices, a design that makes it a worm, meaning it self-replicates.

Dime-a-dozen malware with a twist

Traditionally, Mirai and its many variants have spread when one infected device scans the Internet looking for other devices that accept Telnet connections. The infected devices then attempt to crack the telnet password by guessing default and commonly used credential pairs. When successful, the newly infected devices target additional devices using the same technique. Mirai has primarily been used to wage DDoSes. Given the large amounts of bandwidth available to many such devices, the floods of junk traffic are often huge, giving the botnet as a whole tremendous power.

On Wednesday, researchers from network security and reliability firm Akamai revealed that a previously unknown Mirai-based botnet they dubbed NoaBot has been targeting Linux devices since at least last January. Instead of targeting weak telnet passwords, NoaBot targets weak passwords protecting SSH connections. Another twist: Rather than performing DDoSes, the new botnet installs cryptocurrency-mining software, which allows the attackers to generate digital coins using victims’ computing resources, electricity, and bandwidth. The cryptominer is a modified version of XMRig, another piece of open source malware. More recently, NoaBot has been used to also deliver P2PInfect, a separate worm researchers from Palo Alto Networks revealed last July.

Akamai has been monitoring NoaBot for the past 12 months in a honeypot that mimics real Linux devices to track various attacks circulating in the wild. To date, attacks have originated from 849 distinct IP addresses, almost all of which are likely hosting a device that’s already infected. The following figure tracks the number of attacks delivered to the honeypot over the past year.

NoaBot malware activity over time.

“On the surface, NoaBot isn’t a very sophisticated campaign—it’s ‘just’ a Mirai variant and an XMRig cryptominer, and they’re a dime a dozen nowadays,” Akamai Senior Security Researcher Stiv Kupchik wrote in a report Wednesday. “However, the obfuscations added to the malware and the additions to the original source code paint a vastly different picture of the threat actors’ capabilities.”

The most advanced capability is how NoaBot installs the XMRig variant. Typically, when cryptominers are installed, the wallets that funds are distributed to are specified in configuration settings delivered in a command line issued to the infected device. This approach has long posed a risk to threat actors because it allows researchers to track where the wallets are hosted and how much money has flowed into them.

NoaBot uses a novel technique to prevent such detection. Instead of delivering the configuration settings through a command line, the botnet stores the settings in encrypted or obfuscated form and decrypts them only after XMRig is loaded into memory. The botnet then replaces the internal variable that normally would hold the command line configuration settings and passes control to the XMRig source code.
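The pattern is straightforward to picture: the settings never touch the visible command line but are assembled in memory and handed to code that expects argv-style arguments. A rough Python sketch of that idea follows; the flag values and pool address are placeholders, and the real malware does this in compiled code with encrypted strings:

```python
import sys

def xmrig_style_main(argv):
    # Stand-in for the miner's real entry point, which parses its argv.
    print("miner parsing:", argv)

# The process is launched with, at most, one innocuous argument, so
# tools that log command lines see nothing interesting:
print("visible command line:", sys.argv)

# The real configuration is assembled in memory (decrypted at runtime
# in NoaBot's case; plain strings here, with a placeholder pool address).
hidden_argv = [
    "miner",
    "-o", "203.0.113.5:3333",   # placeholder pool address
    "--rig-id", "abc",
    "--threads", "4",
    "--pass", "espana*tea",
]

# Hand the synthetic argv straight to the parsing code; no sensitive
# flags ever appear where defenders can observe them.
xmrig_style_main(hidden_argv)
```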

Kupchik offered a more technical and detailed description:

In the XMRig open source code, miners can accept configurations in one of two ways — either via the command line or via environment variables. In our case, the threat actors chose not to modify the XMRig original code and instead added parts before the main function. To circumvent the need for command line arguments (which can be an indicator of compromise, or IOC, and alert defenders), the threat actors had the miner replace its own command line (in technical terms, replacing argv) with more “meaningful” arguments before passing control to the XMRig code. The botnet runs the miner with (at most) one argument that tells it to print its logs. Before replacing its command line, however, the miner has to build its configuration. First, it copies basic arguments that are stored in plaintext—the rig-id flag, which identifies the miner with three random letters, the threads flag, and a placeholder for the pool’s IP address (Figure 7).

Curiously, because the configurations are loaded via the xmm registers, IDA actually misses the first two loaded arguments, which are the binary name and the pool IP placeholder.

NoaBot code that copies miner configurations.

Akamai

Next, the miner decrypts the pool’s domain name. The domain name is stored, encrypted, in a few data blocks that are decrypted via XOR operations. Although XMRig can work with a domain name, the attackers decided to go the extra step and implemented their own DNS resolution function. They communicate directly with Google’s DNS server (8.8.8.8) and parse its response to resolve the domain name to an IP address.

The last part of the configuration is also encrypted in a similar way, and it is the passkey for the miner to connect to the pool. All in all, the total configuration of the miner looks something like this:

-o --rig-id --threads --pass espana*tea

Notice anything missing? Yep, no wallet address.

We believe that the threat actors chose to run their own private pool instead of a public one, thereby eliminating the need to specify a wallet (their pool, their rules!). However, in our samples, we observed that the miner’s domains were not resolving with Google’s DNS, so we can’t really prove our theory or gather more data from the pool, since the domains we have are no longer resolvable. We haven’t seen any recent incident that drops the miner, so it could also be that the threat actors decided to depart for greener pastures.
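The custom DNS resolution the report describes amounts to building a query packet by hand, sending it to 8.8.8.8 over UDP, and parsing the answer yourself instead of calling the system resolver. A simplified Python sketch, with a fixed query ID, no error handling, and the assumption that the first answer record is the A record:

```python
import socket
import struct

def resolve_a(name: str, server: str = "8.8.8.8") -> str:
    # DNS header: id, flags (recursion desired), QDCOUNT=1, no other records.
    header = struct.pack("!HHHHHH", 0x1234, 0x0100, 1, 0, 0, 0)
    # Question: length-prefixed labels, then QTYPE=A (1), QCLASS=IN (1).
    qname = b"".join(bytes([len(p)]) + p.encode() for p in name.split("."))
    question = qname + b"\x00" + struct.pack("!HH", 1, 1)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(5)
        s.sendto(header + question, (server, 53))
        data, _ = s.recvfrom(512)
    # Skip the header and question, then the answer's compressed name
    # (2 bytes), TYPE, CLASS, TTL, and RDLENGTH to reach the IPv4 bytes.
    i = 12 + len(question) + 2 + 2 + 2 + 4
    (rdlen,) = struct.unpack("!H", data[i:i + 2])
    return ".".join(str(b) for b in data[i + 2:i + 2 + rdlen])

print(resolve_a("example.com"))
```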

Hackers can infect network-connected wrenches to install ransomware

TORQUE THIS —

Researchers identify 23 vulnerabilities, some of which can be exploited with no authentication.

The Rexroth Nutrunner, a line of torque wrenches sold by Bosch Rexroth.

Bosch Rexroth

Researchers have unearthed nearly two dozen vulnerabilities that could allow hackers to sabotage or disable a popular line of network-connected wrenches that factories around the world use to assemble sensitive instruments and devices.

The vulnerabilities, reported Tuesday by researchers from security firm Nozomi, reside in the Bosch Rexroth Handheld Nutrunner NXA015S-36V-B. The cordless device, which wirelessly connects to the local network of organizations that use it, allows engineers to tighten bolts and other mechanical fastenings to precise torque levels that are critical for safety and reliability. When fastenings are too loose, they risk causing the device to overheat and start fires. When they’re too tight, threads can fail, resulting in fastenings that are too loose. The Nutrunner provides a torque-level indicator display that’s backed by a certification from the Association of German Engineers and adopted by the automotive industry in 1999. NEXO-OS, the firmware running on the devices, can be controlled using a browser-based management interface.

NEXO-OS’s management web application.

Nozomi

Nozomi researchers said the device is riddled with 23 vulnerabilities that, in certain cases, can be exploited to install malware. The malware could then be used to disable entire fleets of the devices or to cause them to tighten fastenings too loosely or too tightly while the display continues to indicate that the critical settings are still properly in place.

Bosch officials emailed a statement that included the usual lines about security being a top priority. It went on to say that Nozomi reached out a few weeks ago to reveal the vulnerabilities. “Bosch Rexroth immediately took up this advice and is working on a patch to solve the problem,” the statement said. “This patch will be released at the end of January 2024.”

In a post, Nozomi researchers wrote:

The vulnerabilities found on the Bosch Rexroth NXA015S-36V-B allow an unauthenticated attacker who is able to send network packets to the target device to obtain remote execution of arbitrary code (RCE) with root privileges, completely compromising it. Once this unauthorized access is gained, numerous attack scenarios become possible. Within our lab environment, we successfully reconstructed the following two scenarios:

  • Ransomware: we were able to make the device completely inoperable by preventing a local operator from controlling the drill through the onboard display and disabling the trigger button. Furthermore, we could alter the graphical user interface (GUI) to display an arbitrary message on the screen, requesting the payment of a ransom. Given the ease with which this attack can be automated across numerous devices, an attacker could swiftly render all tools on a production line inaccessible, potentially causing significant disruptions to the final asset owner.
A PoC ransomware running on the test nutrunner.

Nozomi

  • Manipulation of Control and View: we managed to stealthily alter the configuration of tightening programs, such as by increasing or decreasing the target torque value. At the same time, by patching in-memory the GUI on the onboard display, we could show a normal value to the operator, who would remain completely unaware of the change.
A manipulation of view attack. The actual torque applied in this tightening was 0.15 Nm.

How much detail is too much? Midjourney v6 attempts to find out

An AI-generated image of a “Beautiful queen of the universe looking at the camera in sci-fi armor, snow and particles flowing, fire in the background” created using alpha Midjourney v6.

Midjourney

In December, just before Christmas, Midjourney launched an alpha version of its latest image synthesis model, Midjourney v6. Over winter break, Midjourney fans put the new AI model through its paces, with the results shared on social media. So far, fans have noted much more detail than v5.2 (the current default) and a different approach to prompting. Version 6 can also handle generating text in a rudimentary way, but it’s far from perfect.

“It’s definitely a crazy update, both in good and less good ways,” artist Julie Wieland, who frequently shares her Midjourney creations online, told Ars. “The details and scenery are INSANE, the downside (for now) are that the generations are very high contrast and overly saturated (imo). Plus you need to kind of re-adapt and rethink your prompts, working with new structures and now less is kind of more in terms of prompting.”

At the same time, critics of the service still bristle about Midjourney training its models using human-made artwork scraped from the web and obtained without permission, a controversial practice common among AI model trainers that we have covered in detail in the past. We’ve also covered the challenges artists might face in the future from these technologies elsewhere.

Too much detail?

With AI-generated detail ramping up dramatically between major Midjourney versions, one could wonder if there is ever such a thing as “too much detail” in an AI-generated image. Midjourney v6 seems to be testing that very question, creating many images that sometimes seem more detailed than reality in an unrealistic way, although that can be modified with careful prompting.

  • An AI-generated image of a nurse in the 1960s created using alpha Midjourney v6.

    Midjourney

  • An AI-generated image of an astronaut created using alpha Midjourney v6.

    Midjourney

  • An AI-generated image of a “juicy flaming cheeseburger” created using alpha Midjourney v6.

    Midjourney

  • An AI-generated image of “a handsome Asian man” created using alpha Midjourney v6.

    Midjourney

  • An AI-generated image of an “Apple II” sitting on a desk in the 1980s created using alpha Midjourney v6.

    Midjourney

  • An AI-generated image of a “photo of a cat in a car holding a can of beer” created using alpha Midjourney v6.

    Midjourney

  • An AI-generated image of a forest path created using alpha Midjourney v6.

    Midjourney

  • An AI-generated image of a woman among flowers created using alpha Midjourney v6.

    Midjourney

  • An AI-generated image of “a plate of delicious pickles” created using alpha Midjourney v6.

    Midjourney

  • An AI-generated image of a barbarian beside a TV set that says “Ars Technica” on it created using alpha Midjourney v6.

    Midjourney

  • An AI-generated image of “Abraham Lincoln holding a sign that says Ars Technica” created using alpha Midjourney v6.

    Midjourney

  • An AI-generated image of Mickey Mouse holding a machine gun created using alpha Midjourney v6.

    Midjourney

In our testing of version 6 (which can currently be invoked with the “--v 6.0” argument at the end of a prompt), we noticed times when the new model appeared to produce worse results than v5.2, but Midjourney veterans like Wieland tell Ars that those differences are largely due to the different way that v6.0 interprets prompts. That is something Midjourney is continuously updating over time. “Old prompts sometimes work a bit better than the day they released it,” Wieland told us.
