AI

As 2024 election looms, OpenAI says it is taking steps to prevent AI abuse

Don’t Rock the Vote —

ChatGPT maker plans transparency for gen AI content and improved access to voting info.

A pixelated photo of Donald Trump.

On Monday, ChatGPT maker OpenAI detailed its plans to prevent the misuse of its AI technologies during the upcoming elections in 2024, promising transparency for AI-generated content and improved access to reliable voting information. The AI developer says it is working on an approach that involves policy enforcement, collaboration with partners, and the development of new tools aimed at classifying AI-generated media.

“As we prepare for elections in 2024 across the world’s largest democracies, our approach is to continue our platform safety work by elevating accurate voting information, enforcing measured policies, and improving transparency,” writes OpenAI in its blog post. “Protecting the integrity of elections requires collaboration from every corner of the democratic process, and we want to make sure our technology is not used in a way that could undermine this process.”

Initiatives proposed by OpenAI include preventing abuse by means such as deepfakes or bots imitating candidates, refining usage policies, and launching a reporting system for the public to flag potential abuses. For example, OpenAI’s image generation tool, DALL-E 3, includes built-in filters that reject requests to create images of real people, including politicians. “For years, we’ve been iterating on tools to improve factual accuracy, reduce bias, and decline certain requests,” the company stated.

OpenAI says it regularly updates its Usage Policies for ChatGPT and its API products to prevent misuse, especially in the context of elections. The organization has implemented restrictions on using its technologies for political campaigning and lobbying until it better understands the potential for personalized persuasion. Also, OpenAI prohibits creating chatbots that impersonate real individuals or institutions and disallows the development of applications that could deter people from “participation in democratic processes.” Users can report GPTs that may violate the rules.

OpenAI claims to be proactively engaged in detailed strategies to safeguard its technologies against misuse. According to its statements, this includes red-teaming new systems to anticipate challenges, engaging with users and partners for feedback, and implementing robust safety mitigations. OpenAI asserts that these efforts are integral to its mission of continually refining AI tools for improved accuracy, reduced biases, and responsible handling of sensitive requests.

Regarding transparency, OpenAI says it is advancing its efforts in classifying image provenance. The company plans to embed digital credentials, using cryptographic techniques, into images produced by DALL-E 3 as part of its adoption of standards by the Coalition for Content Provenance and Authenticity. Additionally, OpenAI says it is testing a tool designed to identify DALL-E-generated images.
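
The C2PA approach amounts to attaching a cryptographically signed provenance record to an image so that anyone can later check that the record, and the image it describes, has not been tampered with. As a rough illustration only, and not OpenAI’s actual implementation or the real C2PA manifest format, here is a minimal Python sketch that signs an image hash with an Ed25519 key using the cryptography library:

    # Illustrative provenance sketch; the manifest format below is made up,
    # not the actual C2PA standard that OpenAI says it is adopting.
    import hashlib
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    signing_key = Ed25519PrivateKey.generate()   # held by the image generator
    public_key = signing_key.public_key()        # published for verifiers

    image_bytes = b"...image data..."            # placeholder for real pixels
    manifest = b"generator=DALL-E 3;" + hashlib.sha256(image_bytes).digest()
    signature = signing_key.sign(manifest)       # credential shipped with the image

    # A verifier later recomputes the manifest and checks the signature.
    try:
        public_key.verify(signature, manifest)
        print("provenance credential verified")
    except InvalidSignature:
        print("image or credential has been altered")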

In an effort to connect users with authoritative information, particularly concerning voting procedures, OpenAI says it has partnered with the National Association of Secretaries of State (NASS) in the United States. ChatGPT will direct users to CanIVote.org for verified US voting information.

“We want to make sure that our AI systems are built, deployed, and used safely,” writes OpenAI. “Like any new technology, these tools come with benefits and challenges. They are also unprecedented, and we will keep evolving our approach as we learn more about how our tools are used.”

What do Threads, Mastodon, and hospital records have in common?

A medical technician looks at a scan on a computer monitor.

It’s taken a while, but social media platforms now know that people prefer their information kept away from corporate eyes and malevolent algorithms. That’s why the newest generation of social media sites like Threads, Mastodon, and Bluesky boast of being part of the “fediverse.” Here, user data is hosted on independent servers rather than one corporate silo. Platforms then use common standards to share information when needed. If one server starts to host too many harmful accounts, other servers can choose to block it.

They’re not the only ones embracing this approach. Medical researchers think a similar strategy could help them train machine learning to spot disease trends in patients. Putting their AI algorithms on special servers within hospitals for “federated learning” could keep privacy standards high while letting researchers unravel new ways to detect and treat diseases.

“The use of AI is just exploding in all facets of life,” said Ronald M. Summers of the National Institutes of Health Clinical Center in Maryland, who uses the method in his radiology research. “There’s a lot of people interested in using federated learning for a variety of different data analysis applications.”

How does it work?

Until now, medical researchers have refined their AI algorithms using a few carefully curated databases, usually containing anonymized medical information from patients taking part in clinical studies.

However, improving these models further means they need a larger dataset with real-world patient information. Researchers could pool data from several hospitals into one database, but that means asking them to hand over sensitive and highly regulated information. Sending patient information outside a hospital’s firewall is a big risk, so getting permission can be a long and legally complicated process. National privacy laws and the EU’s GDPR law set strict rules on sharing a patient’s personal information.

So instead, medical researchers are sending their AI model to hospitals so it can analyze a dataset while staying within the hospital’s firewall.

Typically, doctors first identify eligible patients for a study, select any clinical data they need for training, confirm its accuracy, and then organize it on a local database. The database is then placed onto a server at the hospital that is linked to the federated learning AI software. Once the software receives instructions from the researchers, it can work its AI magic, training itself with the hospital’s local data to find specific disease trends.

Every so often, this trained model is then sent back to a central server, where it joins models from other hospitals. An aggregation method processes these trained models to update the original model. For example, Google’s popular FedAvg aggregation algorithm averages each element of the trained models’ parameters to produce the model update, with each hospital’s contribution weighted in proportion to the size of its training dataset.

In other words, how these models change gets aggregated in the central server to create an updated “consensus model.” This consensus model is then sent back to each hospital’s local database to be trained once again. The cycle continues until researchers judge the final consensus model to be accurate enough. (There’s a review of this process available.)
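
To make that weighting step concrete, here is a minimal Python sketch of FedAvg-style aggregation. The hospital names, parameter values, and dataset sizes are hypothetical, and real federated learning frameworks add secure communication and many other safeguards on top of this basic idea:

    # Minimal FedAvg-style aggregation: average each parameter across local
    # models, weighting each model by the size of its training dataset.
    import numpy as np

    def fed_avg(local_models, dataset_sizes):
        total = sum(dataset_sizes)
        weights = [n / total for n in dataset_sizes]
        consensus = {}
        for name in local_models[0]:
            consensus[name] = sum(
                w * model[name] for w, model in zip(weights, local_models)
            )
        return consensus

    # Hypothetical example: two hospitals return updated weights for one layer.
    hospital_a = {"layer1": np.array([0.2, 0.4])}
    hospital_b = {"layer1": np.array([0.6, 0.8])}
    consensus_model = fed_avg([hospital_a, hospital_b], dataset_sizes=[1000, 3000])
    print(consensus_model["layer1"])  # [0.5 0.7], pulled toward hospital_b

The consensus model produced this way is what gets sent back to each hospital for another round of local training, as described above.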

This keeps both sides happy. For hospitals, it helps preserve privacy since information sent back to the central server is anonymous; personal information never crosses the hospital’s firewall. It also means machine learning can reach its full potential by training on real-world data, so researchers get less biased results that are more likely to be sensitive to niche diseases.

Over the past few years, there has been a boom in research using this method. For example, in 2021, Summers and others used federated learning to see whether they could predict diabetes from CT scans of abdomens.

“We found that there were signatures of diabetes on the CT scanner [for] the pancreas that preceded the diagnosis of diabetes by as much as seven years,” said Summers. “That got us very excited that we might be able to help patients that are at risk.”

Famous xkcd comic comes full circle with AI bird-identifying binoculars

Who watches the bird watchers —

Swarovski AX Visio, billed as first “smart binoculars,” names species and tracks location.

The Swarovski Optik Visio binoculars, with an excerpt of a 2014 xkcd comic strip called “Tasks” in the corner.

xkcd / Swarovski

Last week, Austria-based Swarovski Optik introduced the AX Visio 10×32 binoculars, which the company says can identify over 9,000 species of birds and mammals using image recognition technology. The company is calling the product the world’s first “smart binoculars,” and they come with a hefty price tag—$4,799.

“The AX Visio are the world’s first AI-supported binoculars,” the company says in the product’s press release. “At the touch of a button, they assist with the identification of birds and other creatures, allow discoveries to be shared, and offer a wide range of practical extra functions.”

The binoculars, aimed mostly at bird watchers, gain their ability to identify birds from the Merlin Bird ID project, created by Cornell Lab of Ornithology. As confirmed by a hands-on demo conducted by The Verge, the user looks at an animal through the binoculars and presses a button. A red progress circle fills in while the binoculars process the image, then the identified animal name pops up on the built-in binocular HUD screen within about five seconds.

In 2014, a famous xkcd comic strip titled Tasks depicted someone asking a developer to create an app that, when a user takes a photo, will check whether the user is in a national park (deemed easy due to GPS) and check whether the photo is of a bird (to which the developer says, “I’ll need a research team and five years”). The caption below reads, “In CS, it can be hard to explain the difference between the easy and the virtually impossible.”

The xkcd comic titled “Tasks” from September 24, 2014.

It’s been just over nine years since the comic was published, and while identifying the presence of a bird in a photo was solved some time ago, these binoculars arguably go further by identifying the species of the bird in view (they also keep track of location via GPS). While apps that identify bird species already exist, this feature is now packed into a handheld pair of binoculars.

According to Swarovski, the development of the AX Visio took approximately five years, involving around 390 “hardware parts.” The binoculars incorporate a neural processing unit (NPU) for object recognition processing. The company claims that the device will have a long product life cycle, with ongoing updates and improvements. The company also mentions “an open programming interface” in its press release, potentially allowing industrious users (or handy hackers) to expand the unit’s features over time.

  • The Swarovski Optik Visio binoculars.

    Swarovski Optik

The binoculars, which feature industrial design from Marc Newson, include a built-in digital camera, compass, GPS, and discovery-sharing features that can “immediately show your companion where you have seen an animal.” The Visio unit also wirelessly ties into the “SWAROVSKI OPTIK Outdoor App” that can run on a smartphone. The app manages sharing photos and videos captured through the binoculars. (As an aside, we’ve come a long way from computer-connected gadgets that required pesky serial cables in the late 1990s.)

Swarovski says the AX Visio will be available at select retailers and online starting February 1, 2024. While this tech is at a premium price right now, given the speed of tech progress and market competition, we may see similar image-recognizing features built into much cheaper models in the years ahead.

Lazy use of AI leads to Amazon products called “I cannot fulfill that request”

FILE NOT FOUND —

The telltale error messages are a sign of AI-generated pablum all over the Internet.

I know naming new products can be hard, but these Amazon sellers made some particularly odd naming choices.

Amazon

Amazon users are at this point used to search results filled with products that are fraudulent, scams, or quite literally garbage. These days, though, they also may have to pick through obviously shady products, with names like “I’m sorry but I cannot fulfill this request it goes against OpenAI use policy.”

As of press time, some version of that telltale OpenAI error message appears in Amazon products ranging from lawn chairs to office furniture to Chinese religious tracts. A few similarly named products that were available as of this morning have been taken down as word of the listings spreads across social media (one such example is archived here).

ProTip: Don’t ask OpenAI to integrate a trademarked brand name when generating a name for your weird length of rubber tubing.

Other Amazon product names don’t mention OpenAI specifically but feature apparent AI-related error messages, such as “Sorry but I can’t generate a response to that request” or “Sorry but I can’t provide the information you’re looking for” (available in a variety of colors). Sometimes, the product names even highlight the specific reason why the apparent AI-generation request failed, noting that OpenAI can’t provide content that “requires using trademarked brand names” or “promotes a specific religious institution” or, in one case, “encourage unethical behavior.”

The repeated invocation of a “commitment to providing reliable and trustworthy product descriptions” cited in this description is particularly ironic.

The descriptions for these oddly named products are also riddled with obvious AI error messages like, “Apologies, but I am unable to provide the information you’re seeking.” One product description for a set of tables and chairs (which has since been taken down) hilariously noted: “Our [product] can be used for a variety of tasks, such [task 1], [task 2], and [task 3]].” Another set of product descriptions, seemingly for tattoo ink guns, repeatedly apologizes that it can’t provide more information because: “We prioritize accuracy and reliability by only offering verified product details to our customers.”

Spam spam spam spam

Using large language models to help generate product names or descriptions isn’t against Amazon policy. On the contrary, in September Amazon launched its own generative AI tool to help sellers “create more thorough and captivating product descriptions, titles, and listing details.” And we could only find a small handful of Amazon products slipping through with the telltale error messages in their names or descriptions as of press time.

Still, these error-message-filled listings highlight the lack of care or even basic editing many Amazon scammers are exercising when putting their spammy product listings on the Amazon marketplace. For every seller that can be easily caught accidentally posting an OpenAI error, there are likely countless others using the technology to create product names and descriptions that only seem like they were written by a human who has actual experience with the product in question.

A set of clearly real people conversing on Twitter / X.

Amazon isn’t the only online platform where these AI bots are outing themselves, either. A quick search for “goes against OpenAI policy” or “as an AI language model” can find a whole lot of artificial posts on Twitter / X or Threads or LinkedIn, for example. Security engineer Dan Feldman noted a similar problem on Amazon back in April, though searching with the phrase “as an AI language model” doesn’t seem to generate any obviously AI-generated search results these days.

As fun as it is to call out these obvious mishaps for AI-generated content mills, a flood of harder-to-detect AI content is threatening to overwhelm everyone from art communities to sci-fi magazines to Amazon’s own ebook marketplace. Pretty much any platform that accepts user submissions that involve text or visual art now has to worry about being flooded with wave after wave of AI-generated work trying to crowd out the human community they were created for. It’s a problem that’s likely to get worse before it gets better.

Listing image by Getty Images | Leon Neal

At Senate AI hearing, news executives fight against “fair use” claims for AI training data

All’s fair in love and AI —

Media orgs want AI firms to license content for training, and Congress is sympathetic.

Danielle Coffey, president and CEO of News Media Alliance; Professor Jeff Jarvis, CUNY Graduate School of Journalism; Curtis LeGeyt, president and CEO of National Association of Broadcasters; and Roger Lynch, CEO of Condé Nast, are sworn in during a Senate Judiciary Subcommittee on Privacy, Technology, and the Law hearing on “Artificial Intelligence and The Future Of Journalism.”

Getty Images

On Wednesday, news industry executives urged Congress to clarify that using journalism to train AI assistants like ChatGPT is not fair use, as claimed by companies such as OpenAI. Instead, they would prefer a licensing regime for AI training content that would force Big Tech companies to pay for content, much as rights clearinghouses do for music.

The plea for action came during a US Senate Judiciary Committee hearing titled “Oversight of A.I.: The Future of Journalism,” chaired by Sen. Richard Blumenthal of Connecticut, with Sen. Josh Hawley of Missouri also playing a large role in the proceedings. Last year, the pair of senators introduced a bipartisan framework for AI legislation and held a series of hearings on the impact of AI.

Blumenthal described the situation as an “existential crisis” for the news industry and cited social media as a cautionary tale for legislative inaction about AI. “We need to move more quickly than we did on social media and learn from our mistakes in the delay there,” he said.

Companies like OpenAI have admitted that vast amounts of copyrighted material are necessary to train AI large language models, but they claim their use is transformative and covered under the fair use precedents of US copyright law. OpenAI is currently negotiating content licenses with some news providers and striking deals, but the executives at the hearing said those efforts are not enough, pointing to newsrooms closing across the US and media revenues dropping while Big Tech’s profits soar.

“Gen AI cannot replace journalism,” said Condé Nast CEO Roger Lynch in his opening statement. (Condé Nast is the parent company of Ars Technica.) “Journalism is fundamentally a human pursuit, and it plays an essential and irreplaceable role in our society and our democracy.” Lynch said that generative AI has been built with “stolen goods,” referring to the use of AI training content from news outlets without authorization. “Gen AI companies copy and display our content without permission or compensation in order to build massive commercial businesses that directly compete with us.”

Roger Lynch, CEO of Condé Nast, testifies before the Senate Judiciary Subcommittee on Privacy, Technology, and the Law during a hearing on “Artificial Intelligence and The Future Of Journalism.”

Getty Images

In addition to Lynch, the hearing featured three other witnesses: Jeff Jarvis, a veteran journalism professor and pundit; Danielle Coffey, the president and CEO of News Media Alliance; and Curtis LeGeyt, president and CEO of the National Association of Broadcasters.

Coffey also shared concerns about generative AI using news material to create competitive products. “These outputs compete in the same market, with the same audience, and serve the same purpose as the original articles that feed the algorithms in the first place,” she said.

When Sen. Hawley asked Lynch what kind of legislation might be needed to fix the problem, Lynch replied, “I think quite simply, if Congress could clarify that the use of our content and other publisher content for training and output of AI models is not fair use, then the free market will take care of the rest.”

Lynch used the music industry as a model: “You think about millions of artists, millions of ultimate consumers consuming that content, there have been models that have been set up, ASCAP, BMI, SESAC, GMR, these collective rights organizations to simplify the content that’s being used.”

Curtis LeGeyt, CEO of the National Association of Broadcasters, said that TV broadcast journalists are also affected by generative AI. “The use of broadcasters’ news content in AI models without authorization diminishes our audience’s trust and our reinvestment in local news,” he said. “Broadcasters have already seen numerous examples where content created by our journalists has been ingested and regurgitated by AI bots with little or no attribution.”

OpenAI’s GPT Store lets ChatGPT users discover popular user-made chatbot roles

The bot of 1,000 faces —

Like an app store, people can find novel ChatGPT personalities—and some creators will get paid.

Two robots hold a gift box.

On Wednesday, OpenAI announced the launch of its GPT Store—a way for ChatGPT users to share and discover custom chatbot roles called “GPTs”—and ChatGPT Team, a collaborative ChatGPT workspace and subscription plan. OpenAI bills the new store as a way to “help you find useful and popular custom versions of ChatGPT” for members of Plus, Team, or Enterprise subscriptions.

“It’s been two months since we announced GPTs, and users have already created over 3 million custom versions of ChatGPT,” writes OpenAI in its promotional blog. “Many builders have shared their GPTs for others to use. Today, we’re starting to roll out the GPT Store to ChatGPT Plus, Team and Enterprise users so you can find useful and popular GPTs.”

OpenAI launched GPTs on November 6, 2023, as part of its DevDay event. Each GPT includes custom instructions and/or access to custom data or external APIs that can potentially make a custom GPT personality more useful than the vanilla ChatGPT-4 model. Before the GPT Store launch, paying ChatGPT users could create and share custom GPTs with others (by setting the GPT public and sharing a link to the GPT), but there was no central repository for browsing and discovering user-designed GPTs on the OpenAI website.

According to OpenAI, the GPT Store will feature new GPTs every week, and the company shared a group of six notable early GPTs that are available now: AllTrails for finding hiking trails, Consensus for searching 200 million academic papers, Code Tutor for learning coding with Khan Academy, Canva for designing presentations, Books for discovering reading material, and CK-12 Flexi for learning math and science.

A screenshot of the OpenAI GPT Store provided by OpenAI.

OpenAI

ChatGPT members can include their own GPTs in the GPT Store by setting them to be accessible to “Everyone” and then verifying a builder profile in ChatGPT settings. OpenAI plans to review GPTs to ensure they meet its policies and brand guidelines. GPTs that violate the rules can also be reported by users.

As promised by CEO Sam Altman during DevDay, OpenAI plans to share revenue with GPT creators. Unlike a smartphone app store, it appears that users will not sell their GPTs in the GPT Store, but instead, OpenAI will pay developers “based on user engagement with their GPTs.” The revenue program will launch in the first quarter of 2024, and OpenAI will provide more details on the criteria for receiving payments later.

“ChatGPT Team” is for teams who use ChatGPT

Also on Wednesday, OpenAI announced the cleverly named ChatGPT Team, a new group-based ChatGPT membership program akin to ChatGPT Enterprise, which the company launched last August. Unlike Enterprise, which is for large companies and does not have publicly listed prices, ChatGPT Team is a plan for “teams of all sizes” and costs US $25 a month per user (when billed annually) or US $30 a month per user (when billed monthly). By comparison, ChatGPT Plus costs $20 per month.

So what does ChatGPT Team offer above the usual ChatGPT Plus subscription? According to OpenAI, it “provides a secure, collaborative workspace to get the most out of ChatGPT at work.” Unlike Plus, OpenAI says it will not train AI models based on ChatGPT Team business data or conversations. It features an admin console for team management and the ability to share custom GPTs with your team. Like Plus, it also includes access to GPT-4 with the 32K context window, DALL-E 3, GPT-4 with Vision, Browsing, and Advanced Data Analysis—all with higher message caps.

Why would you want to use ChatGPT at work? OpenAI says it can help you generate better code, craft emails, analyze data, and more. Your mileage may vary, of course. As usual, our standard Ars warning about AI language models applies: “Bring your own data” for analysis, don’t rely on ChatGPT as a factual resource, and don’t rely on its outputs in ways you cannot personally confirm. OpenAI has provided more details about ChatGPT Team on its website.

Valve now allows the “vast majority” of AI-powered games on Steam

Open the floodgates —

New reporting system will enforce “guardrails” for “live-generated” AI content.

Can you tell which of these seemingly identical bits of Steam iconography were generated using AI? (Trick question: it’s none of them.)

Aurich Lawson

Last summer, Valve told Ars Technica that it was worried about potential legal issues surrounding games made with the assistance of AI models trained on copyrighted works and that it was “working through how to integrate [AI] into our already-existing review policies.” Today, the company is rolling out the results of that months-long review, announcing a new set of developer policies that it says “will enable us to release the vast majority of games that use [AI tools].”

Developers that use AI-powered tools “in the development [or] execution of your game” will now be allowed to put their games on Steam so long as they disclose that usage in the standard Content Survey when submitting to Steam. Such AI integration will be separated into categories of “pre-generated” content that is “created with the help of AI tools during development” (e.g., using DALL-E for in-game images) and “live-generated” content that is “created with the help of AI tools while the game is running” (e.g., using Nvidia’s AI-powered NPC technology).

Those disclosures will be shared on the Steam store pages for these games, which should help players who want to avoid certain types of AI content. But disclosure will not be sufficient for games that use live-generated AI for “Adult Only Sexual Content,” which Valve says it is “unable to release… right now.”

Put up the guardrails

For pre-generated AI content, Valve warns that developers still have to ensure that their games “will not include illegal or infringing content.” But that promise only extends to the “output of AI-generated content” and doesn’t address the copyright status of content used by the training models themselves. The status of those training models was a primary concern for Valve last summer when the company cited the “legal uncertainty relating to data used to train AI models,” but such concerns don’t even merit a mention in today’s new policies.

For live-generated content, on the other hand, Valve is requiring developers “to tell us what kind of guardrails you’re putting on your AI to ensure it’s not generating illegal content.” Such guardrails should hopefully prevent situations like that faced by AI Dungeon, which in 2021 drew controversy for using an OpenAI model that could be used to generate sexual content featuring children in the game. Valve says a new “in-game overlay” will allow players to submit reports if they run into that kind of inappropriate AI-generated content in Steam games.

Over the last year or so, many game developers have started to embrace a variety of AI tools in the creation of everything from background art and NPC dialogue to motion capture and voice generation. But some developers have taken a hardline stance against anything that could supplant the role of humans in game making. “We are extremely against the idea that anything creative could or should take [the] place of skilled specialists, to which we mean ourselves,” Digital Extremes Creative Director Rebecca Ford told the CBC last year.

In September, Epic Games CEO Tim Sweeney responded to reports of a ChatGPT-powered game being banned from Steam by explicitly welcoming such games on the Epic Games Store. “We don’t ban games for using new technologies,” Sweeney wrote on social media.

Regulators aren’t convinced that Microsoft and OpenAI operate independently

Under Microsoft’s thumb? —

EU is fielding comments on potential market harms of Microsoft’s investments.

European Union regulators are concerned that Microsoft may be covertly controlling OpenAI as its biggest investor.

On Tuesday, the European Commission (EC) announced that it is currently “checking whether Microsoft’s investment in OpenAI might be reviewable under the EU Merger Regulation.”

The EC’s executive vice president in charge of competition policy, Margrethe Vestager, said in the announcement that rapidly advancing AI technologies are “disruptive” and have “great potential,” but to protect EU markets, a forward-looking analysis scrutinizing antitrust risks has become necessary.

Hoping to thwart predictable anticompetitive risks, the EC has called for public comments. Regulators are particularly keen to hear from policy experts, academics, and industry and consumer organizations who can identify “potential competition issues” stemming from tech companies partnering to develop generative AI and virtual world/metaverse systems.

The EC worries that partnerships like Microsoft and OpenAI could “result in entrenched market positions and potential harmful competition behavior that is difficult to address afterwards.” That’s why Vestager said that these partnerships needed to be “closely” monitored now—”to ensure they do not unduly distort market dynamics.”

Microsoft has denied having control over OpenAI.

A Microsoft spokesperson told Ars that, rather than stifling competition, since 2019, the tech giant has “forged a partnership with OpenAI that has fostered more AI innovation and competition, while preserving independence for both companies.”

But ever since Sam Altman was bizarrely ousted by OpenAI’s board, then quickly reappointed as OpenAI’s CEO—joining Microsoft for the brief time in between—regulators have begun questioning whether recent governance changes mean that Microsoft’s got more control over OpenAI than the companies have publicly stated.

OpenAI did not immediately respond to Ars’ request to comment. Last year, OpenAI confirmed that “it remained independent and operates competitively,” CNBC reported.

Beyond the EU, the UK’s Competition and Markets Authority (CMA) and reportedly the US Federal Trade Commission have also launched investigations into Microsoft’s OpenAI investments. On January 3, the CMA ended its comments period, but it’s currently unclear whether significant competition issues were raised that could trigger a full-fledged CMA probe.

A CMA spokesperson declined Ars’ request to comment on the substance of comments received or to verify how many comments were received.

Antitrust legal experts told Reuters that authorities should act quickly to prevent “critical emerging technology” like generative AI from being “monopolized,” noting that before launching a probe, the CMA will need to find evidence showing that Microsoft’s influence over OpenAI materially changed after Altman’s reappointment.

The EC is also investigating partnerships beyond Microsoft and OpenAI, questioning whether agreements “between large digital market players and generative AI developers and providers” may impact EU market dynamics.

Microsoft observing OpenAI board meetings

In total, Microsoft has pumped $13 billion into OpenAI, CNBC reported, and the AI firm has a somewhat opaque corporate structure. OpenAI’s parent company, Reuters reported in December, is a nonprofit, which is “a type of entity rarely subject to antitrust scrutiny.” But in 2019, as Microsoft started investing billions into the AI company, OpenAI also “set up a for-profit subsidiary, in which Microsoft owns a 49 percent stake,” an insider source told Reuters. On Tuesday, the nonprofit consumer rights group Public Citizen called for California Attorney General Rob Bonta to “investigate whether OpenAI should retain its non-profit status.”

A Microsoft spokesperson told Reuters that the source’s information was inaccurate, reiterating that the terms of Microsoft’s agreement with OpenAI are confidential. Microsoft has maintained that while it is entitled to OpenAI’s profits, it does not own “any portion” of OpenAI.

After OpenAI’s drama with Altman ended with an overhaul of OpenAI’s board, Microsoft appeared to increase its involvement with OpenAI by receiving a non-voting observer role on the board. That’s what likely triggered lawmakers’ initial concerns that Microsoft “may be exerting control over OpenAI,” CNBC reported.

The EC’s announcement comes days after Microsoft confirmed that Dee Templeton would serve as the observer on OpenAI’s board, a move initially reported by Bloomberg.

Templeton has spent 25 years working for Microsoft and is currently vice president for technology and research partnerships and operations. According to Bloomberg, she has already attended OpenAI board meetings.

Microsoft’s spokesperson told Ars that adding a board observer was the only recent change in the company’s involvement in OpenAI. An OpenAI spokesperson told CNBC that Microsoft’s board observer has no “governing authority or control over OpenAI’s operations.”

By appointing Templeton as a board observer, Microsoft may simply be seeking to avoid any further surprises that could affect its investment in OpenAI, but the CMA has suggested that Microsoft’s involvement in the board may have created “a relevant merger situation” that could shake up competition in the UK if not appropriately regulated.

AI firms’ pledges to defend customers from IP issues have real limits

Read the fine print —

Indemnities offered by Amazon, Google, and Microsoft are narrow.

The Big Tech groups are competing to offer new services such as virtual assistants and chatbots as part of a multibillion-dollar bet on generative AI.

FT

The world’s biggest cloud computing companies that have pushed new artificial intelligence tools to their business customers are offering only limited protections against potential copyright lawsuits over the technology.

Amazon, Microsoft and Google are competing to offer new services such as virtual assistants and chatbots as part of a multibillion-dollar bet on generative AI—systems that can spew out humanlike text, images and code in seconds.

AI models are “trained” on data, such as photographs and text found on the internet. This has led to concern that rights holders, from media companies to image libraries, will make legal claims against third parties who use the AI tools trained on their copyrighted data.

The big three cloud computing providers have pledged to defend business customers from such intellectual property claims. But an analysis of the indemnity clauses published by the cloud computing companies shows that the legal protections only extend to the use of models developed by or with oversight from Google, Amazon and Microsoft.

“The indemnities are quite a smart bit of business . . . and make people think ‘I can use this without worrying’,” said Matthew Sag, professor of law at Emory University.

But Brenda Leong, a partner at Luminos Law, said it was “important for companies to understand that [the indemnities] are very narrowly focused and defined.”

Google, Amazon and Microsoft declined to comment.

The indemnities provided to customers do not cover use of third-party models, such as those developed by AI start-up Anthropic, which counts Amazon and Google as investors, even if these tools are available for use on the cloud companies’ platforms.

In the case of Amazon, only content produced by its own models, such as Titan, as well as a range of the company’s AI applications, are covered.

Similarly, Microsoft only provides protection for the use of tools that run on its in-house models and those developed by OpenAI, the startup with which it has a multibillion-dollar alliance.

“People needed those assurances to buy, because they were hyper aware of [the legal] risk,” said one IP lawyer working on the issues.

The three cloud providers, meanwhile, have been adding safety filters to their tools that aim to screen out any potentially problematic content that is generated. The tech groups had become “more satisfied that instances of infringements would be very low,” but did not want to provide “unbounded” protection, the lawyer said.

While the indemnification policies announced by Microsoft, Amazon, and Alphabet are similar, their customers may want to negotiate more specific indemnities in contracts tailored to their needs, though that is not yet common practice, people close to the cloud companies said.

OpenAI and Meta are among the companies fighting the first generative AI test cases brought by prominent authors and the comedian Sarah Silverman. They have focused in large part on allegations that the companies developing models unlawfully used copyrighted content to train them.

Indemnities were being offered as an added layer of “security” to users who might be worried about the prospect of more lawsuits, especially since the test cases could “take significant time to resolve,” which created a period of “uncertainty,” said Angela Dunning, a partner at law firm Cleary Gottlieb.

However, Google’s indemnity does not extend to models that have been “fine-tuned” by customers using their internal company data—a practice that allows businesses to train general models to produce more relevant and specific results—while Microsoft’s does.

Amazon’s covers Titan models that have been customized in this way, but if the alleged infringement is due to the fine-tuning, the protection is voided.

Legal claims brought against the users—rather than the makers—of generative AI tools may be challenging to win, however.

When dismissing part of a claim brought by three artists a year ago against AI companies Stability AI, DeviantArt, and Midjourney, US Judge William Orrick said one “problem” was that it was “not plausible” that every image generated by the tools had relied on “copyrighted training images.”

For copyright infringement to apply, the AI-generated images must be shown to be “substantially similar” to the copyrighted images, Orrick said.

© 2024 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.

How much detail is too much? Midjourney v6 attempts to find out

An AI-generated image of a “Beautiful queen of the universe looking at the camera in sci-fi armor, snow and particles flowing, fire in the background” created using alpha Midjourney v6.

Midjourney

In December, just before Christmas, Midjourney launched an alpha version of its latest image synthesis model, Midjourney v6. Over winter break, Midjourney fans put the new AI model through its paces, with the results shared on social media. So far, fans have noted much more detail than v5.2 (the current default) and a different approach to prompting. Version 6 can also handle generating text in a rudimentary way, but it’s far from perfect.

“It’s definitely a crazy update, both in good and less good ways,” artist Julie Wieland, who frequently shares her Midjourney creations online, told Ars. “The details and scenery are INSANE, the downside (for now) are that the generations are very high contrast and overly saturated (imo). Plus you need to kind of re-adapt and rethink your prompts, working with new structures and now less is kind of more in terms of prompting.”

At the same time, critics of the service still bristle about Midjourney training its models using human-made artwork scraped from the web and obtained without permission—a controversial practice common among AI model trainers we have covered in detail in the past. We’ve also covered the challenges artists might face in the future from these technologies elsewhere.

Too much detail?

With AI-generated detail ramping up dramatically between major Midjourney versions, one could wonder if there is ever such a thing as “too much detail” in an AI-generated image. Midjourney v6 seems to be testing that very question, creating many images that sometimes seem more detailed than reality in an unrealistic way, although that can be modified with careful prompting.

  • An AI-generated image of a nurse in the 1960s created using alpha Midjourney v6.

    Midjourney

  • An AI-generated image of an astronaut created using alpha Midjourney v6.

    Midjourney

  • An AI-generated image of a “juicy flaming cheeseburger” created using alpha Midjourney v6.

    Midjourney

  • An AI-generated image of “a handsome Asian man” created using alpha Midjourney v6.

    Midjourney

  • An AI-generated image of an “Apple II” sitting on a desk in the 1980s created using alpha Midjourney v6.

    Midjourney

  • An AI-generated image of a “photo of a cat in a car holding a can of beer” created using alpha Midjourney v6.

    Midjourney

  • An AI-generated image of a forest path created using alpha Midjourney v6.

    Midjourney

  • An AI-generated image of a woman among flowers created using alpha Midjourney v6.

    Midjourney

  • An AI-generated image of “a plate of delicious pickles” created using alpha Midjourney v6.

    Midjourney

  • An AI-generated image of a barbarian beside a TV set that says “Ars Technica” on it created using alpha Midjourney v6.

    Midjourney

  • An AI-generated image of “Abraham Lincoln holding a sign that says Ars Technica” created using alpha Midjourney v6.

    Midjourney

  • An AI-generated image of Mickey Mouse holding a machine gun created using alpha Midjourney v6.

    Midjourney

In our testing of version 6 (which can currently be invoked with the “--v 6.0” argument at the end of a prompt), we noticed times when the new model appeared to produce worse results than v5.2, but Midjourney veterans like Wieland tell Ars that those differences are largely due to the different way that v6.0 interprets prompts. That is something Midjourney is continuously updating over time. “Old prompts sometimes work a bit better than the day they released it,” Wieland told us.

Android users could soon replace Google Assistant with ChatGPT

Who’s going to make a ChatGPT speaker? —

The Android ChatGPT app is working on support for Android’s assistant APIs.

Aurich Lawson | Getty Images

Hey Android users, are you tired of Google’s neglect of Google Assistant? Well, one of Google’s biggest rivals, OpenAI’s ChatGPT, is apparently coming for the premium phone space occupied by Google’s voice assistant. Mishaal Rahman at Android Authority found that the ChatGPT app is working on support for Android’s voice assistant APIs and a system-wide overlay UI. If the company rolls out this feature, users could set the ChatGPT app as the system-wide assistant app, allowing it to pop up anywhere in Android and respond to user questions. ChatGPT started as a text-only generative AI but received voice and image input capabilities in September.

Usually, it’s the Google Assistant with system-wide availability in Android, but that’s not special home cooking from Google—it all happens via public APIs that technically any app can plug into. You can only have one app enabled as the system-wide “Default Assistant App,” and beyond the initial setting, the user always has to change it manually. The assistant APIs are designed to be powerful, keeping some parts of the app running 24/7 no matter where you are. Being the default Assistant app enables launching the app via the power button or a gesture, and the assist app can read the current screen text and images for processing.

The Default Assistant App settings.

Ron Amadeo

If some Android manufacturer signed a deal with ChatGPT and included it as a bundled system application, ChatGPT could even use an always-on voice hotword, where saying something like “Hey, ChatGPT” would launch the app even when the screen is off. System apps get more permissions than normal apps, though, and an always-on hotword is locked behind these system app permissions, so ChatGPT would need to sign a distribution deal with some Android manufacturer. Given the red-hot popularity of ChatGPT, though, I’m sure a few would sign up if it were offered.

Rahman found that ChatGPT version 1.2023.352, released last month, included a new activity named “com.openai.voice.assistant.AssistantActivity.” He managed to turn on the normally disabled feature that revealed ChatGPT’s new overlay API. This is the usual semi-transparent spinning orb UI that voice assistants use, although Rahman couldn’t get it to respond to a voice command just yet. This is all half-broken and under development, so it might never see a final release, but companies usually release the features they’re working on.

Of course, the problem with any of these third-party voice assistant apps as a Google Assistant replacement is that they don’t have a serious app ecosystem. As with Bixby and Alexa, there are no good apps to host your notes, reminders, calendar entries, shopping list items, or any other input-based functions you might want to do. As a replacement for Google Search, though, where you ask it a question and get an answer, it would probably be a decent alternative.

Google has neglected Google Assistant for years, but with the rise of generative AI, it’s working on revamping Assistant with some Google Bard smarts. It’s also reportedly working on a different assistant, “Pixie,” which would apparently launch with the Pixel 9, but that will be near the end of 2024.

ChatGPT bombs test on diagnosing kids’ medical cases with 83% error rate

Not there yet —

It was bad at recognizing relationships and needs selective training, researchers say.

Dr. Greg House has a better rate of accurately diagnosing patients than ChatGPT.

ChatGPT is still no House, MD.

While the chatty AI bot has previously underwhelmed with its attempts to diagnose challenging medical cases—with an accuracy rate of 39 percent in an analysis last year—a study out this week in JAMA Pediatrics suggests the fourth version of the large language model is especially bad with kids. It had an accuracy rate of just 17 percent when diagnosing pediatric medical cases.

The low success rate suggests human pediatricians won’t be out of jobs any time soon, in case that was a concern. As the authors put it: “[T]his study underscores the invaluable role that clinical experience holds.” But it also identifies the critical weaknesses that led to ChatGPT’s high error rate and ways to transform it into a useful tool in clinical care. With so much interest and experimentation with AI chatbots, many pediatricians and other doctors see their integration into clinical care as inevitable.

The medical field has generally been an early adopter of AI-powered technologies, resulting in some notable failures, such as creating algorithmic racial bias, as well as successes, such as automating administrative tasks and helping to interpret chest scans and retinal images. There’s also a lot in between. But AI’s potential for problem-solving has raised considerable interest in developing it into a helpful tool for complex diagnostics—no eccentric, prickly, pill-popping medical genius required.

In the new study conducted by researchers at Cohen Children’s Medical Center in New York, ChatGPT-4 showed it isn’t ready for pediatric diagnoses yet. Compared to general cases, pediatric ones require more consideration of the patient’s age, the researchers note. And as any parent knows, diagnosing conditions in infants and small children is especially hard when they can’t pinpoint or articulate all the symptoms they’re experiencing.

For the study, the researchers put the chatbot up against 100 pediatric case challenges published in JAMA Pediatrics and NEJM between 2013 and 2023. These are medical cases published as challenges or quizzes. Physicians reading along are invited to try to come up with the correct diagnosis of a complex or unusual case based on the information that attending doctors had at the time. Sometimes, the publications also explain how attending doctors got to the correct diagnosis.

Missed connections

For ChatGPT’s test, the researchers pasted the relevant text of the medical cases into the prompt, and then two qualified physician-researchers scored the AI-generated answers as correct, incorrect, or “did not fully capture the diagnosis.” In the latter case, ChatGPT came up with a clinically related condition that was too broad or unspecific to be considered the correct diagnosis. For instance, ChatGPT diagnosed one child’s case as caused by a branchial cleft cyst—a lump in the neck or below the collarbone—when the correct diagnosis was Branchio-oto-renal syndrome, a genetic condition that causes the abnormal development of tissue in the neck, and malformations in the ears and kidneys. One of the signs of the condition is the formation of branchial cleft cysts.

Overall, ChatGPT got the right answer in just 17 of the 100 cases. It was plainly wrong in 72 cases, and did not fully capture the diagnosis of the remaining 11 cases. Among the 83 wrong diagnoses, 47 (57 percent) were in the same organ system.
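
For anyone checking the arithmetic behind the headline figure, the reported percentages follow directly from those counts; here is a quick Python sanity check using only the numbers as reported in the study:

    # Recompute the reported rates from the study's raw counts.
    correct, incorrect, partial = 17, 72, 11
    total = correct + incorrect + partial            # 100 cases
    error_rate = (incorrect + partial) / total       # 0.83 -> the 83% error rate
    same_organ_share = 47 / (incorrect + partial)    # ~0.566 -> reported as 57 percent
    print(f"errors: {error_rate:.0%}, same organ system: {same_organ_share:.0%}")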

Among the failures, researchers noted that ChatGPT appeared to struggle with spotting known relationships between conditions that an experienced physician would hopefully pick up on. For example, it didn’t make the connection between autism and scurvy (Vitamin C deficiency) in one medical case. Neuropsychiatric conditions, such as autism, can lead to restricted diets, and that in turn can lead to vitamin deficiencies. As such, neuropsychiatric conditions are notable risk factors for the development of vitamin deficiencies in kids living in high-income countries, and clinicians should be on the lookout for them. ChatGPT, meanwhile, came up with the diagnosis of a rare autoimmune condition.

Though the chatbot struggled in this test, the researchers suggest it could improve by being specifically and selectively trained on accurate and trustworthy medical literature—not stuff on the Internet, which can include inaccurate information and misinformation. They also suggest chatbots could improve with more real-time access to medical data, allowing the models to refine their accuracy, described as “tuning.”

“This presents an opportunity for researchers to investigate if specific medical data training and tuning can improve the diagnostic accuracy of LLM-based chatbots,” the authors conclude.
