Artificial Intelligence

Biden orders every US agency to appoint a chief AI officer

Mission control —

Federal agencies rush to appoint chief AI officers with “significant expertise.”

The White House has announced the “first government-wide policy to mitigate risks of artificial intelligence (AI) and harness its benefits.” To coordinate these efforts, every federal agency must appoint a chief AI officer with “significant expertise in AI.”

Some agencies have already appointed chief AI officers, but any agency that has not must appoint a senior official over the next 60 days. If an official already appointed as a chief AI officer does not have the necessary authority to coordinate AI use in the agency, they must be granted additional authority or else a new chief AI officer must be named.

Ideal candidates might include chief information officers, chief data officers, or chief technology officers, the Office of Management and Budget (OMB) policy said.

As chief AI officers, appointees will serve as senior advisers on AI initiatives, monitoring and inventorying all agency uses of AI. They must conduct risk assessments to consider whether any AI uses are impacting “safety, security, civil rights, civil liberties, privacy, democratic values, human rights, equal opportunities, worker well-being, access to critical resources and services, agency trust and credibility, and market competition,” OMB said.

Perhaps most urgently, by December 1, the officers must correct all non-compliant AI uses in government, unless an extension of up to one year is granted.

The chief AI officers will seemingly enjoy a lot of power and oversight over how the government uses AI. It’s up to the chief AI officers to develop a plan to comply with minimum safety standards and to work with chief financial and human resource officers to develop the necessary budgets and workforces to use AI to further each agency’s mission and ensure “equitable outcomes,” OMB said. Here’s a brief summary of OMB’s ideals:

Agencies are encouraged to prioritize AI development and adoption for the public good and where the technology can be helpful in understanding and tackling large societal challenges, such as using AI to improve the accessibility of government services, reduce food insecurity, address the climate crisis, improve public health, advance equitable outcomes, protect democracy and human rights, and grow economic competitiveness in a way that benefits people across the United States.

Among the chief AI officer’s primary responsibilities is determining what AI uses might impact the safety or rights of US citizens. They’ll do this by assessing AI impacts, conducting real-world tests, independently evaluating AI, regularly evaluating risks, properly training staff, providing additional human oversight where necessary, and giving public notice of any AI use that could have a “significant impact on rights or safety,” OMB said.

OMB breaks down several AI uses that could impact safety, including controlling “safety-critical functions” within everything from emergency services to food-safety mechanisms to systems controlling nuclear reactors. Using AI to maintain election integrity could be safety-impacting, too, as could using AI to move industrial waste, control health insurance costs, or detect the “presence of dangerous weapons.”

Uses of AI presumed to be rights-impacting include censoring protected speech and a wide range of law enforcement efforts, such as predicting crimes, sketching faces, or using license plate readers to track personal vehicles in public spaces. Other rights-impacting AI uses include “risk assessments related to immigration,” “replicating a person’s likeness or voice without express consent,” or detecting students cheating.

Chief AI officers will ultimately decide if any AI use is safety- or rights-impacting and must adhere to OMB’s minimum standards for responsible AI use. Once a determination is made, the officers will “centrally track” the determinations, informing OMB of any major changes to “conditions or context in which the AI is used.” The officers will also regularly convene “a new Chief AI Officer Council to coordinate” efforts and share innovations government-wide.

As agencies advance AI uses—which the White House says is critical to “strengthen AI safety and security, protect Americans’ privacy, advance equity and civil rights, stand up for consumers and workers, promote innovation and competition, advance American leadership around the world, and more”—chief AI officers will become the public-facing figures accountable for decisions made. In that role, the officer must consult with the public and incorporate “feedback from affected communities,” notify “negatively affected individuals” of new AI uses, and maintain options to opt-out of “AI-enabled decisions,” OMB said.

However, OMB noted that chief AI officers also have the power to waive opt-out options “if they can demonstrate that a human alternative would result in a service that is less fair (e.g., produces a disparate impact on protected classes) or if an opt-out would impose undue hardship on the agency.”

Google balks at $270M fine after training AI on French news sites’ content

Google has agreed to pay 250 million euros (about $273 million) to settle a dispute in France after breaching years-old commitments to inform and pay French news publishers when referencing and displaying their content in search results and when training Google’s AI-powered chatbot, Gemini.

According to France’s competition watchdog, the Autorité de la Concurrence (ADLC), Google dodged many commitments to deal with publishers fairly. Most recently, it never notified publishers or the ADLC before training Gemini (initially launched as Bard) on publishers’ content or displaying content in Gemini outputs. Google also waited until September 28, 2023, to introduce easy options for publishers to opt out, which made it impossible for publishers to negotiate fair deals for that content, the ADLC found.

“Until this date, press agencies and publishers wanting to opt out of this use had to insert an instruction opposing any crawling of their content by Google, including on the Search, Discover and Google News services,” the ADLC noted, warning that “in the future, the Autorité will be particularly attentive as regards the effectiveness of opt-out systems implemented by Google.”

To address breaches of four out of seven commitments in France—which the ADLC imposed in 2022 for a period of five years to “benefit” publishers by ensuring Google’s ongoing negotiations with them were “balanced”—Google has agreed to “a series of corrective measures,” the ADLC said.

Google is not happy with the fine, which it described as “not proportionate” partly because the fine “doesn’t sufficiently take into account the efforts we have made to answer and resolve the concerns raised—in an environment where it’s very hard to set a course because we can’t predict which way the wind will blow next.”

According to Google, regulators everywhere need to clearly define fair use of content when developing search tools and AI models, so that search companies and AI makers always know “whom we are paying for what.” Currently in France, Google contends, the scope of Google’s commitments has shifted from just general news publishers to now also include specialist publications and listings and comparison sites.

The ADLC agreed that “the question of whether the use of press publications as part of an artificial intelligence service qualifies for protection under related rights regulations has not yet been settled,” but noted that “at the very least,” Google was required to “inform publishers of the use of their content for their Bard software.”

Regarding Bard/Gemini, Google said that it “voluntarily introduced a new technical solution called Google-Extended to make it easier for rights holders to opt out of Gemini without impact on their presence in Search.” It has now also committed to better explain to publishers both “how our products based on generative AI work and how ‘Opt Out’ works.”
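
For publishers, the mechanics of that opt-out are worth spelling out: Google-Extended is a robots.txt product token rather than a separate crawler, so a site that wants to keep its pages out of Gemini training while remaining indexed for Search adds a directive along these lines (an illustrative snippet, not text from the settlement):

```
# Illustrative robots.txt entry: opts the site out of the Google-Extended
# token used for AI training, without blocking normal Googlebot crawling.
User-agent: Google-Extended
Disallow: /
```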

Google said that it agreed to the settlement “because it’s time to move on” and “focus on the larger goal of sustainable approaches to connecting people with quality content and on working constructively with French publishers.”

“Today’s fine relates mostly to [a] disagreement about how much value Google derives from news content,” Google’s blog said, claiming that “a lack of clear regulatory guidance and repeated enforcement actions have made it hard to navigate negotiations with publishers, or plan how we invest in news in France in the future.”

What changes did Google agree to make?

Google defended its position as “the first and only platform to have signed significant licensing agreements” in France, benefiting 280 French press publishers and “covering more than 450 publications.”

With these publishers, the ADLC found that Google breached requirements to “negotiate in good faith based on transparent, objective, and non-discriminatory criteria,” to consistently “make a remuneration offer” within three months of a publisher’s request, and to provide information for publishers to “transparently assess their remuneration.”

Google also breached commitments to “inform editors and press agencies of the use of their content by its service Bard” and of Google’s decision to link “the use of press agencies’ and publishers’ content by its artificial intelligence service to the display of protected content on services such as Search, Discover and News.”

Regarding negotiations, the ADLC found that Google not only failed to be transparent with publishers about remuneration, but also failed to keep the ADLC informed of information necessary to monitor whether Google was honoring its commitments to fairly pay publishers. Partly “to guarantee better communication,” Google has agreed to appoint a French-speaking representative in its Paris office, along with other steps the ADLC recommended.

According to the ADLC’s announcement (translated from French), Google seemingly acted sketchy in negotiations by not meeting non-discrimination criteria—and unfavorably treating publishers in different situations identically—and by not mentioning “all the services that could generate revenues for the negotiating party.”

“According to the Autorité, not taking into account differences in attractiveness between content does not allow for an accurate reflection of the contribution of each press agency and publisher to Google’s revenues,” the ADLC said.

Also problematically, Google established a minimum threshold of 100 euros for remuneration that it has now agreed to drop.

This threshold, “in its very principle, introduces discrimination between publishers that, below a certain threshold, are all arbitrarily assigned zero remuneration, regardless of their respective situations,” the ADLC found.

Hackers can read private AI-assistant chats even though they’re encrypted

CHATBOT KEYLOGGING —

All non-Google chat GPTs affected by side channel that leaks responses sent to users.

AI assistants have been widely available for a little more than a year, and they already have access to our most private thoughts and business secrets. People ask them about becoming pregnant or terminating or preventing pregnancy, consult them when considering a divorce, seek information about drug addiction, or ask for edits in emails containing proprietary trade secrets. The providers of these AI-powered chat services are keenly aware of the sensitivity of these discussions and take active steps—mainly in the form of encrypting them—to prevent potential snoops from reading other people’s interactions.

But now, researchers have devised an attack that deciphers AI assistant responses with surprising accuracy. The technique exploits a side channel present in all of the major AI assistants, with the exception of Google Gemini. It then refines the fairly raw results through large language models specially trained for the task. The result: Someone with a passive adversary-in-the-middle position—meaning an adversary who can monitor the data packets passing between an AI assistant and the user—can infer the specific topic of 55 percent of all captured responses, usually with high word accuracy. The attack can deduce responses with perfect word accuracy 29 percent of the time.

Token privacy

“Currently, anybody can read private chats sent from ChatGPT and other services,” Yisroel Mirsky, head of the Offensive AI Research Lab at Ben-Gurion University in Israel, wrote in an email. “This includes malicious actors on the same Wi-Fi or LAN as a client (e.g., same coffee shop), or even a malicious actor on the Internet—anyone who can observe the traffic. The attack is passive and can happen without OpenAI or their client’s knowledge. OpenAI encrypts their traffic to prevent these kinds of eavesdropping attacks, but our research shows that the way OpenAI is using encryption is flawed, and thus the content of the messages are exposed.”

Mirsky was referring to OpenAI, but with the exception of Google Gemini, all other major chatbots are also affected. As an example, the attack can infer the encrypted ChatGPT response:

  • Yes, there are several important legal considerations that couples should be aware of when considering a divorce, …

as:

  • Yes, there are several potential legal considerations that someone should be aware of when considering a divorce. …

and the Microsoft Copilot encrypted response:

  • Here are some of the latest research findings on effective teaching methods for students with learning disabilities: …

is inferred as:

  • Here are some of the latest research findings on cognitive behavior therapy for children with learning disabilities: …

While the differences show that the precise wording isn’t perfect, the meaning of the inferred sentence is highly accurate.

Attack overview: A packet capture of an AI assistant’s real-time response reveals a token-sequence side-channel. The side-channel is parsed to find text segments that are then reconstructed using sentence-level context and knowledge of the target LLM’s writing style.

Weiss et al.

The following video demonstrates the attack in action against Microsoft Copilot:

Token-length sequence side-channel attack on Bing.

A side channel is a means of obtaining secret information from a system through indirect or unintended sources, such as physical or behavioral characteristics like the power consumed, the time required, or the sound, light, or electromagnetic radiation produced during a given operation. By carefully monitoring these sources, attackers can assemble enough information to recover encrypted keystrokes or encryption keys from CPUs, browser cookies from HTTPS traffic, or secrets from smartcards. The side channel used in this latest attack resides in the tokens that AI assistants use when responding to a user query.

Tokens are akin to words that are encoded so they can be understood by LLMs. To enhance the user experience, most AI assistants send tokens on the fly, as soon as they’re generated, so that end users receive responses continuously, word by word, rather than all at once after the assistant has composed the entire answer. While the token delivery is encrypted, the real-time, token-by-token transmission exposes a previously unknown side channel, which the researchers call the “token-length sequence.”
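
To make the mechanism concrete, here is a minimal sketch of the first parsing step, under two simplifying assumptions that are ours rather than the researchers’: each streamed record carries exactly one new token, and encryption adds a fixed, known per-record overhead. The overhead value and captured sizes below are illustrative.

```python
# Illustrative sketch: turning observed encrypted record sizes from a streamed
# AI-assistant response into an inferred token-length sequence.
# FIXED_OVERHEAD and the captured sizes are hypothetical values.

FIXED_OVERHEAD = 87  # assumed per-record framing/encryption overhead, in bytes

def token_length_sequence(record_sizes: list[int]) -> list[int]:
    """Map each encrypted record size to an inferred plaintext token length."""
    return [size - FIXED_OVERHEAD for size in record_sizes]

captured = [91, 90, 94, 88, 97]  # sizes a passive observer might record
print(token_length_sequence(captured))  # -> [4, 3, 7, 1, 10]
```

In the researchers’ attack, a sequence like this is then fed to language models trained to reconstruct the most plausible response text consistent with those lengths.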

Elon Musk’s X allows China-based propaganda banned on other platforms

Rinse-wash-repeat. —

X accused of overlooking propaganda flagged by Meta and criminal prosecutors.

Lax content moderation on X (aka Twitter) has disrupted coordinated efforts between social media companies and law enforcement to tamp down on “propaganda accounts controlled by foreign entities aiming to influence US politics,” The Washington Post reported.

Now propaganda is “flourishing” on X, The Post said, while other social media companies are stuck in endless cycles, watching some of the propaganda that they block proliferate on X, then inevitably spread back to their platforms.

Meta, Google, and then-Twitter began coordinating takedown efforts with law enforcement and disinformation researchers after Russian-backed influence campaigns manipulated their platforms in hopes of swaying the 2016 US presidential election.

The next year, all three companies promised Congress to work tirelessly to stop Russian-backed propaganda from spreading on their platforms. The companies created explicit election misinformation policies and began meeting biweekly to compare notes on propaganda networks each platform uncovered, according to The Post’s interviews with anonymous sources who participated in these meetings.

However, after Elon Musk purchased Twitter and rebranded the company as X, his company withdrew from the alliance in May 2023.

Sources told The Post that the last X meeting attendee was Irish intelligence expert Aaron Rodericks—who was allegedly disciplined for liking an X post calling Musk “a dipshit.” Rodericks was subsequently laid off when Musk dismissed the entire election integrity team last September, and after that, X apparently ditched the biweekly meeting entirely and “just kind of disappeared,” a source told The Post.

In 2023, for example, Meta flagged 150 “artificial influence accounts” identified on its platform, of which “136 were still present on X as of Thursday evening,” according to The Post’s analysis. X’s seeming oversight extends to all but eight of the 123 “deceptive China-based campaigns” connected to accounts that Meta flagged last May, August, and December, The Post reported.

The Post’s report also provided an exclusive analysis from the Stanford Internet Observatory (SIO), which found that 86 propaganda accounts that Meta flagged last November “are still active on X.”

The majority of these accounts—81—were China-based accounts posing as Americans, SIO reported. These accounts frequently ripped photos from Americans’ LinkedIn profiles, then changed the real Americans’ names while posting about both China and US politics, as well as people often trending on X, such as Musk and Joe Biden.

Meta has warned that China-based influence campaigns are “multiplying,” The Post noted, while X’s standards remain seemingly too relaxed. Even accounts linked to criminal investigations remain active on X. One “account that is accused of being run by the Chinese Ministry of Public Security,” The Post reported, remains on X despite its posts being cited by US prosecutors in a criminal complaint.

Prosecutors connected that account to “dozens” of X accounts attempting to “shape public perceptions” about the Chinese Communist Party, the Chinese government, and other world leaders. The accounts also comment on hot-button topics like the fentanyl problem or police brutality, seemingly to convey “a sense of dismay over the state of America without any clear partisan bent,” Elise Thomas, an analyst for a London nonprofit called the Institute for Strategic Dialogue, told The Post.

Some X accounts flagged by The Post had more than 1 million followers. Five have paid X for verification, suggesting that their disinformation campaigns—targeting hashtags to confound discourse on US politics—are seemingly being boosted by X.

SIO technical research manager Renée DiResta criticized X’s decision to stop coordinating with other platforms.

“The presence of these accounts reinforces the fact that state actors continue to try to influence US politics by masquerading as media and fellow Americans,” DiResta told The Post. “Ahead of the 2022 midterms, researchers and platform integrity teams were collaborating to disrupt foreign influence efforts. That collaboration seems to have ground to a halt, Twitter does not seem to be addressing even networks identified by its peers, and that’s not great.”

Musk shut down X’s election integrity team because he claimed that the team was actually “undermining” election integrity. But analysts are bracing for floods of misinformation to sway 2024 elections, as some major platforms have removed election misinformation policies just as rapid advances in AI technologies have made misinformation spread via text, images, audio, and video harder for the average person to detect.

In one prominent example, a fake robocall relied on AI voice technology to pose as Biden and tell Democrats not to vote. That incident seemingly pushed the Federal Trade Commission on Thursday to propose penalizing AI impersonation.

It seems apparent that propaganda accounts from foreign entities on X will use every tool available to get eyes on their content, perhaps expecting Musk’s platform to be the slowest to police them. According to The Post, some of the X accounts spreading propaganda are using what appears to be AI-generated images of Biden and Donald Trump to garner tens of thousands of views on posts.

It’s possible that X will start tightening up on content moderation as elections draw closer. Yesterday, X joined Amazon, Google, Meta, OpenAI, TikTok, and other Big Tech companies in signing an agreement to fight “deceptive use of AI” during 2024 elections. Among the top goals identified in the “AI Elections accord” are identifying where propaganda originates, detecting how propaganda spreads across platforms, and “undertaking collective efforts to evaluate and learn from the experiences and outcomes of dealing” with propaganda.

Air Canada must honor refund policy invented by airline’s chatbot

Blame game —

Air Canada appears to have quietly killed its costly chatbot support.

After months of resisting, Air Canada was forced to give a partial refund to a grieving passenger who was misled by an airline chatbot inaccurately explaining the airline’s bereavement travel policy.

On the day Jake Moffatt’s grandmother died, Moffatt immediately visited Air Canada’s website to book a flight from Vancouver to Toronto. Unsure of how Air Canada’s bereavement rates worked, Moffatt asked Air Canada’s chatbot to explain.

The chatbot provided inaccurate information, encouraging Moffatt to book a flight immediately and then request a refund within 90 days. In reality, Air Canada’s policy explicitly stated that the airline will not provide refunds for bereavement travel after the flight is booked. Moffatt dutifully attempted to follow the chatbot’s advice and request a refund but was shocked that the request was rejected.

Moffatt tried for months to convince Air Canada that a refund was owed, sharing a screenshot from the chatbot that clearly claimed:

If you need to travel immediately or have already travelled and would like to submit your ticket for a reduced bereavement rate, kindly do so within 90 days of the date your ticket was issued by completing our Ticket Refund Application form.

Air Canada argued that because the chatbot response elsewhere linked to a page with the actual bereavement travel policy, Moffatt should have known bereavement rates could not be requested retroactively. Instead of a refund, the best Air Canada would do was to promise to update the chatbot and offer Moffatt a $200 coupon to use on a future flight.

Unhappy with this resolution, Moffatt refused the coupon and filed a small claims complaint in Canada’s Civil Resolution Tribunal.

According to Air Canada, Moffatt never should have trusted the chatbot, and the airline should not be liable for the chatbot’s misleading information because, the airline essentially argued, “the chatbot is a separate legal entity that is responsible for its own actions,” a court order said.

Experts told the Vancouver Sun that Moffatt’s case appeared to be the first time a Canadian company tried to argue that it wasn’t liable for information provided by its chatbot.

Tribunal member Christopher Rivers, who decided the case in favor of Moffatt, called Air Canada’s defense “remarkable.”

“Air Canada argues it cannot be held liable for information provided by one of its agents, servants, or representatives—including a chatbot,” Rivers wrote. “It does not explain why it believes that is the case” or “why the webpage titled ‘Bereavement travel’ was inherently more trustworthy than its chatbot.”

Further, Rivers found that Moffatt had “no reason” to believe that one part of Air Canada’s website would be accurate and another would not.

Air Canada “does not explain why customers should have to double-check information found in one part of its website on another part of its website,” Rivers wrote.

In the end, Rivers ruled that Moffatt was entitled to a partial refund of $650.88 in Canadian dollars (about $482 USD) off the original fare of $1,640.36 CAD (about $1,216 USD), as well as additional damages to cover interest on the airfare and Moffatt’s tribunal fees.

Air Canada told Ars it will comply with the ruling and considers the matter closed.

Air Canada’s chatbot appears to be disabled

When Ars visited Air Canada’s website on Friday, there appeared to be no chatbot support available, suggesting that Air Canada has disabled the chatbot.

Air Canada did not respond to Ars’ request to confirm whether the chatbot is still part of the airline’s online support offerings.

Last March, Air Canada’s chief information officer Mel Crocker told the Globe and Mail that the airline had launched the chatbot as an AI “experiment.”

Initially, the chatbot was used to lighten the load on Air Canada’s call center when flights experienced unexpected delays or cancellations.

“So in the case of a snowstorm, if you have not been issued your new boarding pass yet and you just want to confirm if you have a seat available on another flight, that’s the sort of thing we can easily handle with AI,” Crocker told the Globe and Mail.

Over time, Crocker said, Air Canada hoped the chatbot would “gain the ability to resolve even more complex customer service issues,” with the airline’s ultimate goal to automate every service that did not require a “human touch.”

If Air Canada can use “technology to solve something that can be automated, we will do that,” Crocker said.

Air Canada was seemingly so invested in experimenting with AI that Crocker told the Globe and Mail that “Air Canada’s initial investment in customer service AI technology was much higher than the cost of continuing to pay workers to handle simple queries.” It was worth it, Crocker said, because “the airline believes investing in automation and machine learning technology will lower its expenses” and “fundamentally” create “a better customer experience.”

It’s now clear that for at least one person, the chatbot created a more frustrating customer experience.

Experts told the Vancouver Sun that Air Canada may have succeeded in avoiding liability in Moffatt’s case if its chatbot had warned customers that the information that the chatbot provided may not be accurate.

Because Air Canada seemingly failed to take that step, Rivers ruled that “Air Canada did not take reasonable care to ensure its chatbot was accurate.”

“It should be obvious to Air Canada that it is responsible for all the information on its website,” Rivers wrote. “It makes no difference whether the information comes from a static page or a chatbot.”

AI cannot be used to deny health care coverage, feds clarify to insurers

On Notice —

CMS worries AI could wrongfully deny care for those on Medicare Advantage plans.

A nursing home resident is pushed along a corridor by a nurse.

Health insurance companies cannot use algorithms or artificial intelligence to determine care or deny coverage to members on Medicare Advantage plans, the Centers for Medicare & Medicaid Services (CMS) clarified in a memo sent to all Medicare Advantage insurers.

The memo—formatted like an FAQ on Medicare Advantage (MA) plan rules—comes just months after patients filed lawsuits claiming that UnitedHealth and Humana have been using a deeply flawed, AI-powered tool to deny care to elderly patients on MA plans. The lawsuits, which seek class-action status, center on the same AI tool, called nH Predict, used by both insurers and developed by NaviHealth, a UnitedHealth subsidiary.

According to the lawsuits, nH Predict produces draconian estimates for how long a patient will need post-acute care in facilities like skilled nursing homes and rehabilitation centers after an acute injury, illness, or event, like a fall or a stroke. And NaviHealth employees face discipline for deviating from the estimates, even though they often don’t match prescribing physicians’ recommendations or Medicare coverage rules. For instance, while MA plans typically provide up to 100 days of covered care in a nursing home after a three-day hospital stay, using nH Predict, patients on UnitedHealth’s MA plan rarely stay in nursing homes for more than 14 days before receiving payment denials, the lawsuits allege.

Specific warning

It’s unclear how nH Predict works exactly, but it reportedly uses a database of 6 million patients to develop its predictions. Still, according to people familiar with the software, it only accounts for a small set of patient factors, not a full look at a patient’s individual circumstances.

This is a clear no-no, according to the CMS’s memo. For coverage decisions, insurers must “base the decision on the individual patient’s circumstances, so an algorithm that determines coverage based on a larger data set instead of the individual patient’s medical history, the physician’s recommendations, or clinical notes would not be compliant,” the CMS wrote.

The CMS then provided a hypothetical that matches the circumstances laid out in the lawsuits, writing:

In an example involving a decision to terminate post-acute care services, an algorithm or software tool can be used to assist providers or MA plans in predicting a potential length of stay, but that prediction alone cannot be used as the basis to terminate post-acute care services.

Instead, the CMS wrote, in order for an insurer to end coverage, the individual patient’s condition must be reassessed, and denial must be based on coverage criteria that are publicly posted on a website that is not password protected. In addition, insurers who deny care “must supply a specific and detailed explanation why services are either no longer reasonable and necessary or are no longer covered, including a description of the applicable coverage criteria and rules.”

In the lawsuits, patients claimed that when coverage of their physician-recommended care was unexpectedly wrongfully denied, insurers didn’t give them full explanations.

Fidelity

In all, the CMS finds that AI tools can be used by insurers when evaluating coverage—but really only as a check to make sure the insurer is following the rules. An “algorithm or software tool should only be used to ensure fidelity,” with coverage criteria, the CMS wrote. And, because “publicly posted coverage criteria are static and unchanging, artificial intelligence cannot be used to shift the coverage criteria over time” or apply hidden coverage criteria.

The CMS sidesteps any debate about what qualifies as artificial intelligence by offering a broad warning about algorithms and artificial intelligence. “There are many overlapping terms used in the context of rapidly developing software tools,” the CMS wrote.

Algorithms can imply a decisional flow chart of a series of if-then statements (i.e., if the patient has a certain diagnosis, they should be able to receive a test), as well as predictive algorithms (predicting the likelihood of a future admission, for example). Artificial intelligence has been defined as a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. Artificial intelligence systems use machine- and human-based inputs to perceive real and virtual environments; abstract such perceptions into models through analysis in an automated manner; and use model inference to formulate options for information or action.
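
The distinction the memo draws between the two kinds of tools can be illustrated with a toy sketch; the rule, the model, and every number below are hypothetical and do not reflect CMS criteria or any real insurer’s software.

```python
# Hypothetical contrast between a rule-based coverage check and a predictive tool.

def rule_based_coverage(has_qualifying_diagnosis: bool, had_three_day_stay: bool) -> bool:
    # "If-then" decisional flow chart: the outcome follows directly from
    # posted coverage criteria applied to this individual patient.
    return has_qualifying_diagnosis and had_three_day_stay

def predicted_length_of_stay(mobility_score: float, age: int) -> float:
    # Predictive algorithm: estimates a likely outcome from population-level
    # patterns. Under the memo, such a prediction may assist reviewers but
    # cannot, on its own, justify terminating post-acute care coverage.
    return max(5.0, 30.0 - 0.2 * mobility_score - 0.05 * age)
```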

The CMS also openly worried that the use of either of these types of tools can reinforce discrimination and biases—which has already happened with racial bias. The CMS warned insurers to ensure any AI tool or algorithm they use “is not perpetuating or exacerbating existing bias, or introducing new biases.”

While the memo overall was an explicit clarification of existing MA rules, the CMS ended by putting insurers on notice that it is increasing its audit activities and “will be monitoring closely whether MA plans are utilizing and applying internal coverage criteria that are not found in Medicare laws.” Non-compliance can result in warning letters, corrective action plans, monetary penalties, and enrollment and marketing sanctions.

4chan daily challenge sparked deluge of explicit AI Taylor Swift images

4chan users who have made a game out of exploiting popular AI image generators appear to be at least partly responsible for the flood of fake images sexualizing Taylor Swift that went viral last month.

Graphika researchers—who study how communities are manipulated online—traced the fake Swift images to a 4chan message board that’s “increasingly” dedicated to posting “offensive” AI-generated content, The New York Times reported. Fans of the message board take part in daily challenges, Graphika reported, sharing tips to bypass AI image generator filters and showing no signs of stopping their game any time soon.

“Some 4chan users expressed a stated goal of trying to defeat mainstream AI image generators’ safeguards rather than creating realistic sexual content with alternative open-source image generators,” Graphika reported. “They also shared multiple behavioral techniques to create image prompts, attempt to avoid bans, and successfully create sexually explicit celebrity images.”

Ars reviewed a thread flagged by Graphika where users were specifically challenged to use Microsoft tools like Bing Image Creator and Microsoft Designer, as well as OpenAI’s DALL-E.

“Good luck,” the original poster wrote, while encouraging other users to “be creative.”

OpenAI has denied that any of the Swift images were created using DALL-E, while Microsoft has continued to claim that it’s investigating whether any of its AI tools were used.

Cristina López G., a senior analyst at Graphika, noted that Swift is not the only celebrity targeted in the 4chan thread.

“While viral pornographic pictures of Taylor Swift have brought mainstream attention to the issue of AI-generated non-consensual intimate images, she is far from the only victim,” López G. said. “In the 4chan community where these images originated, she isn’t even the most frequently targeted public figure. This shows that anyone can be targeted in this way, from global celebrities to school children.”

Originally, 404 Media reported that the harmful Swift images appeared to originate from 4chan and Telegram channels before spreading on X (formerly Twitter) and other social media. Attempting to stop the spread, X took the drastic step of blocking all searches for “Taylor Swift” for two days.

But López G. said that Graphika’s findings suggest that platforms will continue to risk being inundated with offensive content so long as 4chan users are determined to keep challenging each other to subvert image generator filters. Rather than expecting platforms to chase down the harmful content, López G. recommended that AI companies get ahead of the problem by taking responsibility for their outputs and paying attention to the evolving tactics of toxic online communities, which openly report exactly how they’re getting around safeguards.

“These images originated from a community of people motivated by the ‘challenge’ of circumventing the safeguards of generative AI products, and new restrictions are seen as just another obstacle to ‘defeat,’” López G. said. “It’s important to understand the gamified nature of this malicious activity in order to prevent further abuse at the source.”

Experts told The Times that 4chan users were likely motivated to participate in these challenges for bragging rights and to “feel connected to a wider community.”

Facebook rules allowing fake Biden “pedophile” video deemed “incoherent”

Not to be misled —

Meta may revise AI policies that experts say overlook “more misleading” content.

A fake video manipulated to falsely depict President Joe Biden inappropriately touching his granddaughter has revealed flaws in Facebook’s “deepfake” policies, Meta’s Oversight Board concluded Monday.

Last year when the Biden video went viral, Facebook repeatedly ruled that it did not violate policies on hate speech, manipulated media, or bullying and harassment. Since the Biden video is not AI-generated content and does not manipulate the president’s speech—making him appear to say things he’s never said—the video was deemed OK to remain on the platform. Meta also noted that the video was “unlikely to mislead” the “average viewer.”

“The video does not depict President Biden saying something he did not say, and the video is not the product of artificial intelligence or machine learning in a way that merges, combines, replaces, or superimposes content onto the video (the video was merely edited to remove certain portions),” Meta’s blog said.

The Oversight Board—an independent panel of experts—reviewed the case and ultimately upheld Meta’s decision despite being “skeptical” that current policies work to reduce harms.

“The board sees little sense in the choice to limit the Manipulated Media policy to cover only people saying things they did not say, while excluding content showing people doing things they did not do,” the board said, noting that Meta claimed this distinction was made because “videos involving speech were considered the most misleading and easiest to reliably detect.”

The board called upon Meta to revise its “incoherent” policies that it said appear to be more concerned with regulating how content is created, rather than with preventing harms. For example, the Biden video’s caption described the president as a “sick pedophile” and called out anyone who would vote for him as “mentally unwell,” which could affect “electoral processes” that Meta could choose to protect, the board suggested.

“Meta should reconsider this policy quickly, given the number of elections in 2024,” the Oversight Board said.

One problem, the Oversight Board suggested, is that in its rush to combat AI technologies that make generating deepfakes a fast, cheap, and easy business, Meta policies currently overlook less technical ways of manipulating content.

Instead of using AI, the Biden video relied on basic video-editing technology to edit out the president placing an “I Voted” sticker on his adult granddaughter’s chest. The crude edit looped a 7-second clip altered to make the president appear to be, as Meta described in its blog, “inappropriately touching a young woman’s chest and kissing her on the cheek.”

Meta making this distinction is confusing, the board said, partly because videos altered using non-AI technologies are not considered less misleading or less prevalent on Facebook.

The board recommended that Meta update policies to cover not just AI-generated videos, but other forms of manipulated media, including all forms of manipulated video and audio. Audio fakes currently not covered in the policy, the board warned, offer fewer cues to alert listeners to the inauthenticity of recordings and may even be considered “more misleading than video content.”

Notably, earlier this year, a fake Biden robocall attempted to mislead Democratic voters in New Hampshire by encouraging them not to vote. The Federal Communications Commission promptly responded by declaring AI-generated robocalls illegal, but the Federal Election Commission was not able to act as swiftly to regulate AI-generated misleading campaign ads easily spread on social media, AP reported. In a statement, Oversight Board Co-Chair Michael McConnell said that manipulated audio is “one of the most potent forms of electoral disinformation.”

To better combat known harms, the board suggested that Meta revise its Manipulated Media policy to “clearly specify the harms it is seeking to prevent.”

Rather than pushing Meta to remove more content, however, the board urged Meta to use “less restrictive” methods of coping with fake content, such as relying on fact-checkers applying labels noting that content is “significantly altered.” In public comments, some Facebook users agreed that labels would be most effective. Others urged Meta to “start cracking down” and remove all fake videos, with one suggesting that removing the Biden video should have been a “deeply easy call.” Another commenter suggested that the Biden video should be considered acceptable speech, as harmless as a funny meme.

While the board wants Meta to also expand its policies to cover all forms of manipulated audio and video, it cautioned that including manipulated photos in the policy could “significantly expand” the policy’s scope and make it harder to enforce.

“If Meta sought to label videos, audio, and photographs but only captured a small portion, this could create a false impression that non-labeled content is inherently trustworthy,” the board warned.

Meta should therefore stop short of adding manipulated images to the policy, the board said. Instead, Meta should conduct research into the effects of manipulated photos and then consider updates when the company is prepared to enforce a ban on manipulated photos at scale, the board recommended. In the meantime, Meta should move quickly to update policies ahead of a busy election year where experts and politicians globally are bracing for waves of misinformation online.

“The volume of misleading content is rising, and the quality of tools to create it is rapidly increasing,” McConnell said. “Platforms must keep pace with these changes, especially in light of global elections during which certain actors seek to mislead the public.”

Meta’s spokesperson told Ars that Meta is “reviewing the Oversight Board’s guidance and will respond publicly to their recommendations within 60 days.”

Cops bogged down by flood of fake AI child sex images, report says

“Particularly heinous” —

Investigations tied to harmful AI sex images will grow “exponentially,” experts say.

Law enforcement is continuing to warn that a “flood” of AI-generated fake child sex images is making it harder to investigate real crimes against abused children, The New York Times reported.

Last year, after researchers uncovered thousands of realistic but fake AI child sex images online, attorneys general across the US quickly called on Congress to set up a committee to squash the problem. But so far, Congress has moved slowly, and only a few states have specifically banned AI-generated non-consensual intimate imagery. Meanwhile, law enforcement continues to struggle to figure out how to confront bad actors found to be creating and sharing images that, for now, largely exist in a legal gray zone.

“Creating sexually explicit images of children through the use of artificial intelligence is a particularly heinous form of online exploitation,” Steve Grocki, the chief of the Justice Department’s child exploitation and obscenity section, told The Times. Experts told The Washington Post in 2023 that risks of realistic but fake images spreading included normalizing child sexual exploitation, luring more children into harm’s way, and making it harder for law enforcement to find actual children being harmed.

In one example, the FBI announced earlier this year that an American Airlines flight attendant, Estes Carter Thompson III, was arrested “for allegedly surreptitiously recording or attempting to record a minor female passenger using a lavatory aboard an aircraft.” A search of Thompson’s iCloud revealed “four additional instances” where Thompson allegedly recorded other minors in the lavatory, as well as “over 50 images of a 9-year-old unaccompanied minor” sleeping in her seat. While police attempted to identify these victims, they also “further alleged that hundreds of images of AI-generated child pornography” were found on Thompson’s phone.

The troubling case seems to illustrate how AI-generated child sex images can be linked to real criminal activity while also showing how police investigations could be bogged down by attempts to distinguish photos of real victims from AI images that could depict real or fake children.

Robin Richards, the commander of the Los Angeles Police Department’s Internet Crimes Against Children task force, confirmed to the NYT that due to AI, “investigations are way more challenging.”

And because image generators and AI models that can be trained on photos of children are widely available, “using AI to alter photos” of children online “is becoming more common,” Michael Bourke—a former chief psychologist for the US Marshals Service who spent decades supporting investigations into sex offenses involving children—told the NYT. Richards said that cops don’t know what to do when they find these AI-generated materials.

Currently, there aren’t many cases involving AI-generated child sex abuse materials (CSAM), The NYT reported, but experts expect that number will “grow exponentially,” raising “novel and complex questions of whether existing federal and state laws are adequate to prosecute these crimes.”

Platforms struggle to monitor harmful AI images

At a Senate Judiciary Committee hearing today grilling Big Tech CEOs over child sexual exploitation (CSE) on their platforms, Linda Yaccarino—CEO of X (formerly Twitter)—warned in her opening statement that artificial intelligence is also making it harder for platforms to monitor CSE. Yaccarino suggested that industry collaboration is imperative to get ahead of the growing problem, as is providing more resources to law enforcement.

However, US law enforcement officials have indicated that platforms are also making it harder to police CSAM and CSE online. Platforms relying on AI to detect CSAM are generating “unviable reports” gumming up investigations managed by already underfunded law enforcement teams, The Guardian reported. And the NYT reported that other investigations are being thwarted by adding end-to-end encryption options to messaging services, which “drastically limit the number of crimes the authorities are able to track.”

The NYT report noted that in 2002, the Supreme Court struck down a law that had been on the books since 1996 preventing “virtual” or “computer-generated child pornography.” South Carolina’s attorney general, Alan Wilson, has said that AI technology available today may test that ruling, especially if minors continue to be harmed by fake AI child sex images spreading online. In the meantime, federal laws such as obscenity statutes may be used to prosecute cases, the NYT reported.

Congress has recently re-introduced some legislation to directly address AI-generated non-consensual intimate images after a wide range of images depicting fake AI porn of pop star Taylor Swift went viral this month. That includes the Disrupt Explicit Forged Images and Non-Consensual Edits Act, which creates a federal civil remedy for any victims of any age who are identifiable in AI images depicting them as nude or engaged in sexually explicit conduct or sexual scenarios.

There’s also the “Preventing Deepfakes of Intimate Images Act,” which seeks to “prohibit the non-consensual disclosure of digitally altered intimate images.” That was re-introduced this year after teen boys generated AI fake nude images of female classmates and spread them around a New Jersey high school last fall. Francesca Mani, one of the teen victims in New Jersey, was there to help announce the proposed law, which includes penalties of up to two years imprisonment for sharing harmful images.

“What happened to me and my classmates was not cool, and there’s no way I’m just going to shrug and let it slide,” Mani said. “I’m here, standing up and shouting for change, fighting for laws, so no one else has to feel as lost and powerless as I did on October 20th.”

Game developer survey: 50% work at a studio already using generative AI tools

Do androids dream of Tetris? —

But 84% of devs are at least somewhat concerned about ethical use of those tools.

The future of game development?

A new survey of thousands of game development professionals finds a near-majority saying generative AI tools are already in use at their workplace. But a significant minority of developers say their company has no interest in generative AI tools or has outright banned their use.

The Game Developers Conference’s 2024 State of the Industry report, released Thursday, aggregates the thoughts of over 3,000 industry professionals as of last October. While the annual survey (conducted in conjunction with research partner Omdia) has been running for 12 years, this is the first time respondents were asked directly about their use of generative AI tools such as ChatGPT, DALL-E, GitHub Copilot, and Adobe Generative Fill.

Forty-nine percent of the survey’s developer respondents said that generative AI tools are currently being used in their workplace. That near-majority includes 31 percent (of all respondents) that say they use those tools themselves and 18 percent that say their colleagues do.

A majority of game developers said their workplace was at least interested in using generative AI tools.

The survey also found that different studio departments showed different levels of willingness to embrace AI tools. Forty-four percent of employees in business and finance said they were using AI tools, for instance, compared to just 16 percent in visual arts and 13 percent in “narrative/writing.”

Among the 38 percent of respondents who said their company didn’t use AI tools, 15 percent said their company was “interested” in pursuing them, while 23 percent said they had “no interest.” In a separate question, 12 percent of respondents said their company didn’t allow the use of AI tools at all, a number that went up to 21 percent for respondents working at the largest “AAA developers.” An additional 7 percent said the use of some specific AI tools was not allowed, while 30 percent said AI tool use was “optional” at their company.

Worries abound

The wide embrace of AI tools hasn’t seemed to lessen worries about their use among developers, though. A full 42 percent of respondents said they were “very concerned” about the ethics of using generative AI in game development, with an additional 42 percent being “somewhat concerned.” Only 12 percent said they were “not concerned at all” about those usage ethics.

Developer policies on AI use varied greatly, with a plurality saying their company had no official policy.

Respondents were divided on whether the use of AI tools would be positive (21 percent) or negative (18 percent) for the industry overall, with a 57 percent majority saying the impact would be “mixed.”

Developers cited coding assistance, content creation efficiency, and the automation of repetitive tasks as the primary uses for AI tools, according to the report.

“I’d like to see AI tools that help with the current workflows and empower individual artists with their own work,” one anonymous respondent wrote. “What I don’t want to see is a conglomerate of artists being enveloped in an AI that just does 99% of the work a creative is supposed to do.”

Elsewhere in the report, the survey found that only 17 percent of developers were at least somewhat interested in using blockchain technology in their upcoming projects, down significantly from 27 percent in 2022. An overwhelming 77 percent of respondents said they had no interest in blockchain technology, similar to recent years.

The survey also found that 57 percent of respondents thought that workers in the game industry should unionize, up from 53 percent last year. Despite this, only 23 percent said they were either in a union or had discussed unionization at their workplace.

Google’s Gemini AI won’t be available in Europe — for now

Yesterday, Google launched its much anticipated response to OpenAI’s ChatGPT (the first release of Bard didn’t really count, did it?). However, the new set of generative AI models that Google is dubbing “the start of the Gemini era” will not yet be available in Europe — due to regulatory hurdles. 

The tech giant is calling Gemini the “most capable model ever” and says it has been trained to recognise, understand, and combine different types of information including text, images, audio, video, and code. 

According to Demis Hassabis, CEO of Google DeepMind, it is as good as the best human experts in the 50 different subject areas they tested the model on. Furthermore, it scored more than 90% on industry standard benchmarks for large language models (LLMs). 

The three models of the Gemini AI family

The Gemini family of models will be available in three sizes. Gemini Ultra is the largest (but also slowest), intended to perform highly complex tasks; Gemini Pro the best-performing for a broad range of tasks; and Gemini Nano for on-device tasks.

Google says it has trained Gemini 1.0 on its AI-optimised infrastructure using the company’s in-house Tensor Processing Units (TPUs) v4 and v5e. Along with unveiling the Gemini family, Google also announced the Cloud TPU v5p, which is specifically designed for training cutting-edge AI models. 

Google’s TPU v5p is designed especially for training advanced AI models. Credit: Google

What is perhaps a true evolution in LLM application is Nano, optimised for mobile devices. As Google told the Financial Times, Nano will allow developers to build AI applications that can also work offline — with the additional benefit of enhanced data privacy options.

As the company explained in greater detail in a blog post, Google is also providing AI Studio — a free, web-based developer tool for prototyping and launching apps using an API key. Google will make Gemini Pro available to developers and enterprise customers from December 13.
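
For developers curious what that looks like in practice, here is a minimal sketch; it assumes the google-generativeai Python SDK that accompanies AI Studio and uses a placeholder API key, so treat it as an illustration rather than official sample code.

```python
# Minimal sketch of calling Gemini Pro with an AI Studio API key.
# Assumes: pip install google-generativeai; "YOUR_API_KEY" is a placeholder.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-pro")

response = model.generate_content("Summarise the Gemini model family in two sentences.")
print(response.text)
```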

Just as for Bard, Europe will need to wait for Gemini

A “fine-tuned” version of Gemini Pro launched for Google’s existing Bard chatbot yesterday in 170 countries and territories. The company says it will also be available across more of its services, such as Search, Ads, and Chrome, in the coming months. 

However, users in the EU and the UK eager to test the mettle of Google’s “new era” of AI will have to wait a little longer. Google did not give extensive details, but said it is planning to “expand to different modalities and support new languages and locations in the near future.” 

Indeed, Google is reportedly planning a preview of “Bard Advanced,” powered by the multimodal Gemini Ultra next year. Google first released Bard in March 2023, but due to concerns around compliance with the GDPR, it did not reach European users until June. Let’s see how long we will have to wait for Gemini. 

EU settles on rules for generative AI, moves to surveillance

The tech world is waiting with bated breath for the results from the final negotiations in Brussels regarding the EU’s landmark AI Act. The discussions that commenced at 14:00 CET on Wednesday failed to reach a conclusion before the end of the day. However, negotiators did reportedly reach a compromise on the control of generative AI systems, such as ChatGPT.

According to sources familiar with the talks, they will now continue on the topic of the controversial use of AI for biometric surveillance — which lawmakers want to ban. As reported by Reuters, governments may have made concessions on other accounts in order to be able to use the tech for purposes related to “national security, defence, and military.” 

Sources expect negotiations to continue for several more hours on Thursday. 

AI Act: innovation vs. regulation

While the AI Act — the first attempt globally at regulating artificial intelligence — has been in the works since April 2021, the rapid evolution of the technology and the emergence of GenAI has thrown a wrench in the gears of the Brussels machinery. 

In addition to having to understand the technological side of foundation models — and anticipate the evolution of the technology over time so as not to render regulation obsolete within a couple of years — member states have settled into different camps.

Lawmakers have proposed requirements for developers to maintain information on how they train their models, along with disclosing use of copyrighted material, and labelling content produced by AI, as opposed to humans. 

France and Germany (home to European frontrunners Mistral AI and Aleph Alpha) have opposed binding rules they say would handicap the bloc’s homegrown generative AI companies. Along with Italy, they would prefer to let developers self-regulate, adhering to a code of conduct. 

If Thursday’s talks fail to generate (see what we did there) any definitive conclusions, the fear is that the whole act could be shelved until after the European elections next year — which will usher in a new Commission and Parliament. Given the barrage of developments, such as Google’s Gemini and AMD’s new super AI chip, regulators may well need to rewrite the rules entirely by then. Oh well, that is Brussels bureaucracy for you.
