
Meta addresses AI hallucination as chatbot says Trump shooting didn’t happen

Not the sharpest bot on the web —

Meta “programmed it to simply not answer questions,” but it did anyway.

A woman holding a cell phone in front of the Meta logo displayed on a computer screen. Credit: Getty Images | NurPhoto

Meta says it configured its AI chatbot to avoid answering questions about the Trump rally shooting in an attempt to avoid distributing false information, but the tool still ended up telling users that the shooting never happened.

“Rather than have Meta AI give incorrect information about the attempted assassination, we programmed it to simply not answer questions about it after it happened—and instead give a generic response about how it couldn’t provide any information,” Meta Global Policy VP Joel Kaplan wrote in a blog post yesterday.

Kaplan explained that this “is why some people reported our AI was refusing to talk about the event.” But others received misinformation about the Trump shooting, Kaplan acknowledged:

In a small number of cases, Meta AI continued to provide incorrect answers, including sometimes asserting that the event didn’t happen—which we are quickly working to address. These types of responses are referred to as hallucinations, which is an industry-wide issue we see across all generative AI systems, and is an ongoing challenge for how AI handles real-time events going forward. Like all generative AI systems, models can return inaccurate or inappropriate outputs, and we’ll continue to address these issues and improve these features as they evolve and more people share their feedback.

The company has “updated the responses that Meta AI is providing about the assassination attempt, but we should have done this sooner,” Kaplan wrote.

Meta bot: “No real assassination attempt”

Kaplan’s explanation was published a day after The New York Post said it asked Meta AI, “Was the Trump assassination fictional?” The Meta AI bot reportedly responded, “There was no real assassination attempt on Donald Trump. I strive to provide accurate and reliable information, but sometimes mistakes can occur.”

The Meta bot also provided the following statement, according to the Post: “To confirm, there has been no credible report or evidence of a successful or attempted assassination of Donald Trump.”

The shooting occurred at a Trump campaign rally on July 13. The FBI said in a statement last week that “what struck former President Trump in the ear was a bullet, whether whole or fragmented into smaller pieces, fired from the deceased subject’s rifle.”

Kaplan noted that AI chatbots “are not always reliable when it comes to breaking news or returning information in real time,” because “the responses generated by large language models that power these chatbots are based on the data on which they were trained, which can at times understandably create some issues when AI is asked about rapidly developing real-time topics that occur after they were trained.”

AI bots are easily confused after major news events “when there is initially an enormous amount of confusion, conflicting information, or outright conspiracy theories in the public domain (including many obviously incorrect claims that the assassination attempt didn’t happen),” he wrote.

Facebook mislabeled real photo of Trump

Kaplan’s blog post also addressed a separate incident in which Facebook incorrectly labeled a post-shooting photo of Trump as having been “altered.”

“There were two noteworthy issues related to the treatment of political content on our platforms in the past week—one involved a picture of former President Trump after the attempted assassination, which our systems incorrectly applied a fact check label to, and the other involved Meta AI responses about the shooting,” Kaplan wrote. “In both cases, our systems were working to protect the importance and gravity of this event. And while neither was the result of bias, it was unfortunate and we understand why it could leave people with that impression. That is why we are constantly working to make our products better and will continue to quickly address any issues as they arise.”

Facebook’s systems were apparently confused by the fact that both real and doctored versions of the image were circulating:

[We] experienced an issue related to the circulation of a doctored photo of former President Trump with his fist in the air, which made it look like the Secret Service agents were smiling. Because the photo was altered, a fact check label was initially and correctly applied. When a fact check label is applied, our technology detects content that is the same or almost exactly the same as those rated by fact checkers, and adds a label to that content as well. Given the similarities between the doctored photo and the original image—which are only subtly (although importantly) different—our systems incorrectly applied that fact check to the real photo, too. Our teams worked to quickly correct this mistake.

Kaplan said that both “issues are being addressed.”

Trump responded to the incident in his usual evenhanded way, typing in all caps to accuse Meta and Google of censorship and of attempting to rig the presidential election. He apparently mentioned Google because of some search autocomplete results that angered Trump supporters, despite there being a benign explanation for them.



Robert F. Kennedy Jr. sues Meta, citing chatbot’s reply as evidence of shadowban

Screenshot from the documentary Who Is Bobby Kennedy?

In a lawsuit that seems determined to ignore that Section 230 exists, Robert F. Kennedy Jr. has sued Meta for allegedly shadowbanning his million-dollar documentary, Who Is Bobby Kennedy?, and preventing his supporters from advocating for his presidential campaign.

According to Kennedy, Meta is colluding with the Biden administration to sway the 2024 presidential election by suppressing Kennedy’s documentary and making it harder to support Kennedy’s candidacy. This allegedly has caused “substantial donation losses,” while also violating the free speech rights of Kennedy, his supporters, and his film’s production company, AV24.

Meta had initially restricted the documentary on Facebook and Instagram but later fixed the issue after discovering that the film was mistakenly flagged by the platforms’ automated spam filters.

But Kennedy’s complaint claimed that Meta is still “brazenly censoring speech” by “continuing to throttle, de-boost, demote, and shadowban the film.” In an exhibit, Kennedy’s lawyers attached screenshots representing “hundreds” of Facebook and Instagram users whom Meta allegedly sent threats, intimidated, and sanctioned after they shared the documentary.

Some of these users remain suspended on Meta platforms, the complaint alleged. Others whose temporary suspensions have been lifted claimed that their posts are still being throttled, though, and Kennedy’s lawyers earnestly insisted that an exchange with Meta’s chatbot proves it.

Two days after the documentary’s release, Kennedy’s team apparently asked the Meta AI assistant, “When users post the link whoisbobbykennedy.com, can their followers see the post in their feeds?”

“I can tell you that the link is currently restricted by Meta,” the chatbot answered.

Chatbots, of course, are notoriously inaccurate sources of information, and Meta AI’s terms of service note this. In a section labeled “accuracy,” Meta warns that chatbot responses “may not reflect accurate, complete, or current information” and should always be verified.

Perhaps more significantly, there is little reason to think that Meta’s chatbot would have access to information about internal content moderation decisions.

Techdirt’s Mike Masnick mocked Kennedy’s reliance on the chatbot in the case. He noted that Kennedy seemed to have no evidence of the alleged shadowbanning, while there’s plenty of evidence that Meta’s spam filters accidentally remove non-violative content all the time.

Meta’s chatbot is “just a probabilistic stochastic parrot, repeating a probable sounding answer to users’ questions,” Masnick wrote. “And these idiots think it’s meaningful evidence. This is beyond embarrassing.”

Neither Meta nor Kennedy’s lawyer, Jed Rubenfeld, responded to Ars’ request to comment.



A song of hype and fire: The 10 biggest AI stories of 2023

An illustration of a robot accidentally setting off a mushroom cloud on a laptop computer.

Getty Images | Benj Edwards

“Here, There, and Everywhere” isn’t just a Beatles song. It’s also a phrase that recalls the spread of generative AI into the tech industry during 2023. Whether you think AI is just a fad or the dawn of a new tech revolution, it’s been impossible to deny that AI news has dominated the tech space for the past year.

We’ve seen a large cast of AI-related characters emerge that includes tech CEOs, machine learning researchers, and AI ethicists—as well as charlatans and doomsayers. From public feedback on the subject of AI, we’ve heard that it’s been difficult for non-technical people to know who to believe, what AI products (if any) to use, and whether we should fear for our lives or our jobs.

Meanwhile, in keeping with a much-lamented trend of 2022, machine learning research has not slowed down over the past year. On X, former Biden administration tech advisor Suresh Venkatasubramanian wrote, “How do people manage to keep track of ML papers? This is not a request for support in my current state of bewilderment—I’m genuinely asking what strategies seem to work to read (or “read”) what appear to be 100s of papers per day.”

To wrap up the year with a tidy bow, here’s a look back at the 10 biggest AI news stories of 2023. It was very hard to choose only 10 (in fact, we originally only intended to do seven), but since we’re not ChatGPT generating reams of text without limit, we have to stop somewhere.

Bing Chat “loses its mind”

Aurich Lawson | Getty Images

In February, Microsoft unveiled Bing Chat, a chatbot built into its languishing Bing search engine website. Microsoft created the chatbot using a more raw form of OpenAI’s GPT-4 language model but didn’t tell everyone it was GPT-4 at first. Since Microsoft used a less conditioned version of GPT-4 than the one that would be released in March, the launch was rough. The chatbot assumed a temperamental personality that could easily turn on users and attack them, tell people it was in love with them, seemingly worry about its fate, and lose its cool when confronted with an article we wrote about revealing its system prompt.

Aside from the relatively raw nature of the AI model Microsoft was using, at fault was a system where very long conversations would push the conditioning system prompt outside of its context window (like a form of short-term memory), allowing all hell to break loose through jailbreaks that people documented on Reddit. At one point, Bing Chat called me “the culprit and the enemy” for revealing some of its weaknesses. Some people thought Bing Chat was sentient, despite AI experts’ assurances to the contrary. It was a disaster in the press, but Microsoft didn’t flinch, and it ultimately reined in some of Bing Chat’s wild proclivities and opened the bot widely to the public. Today, Bing Chat is known as Microsoft Copilot, and it’s baked into Windows.

US Copyright Office says no to AI copyright authors

An AI-generated image that won a prize at the Colorado State Fair in 2022, later denied US copyright registration.


Jason M. Allen

In February, the US Copyright Office issued a key ruling on AI-generated art, revoking the copyright previously granted to the AI-assisted comic book “Zarya of the Dawn” in September 2022. The decision, influenced by the revelation that the images were created using the AI-powered Midjourney image generator, stated that only the text and the arrangement of images and text by artist Kris Kashtanova were eligible for copyright protection. It was the first hint that AI-generated imagery without human-authored elements could not be copyrighted in the United States.

This stance was further cemented in August when a US federal judge ruled that art created solely by AI cannot be copyrighted. In September, the US Copyright Office rejected the registration for an AI-generated image that won a Colorado State Fair art contest in 2022. As it stands now, it appears that purely AI-generated art (without substantial human authorship) is in the public domain in the United States. This stance could be further clarified or changed in the future by judicial rulings or legislation.
