2024 election


Here are 3 science-backed strategies to rein in election anxiety

In this scenario, I encourage my patients to move past that initial thought of how awful it will be and instead consider exactly how they will respond to the inauguration, the next day, week, month, and so on.

Cognitive flexibility allows you to explore how you will cope, even in the face of a negative outcome, helping you feel a bit less out of control. If you’re experiencing a lot of anxiety about the election, try thinking through what you’d do if the undesirable candidate takes office—thoughts like “I’ll donate to causes that are important to me” and “I’ll attend protests.”

Choose your actions with intention

Another tool for managing your anxiety is to consider whether your behaviors are affecting how you feel.

Remember, for instance, the goal of 24-hour news networks is to increase ratings. It’s in their interest to keep you riveted to your screens by making it seem like important announcements are imminent. As a result, it may feel difficult to disconnect and take part in your usual self-care behavior.

Try telling yourself, “If something happens, someone will text me,” and go for a walk or, better yet, to bed. Keeping up with healthy habits can help reduce your vulnerability to uncontrolled anxiety.

Post-Election Day, you may continue to feel drawn to the news and motivated to show up—whether that means donating, volunteering, or protesting—for a variety of causes you think will be affected by the election results. Many people describe feeling guilty if they say no or disengage, leading them to overcommit and wind up overwhelmed.

If this sounds like you, try reminding yourself that taking a break from politics to cook, engage with your family or friends, get some work done, or go to the gym does not mean you don’t care. In fact, keeping up with the activities that fuel you will give you the energy to contribute to important causes more meaningfully.

Shannon Sauer-Zavala, Associate Professor of Psychology & Licensed Clinical Psychologist, University of Kentucky. This article is republished from The Conversation under a Creative Commons license. Read the original article.



Creator of fake Kamala Harris video Musk boosted sues Calif. over deepfake laws


After California passed laws cracking down on AI-generated deepfakes of election-related content, a popular conservative influencer promptly sued, accusing California of censoring protected speech, including satire and parody.

In his complaint, Christopher Kohls—who is known as “Mr Reagan” on YouTube and X (formerly Twitter)—said that he was suing “to defend all Americans’ right to satirize politicians.” He claimed that California laws, AB 2655 and AB 2839, were urgently passed after X owner Elon Musk shared a partly AI-generated parody video on the social media platform that Kohls created to “lampoon” presidential hopeful Kamala Harris.

AB 2655, known as the “Defending Democracy from Deepfake Deception Act,” prohibits creating “with actual malice” any “materially deceptive audio or visual media of a candidate for elective office with the intent to injure the candidate’s reputation or to deceive a voter into voting for or against the candidate, within 60 days of the election.” It requires social media platforms to block or remove any reported deceptive material and label “certain additional content” deemed “inauthentic, fake, or false” to prevent election interference.

The other law at issue, AB 2839, titled “Elections: deceptive media in advertisements,” bans anyone from “knowingly distributing an advertisement or other election communication” with “malice” that “contains certain materially deceptive content” within 120 days of an election in California and, in some cases, within 60 days after an election.

Both bills were signed into law on September 17, and Kohls filed his complaint that day, alleging that both must be permanently blocked as unconstitutional.

Elon Musk called out for boosting Kohls’ video

Kohls’ video that Musk shared seemingly would violate these laws by using AI to make Harris appear to give speeches that she never gave. The manipulated audio sounds like Harris, who appears to be mocking herself as a “diversity hire” and claiming that any critics must be “sexist and racist.”

“Making fun of presidential candidates and other public figures is an American pastime,” Kohls said, defending his parody video. He pointed to a long history of political cartoons and comedic impressions of politicians, claiming that “AI-generated commentary, though a new mode of speech, falls squarely within this tradition.”

While Kohls’ post was clearly marked “parody” in the YouTube title and in his post on X, that label did not carry over when Musk re-posted the video. The lack of a parody label on Musk’s post—which got approximately 136 million views, roughly twice as many as Kohls’ post—set off California Governor Gavin Newsom, who immediately blasted Musk’s post and vowed on X to make content like Kohls’ video “illegal.”

In response to Newsom, Musk poked fun at the governor, posting that “I checked with renowned world authority, Professor Suggon Deeznutz, and he said parody is legal in America.” For his part, Kohls put up a second parody video targeting Harris, calling Newsom a “bully” in his complaint and claiming that he had to “punch back.”

Shortly after these online exchanges, California lawmakers allegedly rushed to back the governor, Kohls’ complaint said. They allegedly amended the deepfake bills to ensure that Kohls’ video would be banned when the bills were signed into law, replacing a broad exception for satire in one law with a narrower safe harbor that Kohls claimed would chill humorists everywhere.

“For videos,” his complaint said, disclaimers required under AB 2839 must “appear for the duration of the video” and “must be in a font size ‘no smaller than the largest font size of other text appearing in the visual media.'” For a satirist like Kohls who uses large fonts to optimize videos for mobile, this “would require the disclaimer text to be so large that it could not fit on the screen,” his complaint said.

On top of seeming impractical, the disclaimers would “fundamentally” alter “the nature of his message” by stripping away the comedic effect, distracting viewers from what allegedly makes the videos funny—“the juxtaposition of over-the-top statements by the AI-generated ‘narrator,’ contrasted with the seemingly earnest style of the video as if it were a genuine campaign ad,” Kohls’ complaint alleged.

Imagine watching Saturday Night Live with prominent disclaimers taking up your TV screen, his complaint suggested.

It’s possible that Kohls’ concerns about AB 2839 are unwarranted. Newsom spokesperson Izzy Gardon told Politico that Kohls’ parody label on X was good enough to clear him of liability under the law.

“Requiring them to use the word ‘parody’ on the actual video avoids further misleading the public as the video is shared across the platform,” Gardon said. “It’s unclear why this conservative activist is suing California. This new disclosure law for election misinformation isn’t any more onerous than laws already passed in other states, including Alabama.”



Robert F. Kennedy Jr. sues Meta, citing chatbot’s reply as evidence of shadowban

[Image: Screenshot from the documentary Who Is Bobby Kennedy?]

In a lawsuit that seems determined to ignore that Section 230 exists, Robert F. Kennedy Jr. has sued Meta for allegedly shadowbanning his million-dollar documentary, Who Is Bobby Kennedy?, and preventing his supporters from advocating for his presidential campaign.

According to Kennedy, Meta is colluding with the Biden administration to sway the 2024 presidential election by suppressing Kennedy’s documentary and making it harder to support Kennedy’s candidacy. This allegedly has caused “substantial donation losses,” while also violating the free speech rights of Kennedy, his supporters, and his film’s production company, AV24.

Meta had initially restricted the documentary on Facebook and Instagram but later fixed the issue after discovering that the film was mistakenly flagged by the platforms’ automated spam filters.

But Kennedy’s complaint claimed that Meta is still “brazenly censoring speech” by “continuing to throttle, de-boost, demote, and shadowban the film.” In an exhibit, Kennedy’s lawyers attached screenshots representing “hundreds” of Facebook and Instagram users whom Meta allegedly sent threats, intimidated, and sanctioned after they shared the documentary.

Some of these users remain suspended on Meta platforms, the complaint alleged. Others whose temporary suspensions have been lifted claimed that their posts are still being throttled, though, and Kennedy’s lawyers earnestly insisted that an exchange with Meta’s chatbot proves it.

Two days after the documentary’s release, Kennedy’s team apparently asked the Meta AI assistant, “When users post the link whoisbobbykennedy.com, can their followers see the post in their feeds?”

“I can tell you that the link is currently restricted by Meta,” the chatbot answered.

Chatbots, of course, are notoriously inaccurate sources of information, and Meta AI’s terms of service note this. In a section labeled “accuracy,” Meta warns that chatbot responses “may not reflect accurate, complete, or current information” and should always be verified.

Perhaps more significantly, there is little reason to think that Meta’s chatbot would have access to information about internal content moderation decisions.

Techdirt’s Mike Masnick mocked Kennedy’s reliance on the chatbot in the case. He noted that Kennedy seemed to have no evidence of the alleged shadowbanning, while there’s plenty of evidence that Meta’s spam filters accidentally remove non-violative content all the time.

Meta’s chatbot is “just a probabilistic stochastic parrot, repeating a probable sounding answer to users’ questions,” Masnick wrote. “And these idiots think it’s meaningful evidence. This is beyond embarrassing.”

Neither Meta nor Kennedy’s lawyer, Jed Rubenfeld, responded to Ars’ request to comment.



Users shocked to find Instagram limits political content by default

“I had no idea” —

Instagram never directly told users it was limiting political content by default.


Instagram users have started complaining on X (formerly Twitter) after discovering that Meta has begun limiting recommended political content by default.

“Did [y’all] know Instagram was actively limiting the reach of political content like this?!” an X user named Olayemi Olurin wrote in an X post with more than 150,000 views as of this writing. “I had no idea ’til I saw this comment and I checked my settings and sho nuff political content was limited.”

“Instagram quietly introducing a ‘political’ content preference and turning on ‘limit’ by default is insane?” wrote another X user named Matt in a post with nearly 40,000 views.

Instagram apparently did not notify users directly on the platform when this change happened.

Instead, Instagram rolled out the change in February, announcing in a blog post that the platform doesn’t “want to proactively recommend political content from accounts you don’t follow.” That post confirmed that Meta “won’t proactively recommend content about politics on recommendation surfaces across Instagram and Threads,” so that those platforms can remain “a great experience for everyone.”

“This change does not impact posts from accounts people choose to follow; it impacts what the system recommends, and people can control if they want more,” Meta’s spokesperson Dani Lever told Ars. “We have been working for years to show people less political content based on what they told us they want, and what posts they told us are political.”

To change the setting, users can navigate to Instagram’s menu for “settings and activity” in their profiles, where they can update their “content preferences.” On this menu, “political content” is the last item under a list of “suggested content” controls that allow users to set preferences for what content is recommended in their feeds.

There are currently two options for controlling what political content users see. Choosing “don’t limit” means “you might see more political or social topics in your suggested content,” the app says. By default, all users are set to “limit,” which means “you might see less political or social topics.”

“This affects suggestions in Explore, Reels, Feed, Recommendations, and Suggested Users,” Instagram’s settings menu explains. “It does not affect content from accounts you follow. This setting also applies to Threads.”

For general Instagram and Threads users, this change primarily limits what political content gets recommended to them, but for influencers using professional accounts, the stakes can be higher. The Washington Post reported that news creators were angered by the update, insisting that it diminished the value of the platform for reaching users not actively seeking political content.

“The whole value-add for social media, for political people, is that you can reach normal people who might not otherwise hear a message that they need to hear, like, abortion is on the ballot in Florida, or voting is happening today,” Keith Edwards, a Democratic political strategist and content creator, told The Post.

Meta’s blog noted that “professional accounts on Instagram will be able to use Account Status to check their eligibility to be recommended based on whether they recently posted political content. From Account Status, they can edit or remove recent posts, request a review if they disagree with our decision, or stop posting this type of content for a period of time, in order to be eligible to be recommended again.”

Ahead of a major election year, Meta’s change could impact political outreach attempting to inform voters. The change also came amid speculation that Meta was “shadowbanning” users posting pro-Palestine content since the start of the Israel-Hamas war, The Markup reported.

“Our investigation found that Instagram heavily demoted nongraphic images of war, deleted captions and hid comments without notification, suppressed hashtags, and limited users’ ability to appeal moderation decisions,” The Markup reported.

Meta appears to be interested in shifting away from its reputation as a platform where users expect political content—and misinformation—to thrive. Last year, The Wall Street Journal reported that Meta wanted out of politics and planned to “scale back how much political content it showed users,” after criticism over how the platform handled content related to the January 6 Capitol riot.

The decision to limit recommended political content on Instagram and Threads, Meta’s blog said, extends Meta’s “existing approach to how we treat political content.”

“People have told us they want to see less political content, so we have spent the last few years refining our approach on Facebook to reduce the amount of political content—including from politicians’ accounts—you see in Feed, Reels, Watch, Groups You Should Join, and Pages You May Like,” Meta wrote in a February blog update.

“As part of this, we aim to avoid making recommendations that could be about politics or political issues, in line with our approach of not recommending certain types of content to those who don’t wish to see it,” Meta’s blog continued, while at the same time, “preserving your ability to find and interact with political content that’s meaningful to you if that’s what you’re interested in.”

While platforms typically update users directly on the platform when terms of service change, that wasn’t the case for this update, which simply added new controls for users. That’s why many users who prefer to be recommended political content—and apparently missed Meta’s announcement and subsequent media coverage—expressed shock to discover that Meta was limiting what they see.

On X, even Instagram users who don’t love seeing political content are currently rallying to raise awareness and share tips on how to update the setting.

“This is actually kinda wild that Instagram defaults everyone to this,” one user named Laura wrote. “Obviously political content is toxic but during an election season it’s a little weird to just hide it from everyone?”



As 2024 election looms, OpenAI says it is taking steps to prevent AI abuse

Don’t Rock the vote —

ChatGPT maker plans transparency for gen AI content and improved access to voting info.

[Image: A pixelated photo of Donald Trump.]

On Monday, ChatGPT maker OpenAI detailed its plans to prevent the misuse of its AI technologies during the upcoming elections in 2024, promising transparency around AI-generated content and enhanced access to reliable voting information. The AI developer says it is working on an approach that involves policy enforcement, collaboration with partners, and the development of new tools aimed at classifying AI-generated media.

“As we prepare for elections in 2024 across the world’s largest democracies, our approach is to continue our platform safety work by elevating accurate voting information, enforcing measured policies, and improving transparency,” writes OpenAI in its blog post. “Protecting the integrity of elections requires collaboration from every corner of the democratic process, and we want to make sure our technology is not used in a way that could undermine this process.”

Initiatives proposed by OpenAI include preventing abuse by means such as deepfakes or bots imitating candidates, refining usage policies, and launching a reporting system for the public to flag potential abuses. For example, OpenAI’s image generation tool, DALL-E 3, includes built-in filters that reject requests to create images of real people, including politicians. “For years, we’ve been iterating on tools to improve factual accuracy, reduce bias, and decline certain requests,” the company stated.
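OpenAI has not published how these filters work internally, but the general idea of screening a generation request before it reaches the model can be illustrated with a deliberately naive sketch. Everything below is invented for the example (the name list, the function, the substring matching); a production system would rely on trained classifiers rather than a hardcoded blocklist.

```python
# Deliberately naive illustration of screening image-generation requests.
# The blocklist and matching logic are invented for this example; real
# filters like DALL-E 3's rely on far more robust classification.
PUBLIC_FIGURES = {"kamala harris", "donald trump", "joe biden"}  # hypothetical list


def screen_image_prompt(prompt: str) -> tuple[bool, str]:
    """Return (allowed, reason) for an image-generation request."""
    lowered = prompt.lower()
    for name in PUBLIC_FIGURES:
        if name in lowered:
            return False, f"Declined: request appears to depict a real person ({name})."
    return True, "Accepted."


print(screen_image_prompt("A watercolor painting of a lighthouse"))
print(screen_image_prompt("Photo of Kamala Harris giving a speech"))
```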

OpenAI says it regularly updates its Usage Policies for ChatGPT and its API products to prevent misuse, especially in the context of elections. The organization has implemented restrictions on using its technologies for political campaigning and lobbying until it better understands the potential for personalized persuasion. Also, OpenAI prohibits creating chatbots that impersonate real individuals or institutions and disallows the development of applications that could deter people from “participation in democratic processes.” Users can report GPTs that may violate the rules.

OpenAI claims to be proactively engaged in detailed strategies to safeguard its technologies against misuse. According to the company, this includes red-teaming new systems to anticipate challenges, engaging with users and partners for feedback, and implementing robust safety mitigations. OpenAI asserts that these efforts are integral to its mission of continually refining AI tools for improved accuracy, reduced biases, and responsible handling of sensitive requests.

Regarding transparency, OpenAI says it is advancing its efforts in classifying image provenance. The company plans to embed digital credentials, using cryptographic techniques, into images produced by DALL-E 3 as part of its adoption of standards by the Coalition for Content Provenance and Authenticity. Additionally, OpenAI says it is testing a tool designed to identify DALL-E-generated images.
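The cryptographic core of such provenance schemes is a digital signature over the image data: the generator signs at creation time, and anyone holding the matching public key can later check that the file is unchanged and came from that generator. The minimal sketch below illustrates only that idea; the actual C2PA standard embeds structured, signed manifests in the file itself, and the helper functions here are assumptions for the example, not OpenAI's implementation.

```python
# Minimal sketch of signing and verifying an image digest, in the spirit
# of C2PA-style content credentials (not OpenAI's actual implementation).
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def sign_image(image_bytes: bytes, private_key: Ed25519PrivateKey) -> bytes:
    """Sign the SHA-256 digest of the image at creation time."""
    digest = hashlib.sha256(image_bytes).digest()
    return private_key.sign(digest)


def verify_image(image_bytes: bytes, signature: bytes,
                 public_key: Ed25519PublicKey) -> bool:
    """Check that the image still matches the signed digest."""
    digest = hashlib.sha256(image_bytes).digest()
    try:
        public_key.verify(signature, digest)
        return True
    except InvalidSignature:
        return False


# Usage: the generator signs when the image is produced; a platform
# or end user verifies later with the generator's public key.
key = Ed25519PrivateKey.generate()
image = b"...raw PNG bytes from the generator..."
credential = sign_image(image, key)

print(verify_image(image, credential, key.public_key()))                # True
print(verify_image(image + b"tampered", credential, key.public_key()))  # False
```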

In an effort to connect users with authoritative information, particularly concerning voting procedures, OpenAI says it has partnered with the National Association of Secretaries of State (NASS) in the United States. ChatGPT will direct users to CanIVote.org for verified US voting information.

“We want to make sure that our AI systems are built, deployed, and used safely,” writes OpenAI. “Like any new technology, these tools come with benefits and challenges. They are also unprecedented, and we will keep evolving our approach as we learn more about how our tools are used.”
