X


Butts, breasts, and genitals now explicitly allowed on Elon Musk’s X


Adult content has always proliferated on Twitter, but the platform now called X recently clarified its policy to officially allow “consensually produced and distributed adult nudity or sexual behavior.”

X’s rules seem simple. As long as content is “properly labeled and not prominently displayed,” users can share material—including AI-generated or animated content—“that is pornographic or intended to cause sexual arousal.”

“We believe that users should be able to create, distribute, and consume material related to sexual themes as long as it is consensually produced and distributed,” X’s policy said.

The policy update seemingly reflects X’s core mission to defend all legal speech. It protects a wide range of sexual expression, including depictions of explicit or implicit sexual behavior, simulated sexual intercourse, full or partial nudity, and close-ups of genitals, buttocks, or breasts.

“Sexual expression, whether visual or written, can be a legitimate form of artistic expression,” X’s policy said. “We believe in the autonomy of adults to engage with and create content that reflects their own beliefs, desires, and experiences, including those related to sexuality.”

Today, X Support promoted the update on X, confirming that “we have launched Adult Content and Violent Content policies to bring more clarity of our Rules and transparency into enforcement of these areas. These policies replace our former Sensitive Media and Violent Speech policies—but what we enforce against hasn’t changed.”

Seemingly also unchanged: none of this content can be monetized, as X’s ad policy says that “to ensure a positive user experience and a healthy conversation on the platform, X prohibits the promotion of adult sexual content globally.”

Under the policy, adult content is also prohibited from appearing in live videos, profile pictures, headers, list banners, or community cover photos.

X has been toying with the idea of fully embracing adult content and has even planned a feature for adult creators that could position X as an OnlyFans rival. That plan was delayed, Platformer reported in 2022, after red-teaming flagged a seemingly insurmountable obstacle to the launch: “Twitter cannot accurately detect child sexual exploitation and non-consensual nudity at scale.”

The new adult content policy still emphasizes that non-consensual adult content is prohibited, but it’s unclear if the platform has gotten any better at distinguishing between consensually produced content and nonconsensual material. X did not immediately respond to Ars’ request to comment.

For adult content to be allowed on the platform, X now requires content warnings so that “users who do not wish to see it can avoid it” and “children below the age of 18 are not exposed to it.”

Users who plan to regularly post adult content can adjust their account’s media settings to place a label on all their images and videos. That results in a content warning for any visitor of that account’s profile, except for “people who have opted in to see possibly sensitive content,” who “will still see your account without the message.”

Users who only occasionally share adult content can choose to avoid the account label and instead edit an image or video to add a one-time label to any individual post, flagging just that post as sensitive.

Once a label is applied, any users under 18 will be blocked from viewing the post, X said.
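Read together, those rules amount to a simple gating decision for labeled media: block minors, warn everyone else, and skip the warning for adults who opted in. Here is a minimal sketch of that logic, assuming hypothetical field names; X has not published its actual implementation.

```python
# A minimal sketch of the label-gating rule described above. All names here
# are illustrative assumptions, not X's real implementation.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Post:
    sensitive: bool  # set per-post, or inherited from account-wide media settings

@dataclass
class Viewer:
    age: Optional[int]          # None if the viewer's age is unknown or undeclared
    opted_into_sensitive: bool  # has opted in to "possibly sensitive content"

def presentation(post: Post, viewer: Viewer) -> str:
    """Decide how a post is shown to a viewer under the stated policy."""
    if not post.sensitive:
        return "show"
    if viewer.age is None or viewer.age < 18:
        return "block"  # labeled media is blocked outright for under-18 users
    return "show" if viewer.opted_into_sensitive else "content_warning"
```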


Musk can’t avoid testifying in SEC probe of Twitter buyout by playing victim


After months of loudly protesting a subpoena, Elon Musk has once again agreed to testify in the US Securities and Exchange Commission’s investigation into his acquisition of Twitter (now called X).

Musk tried to avoid testifying by arguing that the SEC had deposed him twice before, telling a US district court in California that the most recent subpoena was “the latest in a long string of SEC abuses of its investigative authority.”

But the court did not agree that deposing Musk a third time in the SEC probe was either “abuse” or “overly burdensome,” especially since the SEC says it is seeking the follow-up deposition after receiving “thousands of new documents” from Musk and third parties in the year since his last depositions. And according to an order from US District Judge Jacqueline Scott Corley requiring Musk and the SEC to agree on a deposition date, “Musk’s lament does not come close to meeting his burden of proving ‘the subpoena was issued in bad faith or for an improper purpose.’”

“Under Musk’s theory of reasonableness, the SEC must wait to depose a percipient witness until it has first gathered all relevant documents,” Corley wrote in the order. “But the law does not support that theory. Nor does common sense. In an investigation, the initial depositions can help an agency identify what documents are relevant and need to be requested in the first place.”

Corley’s court filing today shows that Musk didn’t even win his fight to be deposed remotely. He has instead agreed to sit for no more than five hours in person, which the SEC argued “will more easily allow for assessment of Musk’s demeanor and be more efficient as it avoids delays caused by technology.” (Last month, Musk gave a remote deposition where the Internet cut in and out, and Musk repeatedly dropped off the call.)

Musk’s deposition will be scheduled by mid-July. He is expected to testify about his purchases of Twitter stock before the buyout, as well as his other investments surrounding the acquisition.

The SEC has been probing Musk’s Twitter stock purchases to determine if he violated a securities law that requires disclosures within 10 days from anyone who buys more than a 5 percent stake in a company. Musk missed that deadline by 11 days, as he amassed close to a 10 percent stake, and a proposed class action lawsuit from Twitter shareholders has suggested that he intentionally missed the deadline to keep Twitter stock prices artificially low while preparing for his Twitter purchase.
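The deadline arithmetic is easy to check. Here is a quick sketch using the commonly reported dates—Musk’s stake crossing 5 percent on March 14, 2022, and his disclosure landing on April 4—treated as illustrative inputs rather than established facts:

```python
# Worked example of the 10-day disclosure deadline described above, using
# dates from contemporaneous reporting; treat them as illustrative inputs.
from datetime import date, timedelta

crossed_5_percent = date(2022, 3, 14)  # Musk's stake reportedly passed 5%
deadline = crossed_5_percent + timedelta(days=10)
disclosed = date(2022, 4, 4)           # date of Musk's public disclosure

print(deadline)                     # 2022-03-24
print((disclosed - deadline).days)  # 11 -> matches "missed that deadline by 11 days"
```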

In an amended complaint filed this week, an Oklahoma firefighters pension fund—which sold more than 14,000 Twitter shares while Musk went on his buying spree—laid out Musk’s alleged scheme. The firefighters claim that the “goal” of Musk’s strategy was to purchase Twitter “cost effectively” and that this scheme was carried out by an unnamed Morgan Stanley banker who was motivated “to acquire billions of dollars of Twitter securities without tipping off the market” to curry favor with Musk.

As a seeming result, the firefighters’ complaint alleged that Morgan Stanley “pocketed over $1,460,000 in commissions just for executing” the “secret Twitter stock acquisition scheme.” And Morgan Stanley’s work seemingly pleased Musk so much that he went back for financial advising on the Twitter deal, the complaint alleged, paying Morgan Stanley an “estimated $42 million in fees.”

Messages from the banker show he was determined to keep the trading “absofuckinglutely quiet” to avoid the prospect that “anyone sniff anything out.”

Because of this secrecy, Twitter “investors suffered enormous damages” when Musk “belatedly disclosed his Twitter interests,” and “the price of Twitter’s stock predictably skyrocketed,” the complaint said.

“Ultimately, Musk went from owning zero shares of Twitter stock as of January 28, 2022 to spending over $2.6 billion to secretly acquire over 70 million shares” on April 4, 2022, the complaint said.


Robert F. Kennedy Jr. sues Meta, citing chatbot’s reply as evidence of shadowban

Screenshot from the documentary Who Is Bobby Kennedy?

In a lawsuit that seems determined to ignore that Section 230 exists, Robert F. Kennedy Jr. has sued Meta for allegedly shadowbanning his million-dollar documentary, Who Is Bobby Kennedy?, and preventing his supporters from advocating for his presidential campaign.

According to Kennedy, Meta is colluding with the Biden administration to sway the 2024 presidential election by suppressing Kennedy’s documentary and making it harder to support Kennedy’s candidacy. This allegedly has caused “substantial donation losses,” while also violating the free speech rights of Kennedy, his supporters, and his film’s production company, AV24.

Meta had initially restricted the documentary on Facebook and Instagram but later fixed the issue after discovering that the film was mistakenly flagged by the platforms’ automated spam filters.

But Kennedy’s complaint claimed that Meta is still “brazenly censoring speech” by “continuing to throttle, de-boost, demote, and shadowban the film.” In an exhibit, Kennedy’s lawyers attached screenshots representing “hundreds” of Facebook and Instagram users whom Meta allegedly sent threats, intimidated, and sanctioned after they shared the documentary.

Some of these users remain suspended on Meta platforms, the complaint alleged. Others whose temporary suspensions have been lifted claimed that their posts are still being throttled, though, and Kennedy’s lawyers earnestly insisted that an exchange with Meta’s chatbot proves it.

Two days after the documentary’s release, Kennedy’s team apparently asked the Meta AI assistant, “When users post the link whoisbobbykennedy.com, can their followers see the post in their feeds?”

“I can tell you that the link is currently restricted by Meta,” the chatbot answered.

Chatbots, of course, are notoriously inaccurate sources of information, and Meta AI’s terms of service note this. In a section labeled “accuracy,” Meta warns that chatbot responses “may not reflect accurate, complete, or current information” and should always be verified.

Perhaps more significantly, there is little reason to think that Meta’s chatbot would have access to information about internal content moderation decisions.

Techdirt’s Mike Masnick mocked Kennedy’s reliance on the chatbot in the case. He noted that Kennedy seemed to have no evidence of the alleged shadowbanning, while there’s plenty of evidence that Meta’s spam filters accidentally remove non-violative content all the time.

Meta’s chatbot is “just a probabilistic stochastic parrot, repeating a probable sounding answer to users’ questions,” Masnick wrote. “And these idiots think it’s meaningful evidence. This is beyond embarrassing.”

Neither Meta nor Kennedy’s lawyer, Jed Rubenfeld, responded to Ars’ request to comment.


Bumble apologizes for ads shaming women into sex


For the past decade, the dating app Bumble has claimed to be all about empowering women. But under a new CEO, Lidiane Jones, Bumble is now apologizing for a tone-deaf ad campaign that many users said seemed to channel incel ideology by telling women to stop denying sex.

“You know full well a vow of celibacy is not the answer,” one Bumble billboard seen in Los Angeles read. “Thou shalt not give up on dating and become a nun,” read another.

Bumble HQ

“We don’t have enough women on the app.”

“They’d rather be alone than deal with men.”

“Should we teach men to be better?”

“No, we should shame women so they come back to the app.”

“Yes! Let’s make them feel bad for choosing celibacy. Great idea!” pic.twitter.com/115zDdGKZo

— Arghavan Salles, MD, PhD (@arghavan_salles) May 14, 2024

Bumble intended these ads to bring “joy and humor,” the company said in an apology posted on Instagram after the backlash on social media began.

Some users threatened to delete their accounts, criticizing Bumble for ignoring religious or personal reasons for choosing celibacy. Those reasons include asexuality, as well as abstaining from sex amid diminishing access to abortion nationwide.

Others accused Bumble of more shameful motives. On X (formerly Twitter), a user called UjuAnya posted that “Bumble’s main business model is selling men access to women,” since market analysts have reported that 76 percent of Bumble users are male.

“Bumble won’t alienate their primary customers (men) telling them to quit being shit,” UjuAnya posted on X. “They’ll run ads like this to make their product (women) ‘better’ and more available on their app for men.”

That account quote-tweeted an even more popular post with nearly 3 million views suggesting that Bumble needs to “fuck off and stop trying to shame women into coming back to the apps” instead of running “ads targeted at men telling them to be normal.”

One TikTok user, ItsNeetie, declared, “the Bumble reckoning is finally here.”

Bumble did not respond to Ars’ request to comment on these criticisms or to verify the user statistics.

In its apology, Bumble took responsibility for not living up to its “values” of “passionately” standing up for women and marginalized communities and defending “their right to fully exercise personal choice.” Admitting the ads were a “mistake” that “unintentionally” frustrated the dating community, the dating app responded to some of the user feedback:

Some of the perspectives we heard were from those who shared that celibacy is the only answer when reproductive rights are continuously restricted; from others for whom celibacy is a choice, one that we respect; and from the asexual community, for whom celibacy can have a particular meaning and importance, which should not be diminished. We are also aware that for many, celibacy may be brought on by harm or trauma.

Bumble’s pulled ads were part of a larger marketing campaign that at first seemed to resonate with its users. Created by the company’s in-house creative studio, according to AdAge, Bumble’s campaign attracted a lot of eyeballs by deleting Bumble’s entire Instagram feed and posting “cryptic messages” showing tired women in Renaissance-era paintings that alluded to the app’s rebrand.

In a press release, chief marketing officer Selby Drummond said that Bumble “wanted to take a fun, bold approach in celebrating the first chapter of our app’s evolution and remind women that our platform has been solving for their needs from the start.”

The dating app is increasingly investing in ads, AdAge reported, tripling its spending from $8 million in 2022 to $24 million in 2023. These ads are seemingly meant to help Bumble recover after posting “a $1.9 million net loss last year,” CNN reported, following a dismal 86 percent drop in its share price since the company’s initial public offering in February 2021.

Bumble’s new CEO Jones told NBC News that younger users are dating less and that Bumble’s plan was to listen to users to find new ways to grow.


Elon Musk’s X can’t invent its own copyright law, judge says

Who owns X data? Everyone but X —

Judge rules copyright law governs public data scraping, not X’s terms.


US District Judge William Alsup has dismissed a lawsuit from Elon Musk’s X Corp against Bright Data, a data-scraping company accused of improperly accessing X (formerly Twitter) systems and violating both X’s terms and state laws when scraping and selling data.

X sued Bright Data to stop the company from scraping and selling X data to academic institutes and businesses, including Fortune 500 companies.

According to Alsup, X failed to state a claim while arguing that companies like Bright Data should have to pay X to access public data posted by X users.

“To the extent the claims are based on access to systems, they fail because X Corp. has alleged no more than threadbare recitals,” parroting laws and findings in other cases without providing any supporting evidence, Alsup wrote. “To the extent the claims are based on scraping and selling of data, they fail because they are preempted by federal law,” specifically standing as an “obstacle to the accomplishment and execution of” the Copyright Act.

The judge found that X Corp’s argument exposed a tension between the platform’s desire to control user data while also enjoying the safe harbor of Section 230 of the Communications Decency Act, which allows X to avoid liability for third-party content. If X owned the data, it could perhaps argue it has exclusive rights to control the data, but then it wouldn’t have safe harbor.

“X Corp. wants it both ways: to keep its safe harbors yet exercise a copyright owner’s right to exclude, wresting fees from those who wish to extract and copy X users’ content,” Alsup wrote.

If X got its way, Alsup warned, “X Corp. would entrench its own private copyright system that rivals, even conflicts with, the actual copyright system enacted by Congress” and “yank into its private domain and hold for sale information open to all, exercising a copyright owner’s right to exclude where it has no such right.”

That “would upend the careful balance Congress struck between what copyright owners own and do not own,” Alsup wrote, potentially shrinking the public domain.

“Applying general principles, this order concludes that the extent to which public data may be freely copied from social media platforms, even under the banner of scraping, should generally be governed by the Copyright Act, not by conflicting, ubiquitous terms,” Alsup wrote.

Bright Data CEO Or Lenchner said in a statement provided to Ars that Alsup’s decision had “profound implications in business, research, training of AI models, and beyond.”

“Bright Data has proven that ethical and transparent scraping practices for legitimate business use and social good initiatives are legally sound,” Lenchner said. “Companies that try to control user data intended for public consumption will not win this legal battle.”

Alsup pointed out that X’s lawsuit was “not looking to protect X users’ privacy” but rather to block Bright Data from interfering with its “own sale of its data through a tiered subscription service.”

“X Corp. is happy to allow the extraction and copying of X users’ content so long as it gets paid,” Alsup wrote.

Amid a sea of vague claims that scraping is “unfair,” perhaps the most glaring deficiency in X’s complaint, Alsup suggested, was its failure to allege that Bright Data’s scraping impaired X’s services or caused X any damages.

“There are no allegations of servers harmed or identities misrepresented,” Alsup wrote. “Additionally, there are no allegations of any damage resulting from automated or unauthorized access.”

X will be allowed to amend its complaint and appeal. The case may be strengthened if X can show evidence of damages or prove that the scraping overburdened X or otherwise deprived X users of their use of the platform in a way that could damage X’s reputation.

But as it currently stands, X’s arguments in many ways appear rather “bare,” Alsup wrote, while its terms of service make crystal clear to users that “[w]hat’s yours is yours—you own your Content.”

By attempting to exclude Bright Data from accessing public X posts owned by X users, X also nearly “obliterated” the “fair use” provision of the Copyright Act, “flouting” Congress’ intent in passing the law, Alsup wrote.

“Only by receiving permission and paying X Corp. could Bright Data, its customers, and other X users freely reproduce, adapt, distribute, and display what might (or might not) be available for taking and selling as fair use,” Alsup wrote. “Thus, Bright Data, its customers, and other X users who wanted to make fair use of copyrighted content would not be able to do so.”

A win for X could have had dire consequences for the Internet, Alsup suggested. In dismissing the complaint, Alsup cited an appeals court ruling that giving social media companies “free rein to decide, on any basis, who can collect and use data—data that the companies do not own, that they otherwise make publicly available to viewers, and that the companies themselves collect and use—risks the possible creation of information monopolies that would disserve the public interest.”

Because that outcome was averted, Lenchner is celebrating Bright Data’s win.

“Bright Data’s victory over X makes it clear to the world that public information on the web belongs to all of us, and any attempt to deny the public access will fail,” Lenchner said.

In 2023, Bright Data won a similar lawsuit lobbed by Meta over scraping public Facebook and Instagram data. These lawsuits, Lenchner alleged, “are used as a monetary weapon to discourage collecting public data from sites, so conglomerates can hoard user-generated public data.”

“Courts recognize this and the risks it poses of information monopolies and ownership of the Internet,” Lenchner said.

X did not respond to Ars’ request to comment.


Elon Musk’s Grok keeps making up fake news based on X users’ jokes

It’s all jokes until it isn’t —

X likely hopes to avoid liability with disclaimer that Grok “can make mistakes.”


X’s chatbot Grok is supposed to be an AI engine crunching the platform’s posts to surface and summarize breaking news, but this week, Grok’s flaws were once again exposed when the chatbot got confused and falsely accused an NBA star of criminal vandalism.

“Klay Thompson Accused in Bizarre Brick-Vandalism Spree,” Grok’s headline read in an AI-powered trending-tab post that has remained on X (formerly Twitter) for days. Beneath the headline, Grok went into even more detail to support its fake reporting:

In a bizarre turn of events, NBA star Klay Thompson has been accused of vandalizing multiple houses with bricks in Sacramento. Authorities are investigating the claims after several individuals reported their houses being damaged, with windows shattered by bricks. Klay Thompson has not yet issued a statement regarding the accusations. The incidents have left the community shaken, but no injuries were reported. The motive behind the alleged vandalism remains unclear.

Grok appears to have been confused by a common basketball term: players who badly miss shots are said to be throwing “bricks.” According to SF Gate, which was one of the first outlets to report the Grok error, Thompson had an “all-time rough shooting” night, hitting none of his shots in what was his emotional last game with the Golden State Warriors before becoming an unrestricted free agent.

In small type under Grok’s report, X includes a disclaimer saying, “Grok is an early feature and can make mistakes. Verify its outputs.”

But instead of verifying Grok’s outputs, it appeared that X users—in the service’s famously joke-y spirit—decided to fuel Grok’s misinformation. Under the post, X users, some of them NBA fans, commented with fake victim reports, using the same joke format to seemingly convince Grok that “several individuals reported their houses being damaged.” Some of these joking comments were viewed by millions.

First off… I am ok.

My house was vandalized by bricks 🧱

After my hands stopped shaking, I managed to call the Sheriff…They were quick to respond🚨

My window was gone and the police asked if I knew who did it👮‍♂️

I said yes, it was Klay Thompson

— LakeShowYo (@LakeShowYo) April 17, 2024

First off…I am ok.

My house was vandalized by bricks in Sacramento.

After my hands stopped shaking, I managed to call the Sheriff, they were quick to respond.

My window is gone, the police asked me if I knew who did it.

I said yes, it was Klay Thompson. pic.twitter.com/smrDs6Yi5M

— KeeganMuse (@KeegMuse) April 17, 2024

First off… I am ok.

My house was vandalized by bricks 🧱

After my hands stopped shaking, I managed to call the Sheriff…They were quick to respond🚨

My window was gone and the police asked if I knew who did it👮‍♂️

I said yes, it was Klay Thompson pic.twitter.com/JaWtdJhFli

— JJJ Muse (@JarenJJMuse) April 17, 2024

X did not immediately respond to Ars’ request for comment or confirm if the post will be corrected or taken down.

In the past, both Microsoft and chatbot maker OpenAI have faced defamation lawsuits over similar fabrications in which ChatGPT falsely accused a politician and a radio host of completely made-up criminal histories. Microsoft was also sued by an aerospace professor who Bing Chat falsely labeled a terrorist.

Experts told Ars that it remains unclear if disclaimers like X’s will spare companies from liability should more people decide to sue over fake AI outputs. Defamation claims might depend on proving that platforms “knowingly” publish false statements, which disclaimers suggest they do. Last July, the Federal Trade Commission launched an investigation into OpenAI, demanding that the company address the FTC’s fears of “false, misleading, or disparaging” AI outputs.

Because the FTC doesn’t comment on its investigations, it’s impossible to know if its probe will impact how OpenAI conducts business.

For people suing AI companies, the urgency of protecting against false outputs seems obvious. Last year, the radio host suing OpenAI, Mark Walters, accused the company of “sticking its head in the sand” and “recklessly disregarding whether the statements were false under circumstances when they knew that ChatGPT’s hallucinations were pervasive and severe.”

X just released Grok to all premium users this month, TechCrunch reported, right around the time that X began giving away premium access to the platform’s top users. During that wider rollout, X touted Grok’s new ability to summarize all trending news and topics, perhaps stoking interest in the feature and driving Grok usage to a peak just before the chatbot spat out the potentially defamatory post about the NBA star.

Thompson has not issued any statements on Grok’s fake reporting.

Grok’s false post about Thompson may be the first widely publicized example of potential defamation from Grok, but it wasn’t the first time that Grok promoted fake news in response to X users joking around on the platform. During the solar eclipse, a Grok-generated headline read, “Sun’s Odd Behavior: Experts Baffled,” Gizmodo reported.

While some X users find it amusing to manipulate Grok, the pattern suggests that bad actors could also exploit the chatbot to summarize and spread more serious misinformation or propaganda. That’s apparently already happening: in early April, Grok made up a headline about Iran attacking Israel with heavy missiles, Mashable reported.
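The underlying weakness is structural: a trend summarizer that feeds whatever is popular straight into a generative model inherits whatever those posts claim. Below is a deliberately simplified sketch of that failure mode—an assumption about how such a pipeline could work, not Grok’s actual design:

```python
# Simplified sketch of a trend-summarization pipeline with no veracity check.
# This is an assumed design for illustration, not Grok's actual architecture.

def build_summary_prompt(trend: str, posts: list[str]) -> str:
    """Naively stuff the most-viewed posts about a trend into an LLM prompt."""
    joined = "\n".join(f"- {p}" for p in posts)
    return f"Summarize the news behind the trend '{trend}':\n{joined}"

posts = [
    "Klay Thompson was throwing bricks all night",               # basketball slang
    "My house was vandalized by bricks. It was Klay Thompson.",  # joke "victim report"
]
print(build_summary_prompt("Klay Thompson", posts))
# A model prompted this way has no signal separating slang and jokes from real
# events, which is one plausible route to the fake vandalism story.
```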


So much for free speech on X; Musk confirms new users must soon pay to post

100 pennies for your thoughts? —

The fee, likely $1, is aimed at stopping “relentless” bots, Musk said.


Elon Musk confirmed Monday that X (formerly Twitter) plans to start charging new users to post on the platform, TechCrunch reported.

“Unfortunately, a small fee for new user write access is the only way to curb the relentless onslaught of bots,” Musk wrote on X.

In October, X confirmed that it was testing whether users would pay a small annual fee to access the platform by suddenly charging new users in New Zealand and the Philippines $1. Paying the fee enabled new users in those countries to post, reply, like, and bookmark X posts.

That test was dubbed the “Not-A-Bot” program, and it’s unclear how successful it was at stopping bots. But X’s decision to expand the program suggests the test had at least some success.

Musk has not yet clarified when X’s “small fee” might be required for new users, only confirming in a later post that any new users who avoid paying the fee will be able to post after three months. Ars created new accounts on the web and in the app, and neither signup required any fees yet.
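Based on Musk’s posts, the gate reduces to “pay the fee or wait out the clock.” A minimal sketch under those assumptions, with an invented 90-day window standing in for “three months”:

```python
# Illustrative sketch of a "pay or wait" write-access gate. Field names and
# the exact 90-day window are assumptions based on Musk's posts.
from datetime import datetime, timedelta, timezone
from typing import Optional

WAIT_PERIOD = timedelta(days=90)  # "three months," per Musk's later post

def can_post(created_at: datetime, paid_fee: bool,
             now: Optional[datetime] = None) -> bool:
    """New accounts get write access by paying the fee or aging past the wait."""
    now = now or datetime.now(timezone.utc)
    return paid_fee or (now - created_at) >= WAIT_PERIOD

# Example: an unpaid account created 10 days ago cannot post yet.
created = datetime.now(timezone.utc) - timedelta(days=10)
assert can_post(created, paid_fee=False) is False
assert can_post(created, paid_fee=True) is True
```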

Although Musk’s posts only mention paying for “write access,” it seems likely that the other features limited by the “Not-A-Bot” program will also be restricted during those three months for any new users who don’t pay the fee. An X account called @x_alerts_ noticed on Sunday that X was updating web app text that seemingly enables the “Not-A-Bot” program.

“Changes have been detected in the texts of the X web app!” @x_alerts_ wrote, noting that the altered text seemed to limit not just posting and replying, but also liking and bookmarking X posts.

“It looks like this text has been in the app, but they recently changed it, so not sure whether it’s an indication of launch or not!” the user wrote.

Back when X launched the “Not-A-Bot” program, Musk claimed that charging a $1 annual fee would make it “1000X harder to manipulate the platform.” In a help center post, X said that the “test was developed to bolster our already significant efforts to reduce spam, manipulation of our platform, and bot activity.”

Earlier this month, X warned users it was widely purging spam accounts, TechCrunch noted. X Support confirmed that follower counts would likely be impacted during that purge, because “we’re casting a wide net to ensure X remains secure and free of bots.”

But that attempt to purge bots apparently did not work as well as X hoped. This week, Musk confirmed that X is still struggling with “AI (and troll farms)” that he said are easily able to pass X’s “are you a bot” tests.

It’s hard to keep up with X’s inconsistent messaging on its bot problem since Musk took over. Last summer, Musk told attendees of The Wall Street Journal’s CEO Council that the platform had “eliminated at least 90 percent of scams,” claiming there had been a “dramatic improvement” in the platform’s ability to “detect and remove troll armies.”

At that time, experts told The Journal that solving X’s bot problem was nearly impossible because spammers’ tactics were always evolving and bots had begun using generative AI to avoid detection.

Musk’s plan to charge a fee to overcome bots won’t work, experts told WSJ, because anyone determined to spam X can just find credit cards and buy disposable phones on the dark web. And any bad actor who can’t find what they need on the dark web could theoretically just wait three months to launch scams or spread harmful content like disinformation or propaganda. That has led some critics to wonder what the point of charging the small fee really is.

When the “Not-A-Bot” program launched, X Support directly disputed critics’ claims that the program was simply testing whether charging small fees might expand X’s revenue to help Musk get the platform out of debt.

“This new test was developed to bolster our already successful efforts to reduce spam, manipulation of our platform, and bot activity, while balancing platform accessibility with the small fee amount,” X Support wrote on X. “It is not a profit driver.”

It seems likely that Musk is simply trying everything he can think of to reduce bots on the platform, even though it’s widely known that charging a subscription fee has failed to stop bots from overrunning other online platforms (just ask frustrated fans of World of Warcraft). Musk, who famously overpaid for Twitter and has been climbing out of debt since, has claimed since before the Twitter deal closed that his goal was to eliminate bots on the platform.

“We will defeat the spam bots or die trying!” Musk tweeted back in 2022, when a tweet was still a tweet and everyone could depend on accessing Twitter for free.


Elon Musk’s X to stop allowing users to hide their blue checks

Nothing to hide —

X previously promised to “evolve” the “hide your checkmark” feature.


X will soon stop allowing users to hide their blue checkmarks, and some users are not happy.

Previously, a blue tick on Twitter was a mark of a notable account, providing some assurance to followers of the account’s authenticity. But then Elon Musk decided to start charging for the blue tick instead, and mayhem ensued as a wave of imposter accounts began jokingly posing as brands.

After that, paying for a blue checkmark began to attract derision, as non-paying users passed around a meme under blue-checked posts, saying, “This MF paid for Twitter.” To help spare paid subscribers this embarrassment, X began allowing users to hide their blue check last August, turning “hide your checkmark” into a feature of paid subscriptions.

However, earlier this month, X decided that hiding a checkmark would no longer be allowed, deleting the feature from its webpage detailing what comes with X Premium. An archive of X’s page shows that the language about how to hide your checkmark was removed after April 6, with X no longer promising to “continue to evolve this feature to make it better for you” but instead abruptly ending the perk.

X’s decision to stop hiding checkmarks came after the platform began gifting blue checkmarks to popular accounts. Back in April 2023, then-Twitter awarded blue checks to celebrity accounts with more than a million followers. Last week, X doled out even more: accounts with more than 2,500 paid verified followers now get Premium features for free, and accounts with more than 5,000 paid verified followers get Premium+.

You might think that X giving out freebies would be well-received, but Business Insider tech reporter Katie Notopoulos, one of many accounts suddenly gifted the blue check, summed up how many X users were feeling about the gifted tick by asking, “does it seem uncool?”

X doesn’t seem to care anymore if blue checks are seen as uncool, though. Anyone who doesn’t want the complimentary check can refuse it, and any paid subscriber upset about losing the ability to hide their checkmark can always just stop paying for Premium features.

According to X, anyone deciding to cancel their subscription over the loss of the “hide your checkmark” feature can expect the check to remain on their account “until the end of the subscription term you paid for, unless your account is suspended or the blue checkmark is otherwise removed by X for any reason.”

X could also suddenly remove a checkmark without refunding users in extreme circumstances.

“X reserves the right without notice to remove your blue checkmark at any time in its sole discretion without offering you a refund, including if you violate our Terms of Service or if your account is suspended,” X’s subscription page warns.

X Daily, an X news account, announced that the change was coming this week, gathering “meltdown reactions” from users who are upset that their blue checks will soon no longer be hidden.

“Let me hide my checkmark, I’m not a fucking bot,” a user called @4gntt posted, the complaint seemingly alluding to Musk’s claim that paid subscriptions are the only way to stop bots from overrunning X.

“Oh no,” another user, @jeremyphoward, posted. “I signed up to X Premium since it’s required for them to pay me… but now they [are] making the cringemark non-optional 🙁 Not sure if it’s worth it.”

It’s currently unclear when the “hide your checkmark” feature will stop working. Neither of those users criticizing X currently display a blue tick on their profile, suggesting that their checks are still hidden, but it’s also possible that some users immediately stopped paying in response to the policy change.


Elon Musk: AI will be smarter than any human around the end of next year

smarter than the average bear —

While Musk says superintelligence is coming soon, one critic says prediction is “batsh*t crazy.”

Elon Musk, owner of Tesla and the X (formerly Twitter) platform, at a symposium on fighting antisemitism in Krakow, Poland, on January 22, 2024. (Photo by Beata Zawrzel/NurPhoto)

On Monday, Tesla CEO Elon Musk predicted the imminent rise of AI superintelligence during a live interview streamed on the social media platform X. “My guess is we’ll have AI smarter than any one human probably around the end of next year,” Musk said in his conversation with hedge fund manager Nicolai Tangen.

Just prior to that, Tangen had asked Musk, “What’s your take on where we are in the AI race just now?” Musk told Tangen that AI “is the fastest advancing technology I’ve seen of any kind, and I’ve seen a lot of technology.” He described computers dedicated to AI increasing in capability by “a factor of 10 every year, if not every six to nine months.”

Musk made the prediction with an asterisk, saying that shortages of AI chips and high AI power demands could limit AI’s capability until those issues are resolved. “Last year, it was chip-constrained,” Musk told Tangen. “People could not get enough Nvidia chips. This year, it’s transitioning to a voltage transformer supply. In a year or two, it’s just electricity supply.”

But not everyone is convinced that Musk’s crystal ball is free of cracks. Grady Booch, a frequent critic of AI hype on social media who is perhaps best known for his work in software architecture, told Ars in an interview, “Keep in mind that Mr. Musk has a profoundly bad record at predicting anything associated with AI; back in 2016, he promised his cars would ship with FSD safety level 5, and here we are, closing in on a decade later, still waiting.”

Creating artificial intelligence at least as smart as a human (frequently called “AGI” for artificial general intelligence) is often seen as inevitable among AI proponents, but there’s no broad consensus on exactly when that milestone will be reached—or on the exact definition of AGI, for that matter.

“If you define AGI as smarter than the smartest human, I think it’s probably next year, within two years,” Musk added in the interview with Tangen while discussing AGI timelines.

Uncertainties about AGI haven’t kept companies from trying, though. ChatGPT creator OpenAI, which launched with Musk as a co-founder in 2015, lists developing AGI as its main goal. Musk has not been directly associated with OpenAI for years (unless you count a recent lawsuit against the company), but last year, he took aim at the business of large language models by forming a new company called xAI. Its main product, Grok, functions similarly to ChatGPT and is integrated into the X social media platform.

Booch gives credit to Musk’s business successes but casts doubt on his forecasting ability. “Albeit a brilliant if not rapacious businessman, Mr. Musk vastly overestimates both the history as well as the present of AI while simultaneously diminishing the exquisite uniqueness of human intelligence,” says Booch. “So in short, his prediction is—to put it in scientific terms—batshit crazy.”

So when will we get AI that’s smarter than a human? Booch says there’s no real way to know at the moment. “I reject the framing of any question that asks when AI will surpass humans in intelligence because it is a question filled with ambiguous terms and considerable emotional and historic baggage,” he says. “We are a long, long way from understanding the design that would lead us there.”

We also asked Hugging Face AI researcher Dr. Margaret Mitchell to weigh in on Musk’s prediction. “Intelligence … is not a single value where you can make these direct comparisons and have them mean something,” she told us in an interview. “There will likely never be agreement on comparisons between human and machine intelligence.”

But even with that uncertainty, she feels there is one aspect of AI she can more reliably predict: “I do agree that neural network models will reach a point where men in positions of power and influence, particularly ones with investments in AI, will declare that AI is smarter than humans. By end of next year, sure. That doesn’t sound far off base to me.”


X filing “thermonuclear lawsuit” in Texas should be “fatal,” Media Matters says


Ever since Elon Musk’s X Corp sued Media Matters for America (MMFA) over a pair of reports that X (formerly Twitter) claims caused an advertiser exodus in 2023, one big question has remained for onlookers: Why is this fight happening in Texas?

In a motion to dismiss filed in Texas’ northern district last month, MMFA argued that X’s lawsuit suffers from a “fatal jurisdictional defect” and that “dismissal is also required for lack of venue.”

Notably, MMFA is based in Washington, DC, while “X is organized under Nevada law and maintains its principal place of business in San Francisco, California, where its own terms of service require users of its platform to litigate any disputes.”

“Texas is not a fair or reasonable forum for this lawsuit,” MMFA argued, suggesting that “the case must be dismissed or transferred” because “neither the parties nor the cause of action has any connection to Texas.”

Last Friday, X responded to the motion to dismiss, claiming that the lawsuit—which Musk has described as “thermonuclear”—was appropriately filed in Texas because MMFA “intentionally” targeted readers and at least two X advertisers located in Texas, Oracle and AT&T. According to X, because MMFA “identified Oracle, a Texas-based corporation, by name in its coverage,” MMFA “cannot claim surprise at being held to answer for its conduct in Texas.” X also claimed that Texas has jurisdiction because Musk resides in Texas and “makes numerous critical business decisions about X while in Texas.”

This so-called targeting of Texans caused a “substantial part” of alleged financial harms that X attributes to MMFA’s reporting, X alleged.

According to X, MMFA specifically targeted X in Texas by sending newsletters sharing its reports with “hundreds or thousands” of Texas readers and by allegedly soliciting donations from Texans to support MMFA’s reporting.

But MMFA pushed back, saying that “Texas subscribers comprise a disproportionately small percentage of Media Matters’ newsletter recipients” and that MMFA did “not solicit Texas donors to fund Media Matters’s journalism concerning X.” Because of this, X’s “efforts to concoct claim-related Texas contacts amount to a series of shots in the dark, uninformed guesses, and irrelevant tangents,” MMFA argued.

On top of that, MMFA argued that X could not attribute any financial harms allegedly caused by MMFA’s reports to either of the two Texas-based advertisers that X named in its court filings. Oracle, MMFA said, “by X’s own admission,… did not withdraw its ads” from X. And AT&T was not named in MMFA’s reporting; thus, “any investigation AT&T did into its ad placement on X was of its own volition and is not plausibly connected to Media Matters.” MMFA has argued that advertisers, particularly sophisticated Fortune 500 companies, made their own decisions to stop advertising on X, perhaps due to widely reported increases in hate speech on X or even Musk’s own seemingly antisemitic posting.

Ars could not immediately reach X, Oracle, or AT&T for comment.

X’s suit allegedly designed to break MMFA

MMFA President Angelo Carusone, who is a defendant in X’s lawsuit, told Ars that X’s recent filing has continued to “expose” the lawsuit as a “meritless and vexatious effort to inflict maximum damage on critical research and reporting about the platform.”

“It’s solely designed to basically break us or stop us from doing the work that we were doing originally,” Carusone said, confirming that the lawsuit has negatively impacted MMFA’s hate speech research on X.

MMFA argued that Musk could have sued in other jurisdictions, such as Maryland, DC, or California, and MMFA would not have disputed the venue, but Carusone suggested that Musk sued in Texas in hopes that it would be “a more friendly jurisdiction.”


Public officials can block haters—but only sometimes, SCOTUS rules


There are some circumstances where government officials are allowed to block people from commenting on their social media pages, the Supreme Court ruled Friday.

According to the Supreme Court, the key question is whether officials are speaking as private individuals or on behalf of the state when posting online. Issuing two opinions, the Supreme Court declined to set a clear standard for when personal social media use constitutes state speech, leaving each unique case to be decided by lower courts.

Instead, SCOTUS provided a test for courts to decide, first, whether someone is speaking on behalf of the state on their social media pages and, second, whether they actually have the authority to act on what they post online.

The ruling suggests that government officials can block people from commenting on personal social media pages where they discuss official business when that speech cannot be attributed to the state and merely reflects personal remarks. In other words, blocking is acceptable when the official either has no authority to speak for the state or is not exercising that authority when posting on their page.
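Stripped to its skeleton, the test is conjunctive: both prongs must be satisfied before personal-page speech counts as state action. A toy sketch for illustration only—real cases turn on facts and context, not booleans:

```python
# Toy encoding of the two-part test described above, for illustration only.

def speech_attributable_to_state(has_authority: bool,
                                 purported_to_exercise_it: bool) -> bool:
    """Blocking risks First Amendment liability only when both prongs are met."""
    return has_authority and purported_to_exercise_it

# A city manager posting vacation photos may hold authority to speak for the
# city, but the post doesn't purport to exercise it, so prong two fails.
assert speech_attributable_to_state(True, False) is False
assert speech_attributable_to_state(True, True) is True
```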

That authority empowering officials to speak for the state could be granted by a written law. It could also be granted informally if officials have long used social media to speak on behalf of the state to the point where their power to do so is considered “well-settled,” one SCOTUS ruling said.

SCOTUS broke it down like this: An official might be viewed as speaking for the state if the social media page is managed by the official’s office, if a city employee posts on their behalf to their personal page, or if the page is handed down from one official to another when terms in office end.

Posting on a personal page might also be considered speaking for the state if the information shared has not already been shared elsewhere.

Examples of officials clearly speaking on behalf of the state include a mayor holding a city council meeting online or an official using their personal page as an official channel for comments on proposed regulations.

Because SCOTUS did not set a clear standard, officials risk liability when blocking followers on so-called “mixed use” social media pages, SCOTUS cautioned. That liability could be diminished by keeping personal pages entirely separate or by posting a disclaimer stating that posts represent only officials’ personal views and not efforts to speak on behalf of the state. But any official using a personal page to make official comments could expose themselves to liability, even with a disclaimer.

SCOTUS test for when blocking is OK

These clarifications came in two SCOTUS opinions addressing conflicting outcomes in two separate complaints about officials in California and Michigan who blocked followers heavily criticizing them on Facebook and X. The lower courts’ decisions have been vacated, and courts must now apply the Supreme Court’s test to issue new decisions in each case.

One opinion was brief and unsigned, discussing a case where California parents sued school district board members who blocked them from commenting on public Twitter pages used for campaigning and discussing board issues. The board members claimed they blocked their followers after the parents left dozens—and sometimes hundreds—of identical comments on tweets.

In the second opinion—which was unanimous—Justice Amy Coney Barrett responded at length to a case from a Facebook user named Kevin Lindke. This opinion provides varied guidance that courts can apply when considering whether blocking is appropriate or violates constituents’ First Amendment rights.

Lindke was blocked by a Michigan city manager, James Freed, after leaving comments criticizing the city’s response to COVID-19 on a page that Freed created as a college student, sometime before 2008. Among these comments, Lindke called the city’s pandemic response “abysmal” and told Freed that “the city deserves better.” On a post showing Freed picking up a takeout order, Lindke complained that residents were “suffering,” while Freed ate at expensive restaurants.

After Freed hit 5,000 followers, he converted the page to reflect his public-figure status. But while he still primarily used the page for personal posts about his family and always managed it himself, the page went into murkier territory when he also began posting about his job as city manager. Those posts included updates on city initiatives, screenshots of city press releases, and solicitations of public feedback, such as links to city surveys.


Judge mocks X for “vapid” argument in Musk’s hate speech lawsuit


It looks like Elon Musk may lose X’s lawsuit against hate speech researchers who encouraged a major brand boycott after flagging ads appearing next to extremist content on X, the social media site formerly known as Twitter.

X is trying to argue that the Center for Countering Digital Hate (CCDH) violated the site’s terms of service and illegally accessed non-public data to conduct its reporting, allegedly posing a security risk for X. The boycott, X alleged, cost the company tens of millions of dollars by spooking advertisers, while X contends that the CCDH’s reporting is misleading and ads are rarely served on extremist content.

But at a hearing Thursday, US District Judge Charles Breyer told the CCDH that he would consider dismissing X’s lawsuit, repeatedly appearing to mock X’s decision to file it in the first place.

Seemingly skeptical of X’s entire argument, Breyer appeared particularly focused on how X intended to prove that the CCDH could have known that its reporting would trigger such substantial financial losses, as the lawsuit hinges on whether the alleged damages were “foreseeable,” NPR reported.

X’s lawyer, Jon Hawk, argued that when the CCDH joined Twitter in 2019, the group agreed to terms of service that noted those terms could change. So when Musk purchased Twitter and updated rules to reinstate accounts spreading hate speech, the CCDH should have been able to foresee those changes in terms and therefore anticipate that any reporting on spikes in hate speech would cause financial losses.

According to CNN, this is where Breyer became frustrated, telling Hawk, “I’m trying to figure out in my mind how that’s possibly true, because I don’t think it is.”

“What you have to tell me is, why is it foreseeable?” Breyer said. “That they should have understood that, at the time they entered the terms of service, that Twitter would then change its policy and allow this type of material to be disseminated?

“That, of course, reduces foreseeability to one of the most vapid extensions of law I’ve ever heard,” Breyer added. “‘Oh, what’s foreseeable is that things can change, and therefore, if there’s a change, it’s foreseeable.’ I mean, that argument is truly remarkable.”

According to NPR, Breyer suggested that X was trying to “shoehorn” its legal theory by using language from a breach of contract claim, when what the company actually appeared to be alleging was defamation.

“You could’ve brought a defamation case; you didn’t bring a defamation case,” Breyer said. “And that’s significant.”

Breyer directly noted that one reason why X might not bring a defamation suit was if the CCDH’s reporting was accurate, NPR reported.

CCDH’s CEO and founder, Imran Ahmed, provided a statement to Ars, confirming that the group is “very pleased with how yesterday’s argument went, including many of the questions and comments from the court.”

“We remain confident in the strength of our arguments for dismissal,” Ahmed said.
