Policy


Tesla shareholder group opposes Musk’s $46B pay, slams board “dysfunction”

[Image: A photoshopped image of Elon Musk emerging from an enormous pile of money. Credit: Aurich Lawson / Duncan Hull / Getty]

A Tesla shareholder group yesterday urged other shareholders to vote against Elon Musk’s $46 billion pay package, saying the Tesla board is dysfunctional and “overly beholden to CEO Musk.” The group’s letter also urged shareholders to vote against the reelection of board members Kimbal Musk and James Murdoch.

“Tesla is suffering from a material governance failure which requires our urgent attention and action,” and its board “is stacked with directors that have close personal ties to CEO Elon Musk,” the letter said. “There are multiple indications that these ties, coupled with excessive director compensation, prevent the level of critical and independent thinking required for effective governance.”

Tesla shareholders approved Elon Musk’s pay package in 2018, but it was nullified by a court ruling in January 2024. After a lawsuit filed by a shareholder, Delaware Court of Chancery Judge Kathaleen McCormick ruled that the pay plan was unfair to Tesla shareholders and must be rescinded.

McCormick wrote that most of Tesla’s board members were beholden to Musk or had compromising conflicts and that Tesla’s board provided false and misleading information to shareholders before the 2018 vote. Musk and the rest of the Tesla board subsequently asked shareholders to approve a transfer of Tesla’s state of incorporation from Delaware to Texas and to reinstate Musk’s pay package. Votes can be submitted before Tesla’s annual meeting on June 13.

The pay package was previously estimated to be worth $56 billion, but the stock options in the plan were more recently valued at $46 billion.

“Tesla has clearly lagged”

From March 2020 to November 2021, Tesla’s share price rose from $28.51 to $409.71. But it “has since fallen to $172.63, a decline of $237.08 or 62 percent from its peak,” the letter opposing the pay package said.

“Over the past three years, and especially over the past year, Tesla has clearly lagged behind its competitors and the broader market. We believe that the distractions caused by Musk’s many projects, particularly his decision to buy Twitter, have played a material role in Tesla’s underperformance,” the letter said.

Tesla’s reputation has been harmed by Musk’s “public fights with regulators, acquisition of Twitter, controversial statements on X, and his legal and personal troubles,” the letter said. The letter was sent by New York City Comptroller Brad Lander and investors including Amalgamated Bank, AkademikerPension, Nordea Asset Management, SOC Investment Group, and United Church Funds.

Musk has taken advantage of lax oversight in order “to use Tesla as a coffer for himself and his other business endeavors,” the letter said. It continued:

In 2022, Musk admitted to using Tesla engineers to work on issues at Twitter (now known as X), and defended the decision by saying that no Tesla Board member had stopped him from using Tesla staff for his other businesses. More recently, Musk has begun poaching top engineers from Tesla’s AI and autonomy team for his new company, xAI, including Ethan Knight, who was computer vision chief at Tesla.

This is on the heels of Musk’s post on X that he is “uncomfortable growing Tesla to be a leader in AI & robotics without having ~25% voting control,” a move widely seen as a threat to push Tesla’s Board to grant him another mega pay package.

The Tesla board “continues to allow Musk to be overcommitted” as he devotes “significant amounts of time to his roles at X, SpaceX, Neuralink, the Boring Company and other companies,” the letter said.


“CSAM generated by AI is still CSAM,” DOJ says after rare arrest

The US Department of Justice has started cracking down on the use of AI image generators to produce child sexual abuse materials (CSAM).

On Monday, the DOJ arrested Steven Anderegg, a 42-year-old “extremely technologically savvy” Wisconsin man who allegedly used Stable Diffusion to create “thousands of realistic images of prepubescent minors,” which were then distributed on Instagram and Telegram.

The cops were tipped off to Anderegg’s alleged activities after Instagram flagged direct messages that were sent on Anderegg’s Instagram account to a 15-year-old boy. Instagram reported the messages to the National Center for Missing and Exploited Children (NCMEC), which subsequently alerted law enforcement.

During the Instagram exchange, the DOJ said, Anderegg sent sexually explicit AI images of minors soon after the teen made his age known; prosecutors alleged that “the only reasonable explanation for sending these images was to sexually entice the child.”

According to the DOJ’s indictment, Anderegg is a software engineer with “professional experience working with AI.” Because of his “special skill” in generative AI (GenAI), he was allegedly able to generate the CSAM using a version of Stable Diffusion, “along with a graphical user interface and special add-ons created by other Stable Diffusion users that specialized in producing genitalia.”

After Instagram reported Anderegg’s messages to the minor, cops seized Anderegg’s laptop and found “over 13,000 GenAI images, with hundreds—if not thousands—of these images depicting nude or semi-clothed prepubescent minors lasciviously displaying or touching their genitals” or “engaging in sexual intercourse with men.”

In his messages to the teen, Anderegg seemingly “boasted” about his skill in generating CSAM, the indictment said. The DOJ alleged that evidence from his laptop showed that Anderegg “used extremely specific and explicit prompts to create these images,” including “specific ‘negative’ prompts—that is, prompts that direct the GenAI model on what not to include in generated content—to avoid creating images that depict adults.” These go-to prompts were stored on his computer, the DOJ alleged.

Anderegg is currently in federal custody and has been charged with production, distribution, and possession of AI-generated CSAM, as well as “transferring obscene material to a minor under the age of 16,” the indictment said.

Because the DOJ suspected that Anderegg intended to use the AI-generated CSAM to groom a minor, the DOJ is arguing that there are “no conditions of release” that could prevent him from posing a “significant danger” to his community while the court mulls his case. The DOJ warned the court that it’s highly likely that any future contact with minors could go unnoticed, as Anderegg is seemingly tech-savvy enough to hide any future attempts to send minors AI-generated CSAM.

“He studied computer science and has decades of experience in software engineering,” the indictment said. “While computer monitoring may address the danger posed by less sophisticated offenders, the defendant’s background provides ample reason to conclude that he could sidestep such restrictions if he decided to. And if he did, any reoffending conduct would likely go undetected.”

If convicted of all four counts, he could face “a total statutory maximum penalty of 70 years in prison and a mandatory minimum of five years in prison,” the DOJ said. Partly because of Anderegg’s “special skill in GenAI,” the DOJ—which described its evidence against him as “strong”—suggested that it may recommend a sentencing range “as high as life imprisonment.”

Announcing Anderegg’s arrest, Deputy Attorney General Lisa Monaco made it clear that creating AI-generated CSAM is illegal in the US.

“Technology may change, but our commitment to protecting children will not,” Monaco said. “The Justice Department will aggressively pursue those who produce and distribute child sexual abuse material—or CSAM—no matter how that material was created. Put simply, CSAM generated by AI is still CSAM, and we will hold accountable those who exploit AI to create obscene, abusive, and increasingly photorealistic images of children.”



OpenAI pauses ChatGPT-4o voice that fans said ripped off Scarlett Johansson

“Her” —

“Sky’s voice is not an imitation of Scarlett Johansson,” OpenAI insists.

[Image: Scarlett Johansson and Joaquin Phoenix attend the Her premiere during the 8th Rome Film Festival at Auditorium Parco Della Musica on November 10, 2013, in Rome, Italy.]

OpenAI has paused a voice mode option for ChatGPT-4o, Sky, after backlash accusing the AI company of intentionally ripping off Scarlett Johansson’s critically acclaimed voice-acting performance in the 2013 sci-fi film Her.

In a blog post defending its casting decision for Sky, OpenAI explained in detail its process for choosing the individual voice options for its chatbot. But ultimately, the company seemed pressed to admit that Sky’s voice was just too similar to Johansson’s to keep using it, at least for now.

“We believe that AI voices should not deliberately mimic a celebrity’s distinctive voice—Sky’s voice is not an imitation of Scarlett Johansson but belongs to a different professional actress using her own natural speaking voice,” OpenAI’s blog said.

OpenAI is not naming the actress, or any of the ChatGPT-4o voice actors, to protect their privacy.

A week ago, OpenAI CEO Sam Altman seemed to invite this controversy by posting “her” on X (formerly Twitter) after announcing the ChatGPT audio-video features that he said made it more “natural” for users to interact with the chatbot.

Altman has said that Her, a movie about a man who falls in love with his virtual assistant, is among his favorite movies. He told conference attendees at Dreamforce last year that the movie “was incredibly prophetic” when depicting “interaction models of how people use AI,” The San Francisco Standard reported. And just last week, Altman touted GPT-4o’s new voice mode by promising, “it feels like AI from the movies.”

But OpenAI’s chief technology officer, Mira Murati, has said that GPT-4o’s voice modes were less inspired by Her than by studying the “really natural, rich, and interactive” aspects of human conversation, The Wall Street Journal reported.

In 2013, of course, critics praised Johansson’s Her performance as expressively capturing a wide range of emotions, which is exactly what Murati described as OpenAI’s goals for its chatbot voices. Rolling Stone noted how effectively Johansson naturally navigated between “tones sweet, sexy, caring, manipulative, and scary.” Johansson achieved this, the Hollywood Reporter said, by using a “vivacious female voice that breaks attractively but also has an inviting deeper register.”

Her director/screenwriter Spike Jonze was so intent on finding the right voice for his film’s virtual assistant that he replaced British actor Samantha Morton late in the film’s production. According to Vulture, Jonze realized that Morton’s “maternal, loving, vaguely British, and almost ghostly” voice didn’t fit his film as well as Johansson’s “younger,” “more impassioned” voice, which he said brought “more yearning.”

Late-night shows had fun mocking OpenAI’s demo featuring the Sky voice, which showed the chatbot seemingly flirting with engineers, giggling through responses like “oh, stop it. You’re making me blush.” Where The New York Times described these demo interactions as Sky being “deferential and wholly focused on the user,” The Daily Show‘s Desi Lydic joked that Sky was “clearly programmed to feed dudes’ egos.”

OpenAI is likely hoping to avoid any further controversy amidst plans to roll out more voices soon that its blog said will “better match the diverse interests and preferences of users.”

OpenAI did not immediately respond to Ars’ request for comment.

Voice actors versus AI

The OpenAI controversy arrives at a moment when many are questioning AI’s impact on creative communities, triggering early lawsuits from artists and book authors. Just this month, Sony opted all of its artists out of AI training to stop voice clones from ripping off top talents like Adele and Beyoncé.

Voice actors, too, have been monitoring increasingly sophisticated AI voice generators, waiting to see what threat AI might pose to future work opportunities. Recently, two actors sued an AI start-up called Lovo that they claimed “illegally used recordings of their voices to create technology that can compete with their voice work,” The New York Times reported. According to that lawsuit, Lovo allegedly used the actors’ actual voice clips to clone their voices.

“We don’t know how many other people have been affected,” the actors’ lawyer, Steve Cohen, told The Times.

Rather than replace voice actors, OpenAI’s blog said the company is striving to support the voice industry as it creates chatbots that will laugh at your jokes or mimic your mood. On top of paying voice actors “compensation above top-of-market rates,” OpenAI said it “worked with industry-leading casting and directing professionals to narrow down over 400 submissions” to the five voice options in the initial rollout of audio-video features.

OpenAI’s goal was to hire talents “from diverse backgrounds or who could speak multiple languages,” casting actors whose voices feel “timeless” and “inspire trust.” To OpenAI, that meant finding actors with a “warm, engaging, confidence-inspiring, charismatic voice with rich tone” that sounds “natural and easy to listen to.”

For ChatGPT-4o’s first five voice actors, the gig lasted about five months before leading to more work, OpenAI said.

“We are continuing to collaborate with the actors, who have contributed additional work for audio research and new voice capabilities in GPT-4o,” OpenAI said.

Arguably, though, these actors are helping to train AI tools that could one day replace them. The backlash defending Johansson—one of the world’s highest-paid actors—suggests that fans won’t take direct mimicry of Hollywood’s biggest stars lightly.

While criticism of the Sky voice seemed widespread, some fans thought that OpenAI overreacted by pausing it.

NYT critic Alissa Wilkinson wrote that it was only “a tad jarring” to hear Sky’s voice because “she sounded a whole lot” like Johansson. And replying to OpenAI’s X post announcing its decision to pull the voice feature for now, fans protested the AI company’s “bad decision,” with some complaining that Sky was the “best” and “hottest” voice.

At least one fan noted that OpenAI’s decision seemed to hurt the voice actor behind Sky most.

“Super unfair for the Sky voice actress,” a user called Ate-a-Pi wrote. “Just because she sounds like ScarJo, now she can never make money again. Insane.”


Slack users horrified to discover messages used for AI training

After launching Slack AI in February, Slack appears to be digging in its heels, defending its vague policy that by default sucks up customers’ data—including messages, content, and files—to train Slack’s global AI models.

According to Slack engineer Aaron Maurer, Slack has explained in a blog that the Salesforce-owned chat service does not train its large language models (LLMs) on customer data. But Slack’s policy may need updating “to explain more carefully how these privacy principles play with Slack AI,” Maurer wrote on Threads, partly because the policy “was originally written about the search/recommendation work we’ve been doing for years prior to Slack AI.”

Maurer was responding to a Threads post from engineer and writer Gergely Orosz, who called for companies to opt out of data sharing until the policy is clarified, not by a blog, but in the actual policy language.

“An ML engineer at Slack says they don’t use messages to train LLM models,” Orosz wrote. “My response is that the current terms allow them to do so. I’ll believe this is the policy when it’s in the policy. A blog post is not the privacy policy: every serious company knows this.”

The tension for users becomes clearer if you compare Slack’s privacy principles with how the company touts Slack AI.

Slack’s privacy principles specifically say that “Machine Learning (ML) and Artificial Intelligence (AI) are useful tools that we use in limited ways to enhance our product mission. To develop AI/ML models, our systems analyze Customer Data (e.g. messages, content, and files) submitted to Slack as well as other information (including usage information) as defined in our privacy policy and in your customer agreement.”

Meanwhile, Slack AI’s page says, “Work without worry. Your data is your data. We don’t use it to train Slack AI.”

Because of this incongruity, users called on Slack to update its privacy principles to make clear how data is used for Slack AI or any future AI updates. According to a Salesforce spokesperson, the company has agreed that an update is needed.

“Yesterday, some Slack community members asked for more clarity regarding our privacy principles,” Salesforce’s spokesperson told Ars. “We’ll be updating those principles today to better explain the relationship between customer data and generative AI in Slack.”

The spokesperson told Ars that the policy updates will clarify that Slack does not “develop LLMs or other generative models using customer data,” “use customer data to train third-party LLMs” or “build or train these models in such a way that they could learn, memorize, or be able to reproduce customer data.” The update will also clarify that “Slack AI uses off-the-shelf LLMs where the models don’t retain customer data,” ensuring that “customer data never leaves Slack’s trust boundary, and the providers of the LLM never have any access to the customer data.”

These changes, however, do not seem to address a key concern for users who never explicitly consented to sharing chats and other Slack content for use in AI training.

Users opting out of sharing chats with Slack

This controversial policy is not new. Wired warned about it in April, and TechCrunch reported that the policy has been in place since at least September 2023.

But widespread backlash began swelling last night on Hacker News, where Slack users called out the chat service for seemingly failing to notify users about the policy change, instead quietly opting them in by default. To critics, it felt like there was no benefit to opting in for anyone but Slack.

From there, the backlash spread to social media, where SlackHQ hastened to clarify Slack’s terms with explanations that did not seem to address all the criticism.

“I’m sorry Slack, you’re doing fucking WHAT with user DMs, messages, files, etc?” Corey Quinn, the chief cloud economist for a cost management company called Duckbill Group, posted on X. “I’m positive I’m not reading this correctly.”

SlackHQ responded to Quinn after the economist declared, “I hate this so much,” and confirmed that he had opted out of data sharing in his paid workspace.

“To clarify, Slack has platform-level machine-learning models for things like channel and emoji recommendations and search results,” SlackHQ posted. “And yes, customers can exclude their data from helping train those (non-generative) ML models. Customer data belongs to the customer.”

Later in the thread, SlackHQ noted that “Slack AI—which is our generative AI experience natively built in Slack—is a separately purchased add-on that uses large language models (LLMs) but does not train those LLMs on customer data.”



Financial institutions have 30 days to disclose breaches under new rules

REGULATION S-P —

Amendments contain loopholes that may blunt their effectiveness.


The Securities and Exchange Commission (SEC) will require some financial institutions to disclose security breaches within 30 days of learning about them.

On Wednesday, the SEC adopted changes to Regulation S-P, which governs the treatment of the personal information of consumers. Under the amendments, institutions must notify individuals whose personal information was compromised “as soon as practicable, but not later than 30 days” after learning of unauthorized network access or use of customer data. The new requirements will be binding on broker-dealers (including funding portals), investment companies, registered investment advisers, and transfer agents.

“Over the last 24 years, the nature, scale, and impact of data breaches has transformed substantially,” SEC Chair Gary Gensler said. “These amendments to Regulation S-P will make critical updates to a rule first adopted in 2000 and help protect the privacy of customers’ financial data. The basic idea for covered firms is if you’ve got a breach, then you’ve got to notify. That’s good for investors.”

Notifications must detail the incident, what information was compromised, and how those affected can protect themselves. In what appears to be a loophole in the requirements, covered institutions don’t have to issue notices if they establish that the personal information has not been used in a way to result in “substantial harm or inconvenience” or isn’t likely to.

The amendments will require covered institutions to “develop, implement, and maintain written policies and procedures” that are “reasonably designed to detect, respond to, and recover from unauthorized access to or use of customer information.” The amendments also:

• Expand and align the safeguards and disposal rules to cover both nonpublic personal information that a covered institution collects about its own customers and nonpublic personal information it receives from another financial institution about customers of that financial institution;

• Require covered institutions, other than funding portals, to make and maintain written records documenting compliance with the requirements of the safeguards rule and disposal rule;

• Conform Regulation S-P’s annual privacy notice delivery provisions to the terms of an exception added by the FAST Act, which provide that covered institutions are not required to deliver an annual privacy notice if certain conditions are met; and

• Extend both the safeguards rule and the disposal rule to transfer agents registered with the Commission or another appropriate regulatory agency.

The requirements also broaden the scope of nonpublic personal information covered beyond what the firm itself collects. The new rules will also cover personal information the firm has received from another financial institution.

SEC Commissioner Hester M. Peirce voiced concern that the new requirements may go too far.

“Today’s Regulation S-P modernization will help covered institutions appropriately prioritize safeguarding customer information,” she wrote in a statement (https://www.sec.gov/news/statement/peirce-statement-reg-s-p-051624). “Customers will be notified promptly when their information has been compromised so they can take steps to protect themselves, like changing passwords or keeping a closer eye on credit scores. My reservations stem from the breadth of the rule and the likelihood that it will spawn more consumer notices than are helpful.”

Regulation S-P hadn’t been substantially updated since its adoption in 2000.

Last year, the SEC adopted new regulations requiring publicly traded companies to disclose security breaches that materially affect or are reasonably likely to materially affect business, strategy, or financial results or conditions.

The amendments take effect 60 days after publication in the Federal Register, the official journal of the federal government that publishes regulations, notices, orders, and other documents. Larger organizations will have 18 months to comply after modifications are published. Smaller organizations will have 24 months.




Twitter URLs redirect to x.com as Musk gets closer to killing the Twitter name

Goodbye Twitter.com —

X.com stops redirecting to Twitter.com over a year after company name change.

[Image: An app icon and logo for Elon Musk's X service. Credit: Getty Images | Kirill Kudryavtsev]

Twitter.com links are now redirecting to the x.com domain as Elon Musk gets closer to wiping out the Twitter brand name over a year and a half after buying the company.

“All core systems are now on X.com,” Musk wrote in an X post today. X also displayed a message to users that said, “We are letting you know that we are changing our URL, but your privacy and data protection settings remain the same.”

Musk bought Twitter in October 2022 and turned it into X Corp. in April 2023, but the social network continued to use Twitter.com as its primary domain for more than another year. X.com links redirected to Twitter.com during that time.

There were still remnants of Twitter after today’s change. This morning, I noticed a support link took me to a help.twitter.com page. The link subsequently redirected to a help.x.com page after I sent a message to X’s public relations email, though the timing could be coincidence. After sending that message, I got the standard auto-reply, just as I have in the past.

You might still encounter Twitter links that don’t redirect to x.com, depending on which browser you use. The Verge said it is “seeing a mix of results depending upon browser choice and whether you’re logged in or not.”

I had no trouble accessing x.com on desktop browsers today. But in Safari on iPhone, I received error messages when trying to access either twitter.com or x.com without first logging in. I eventually succeeded in logging in and was able to view content, but I remained at twitter.com in the iPhone browser instead of being redirected to x.com.
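For readers who want to check the redirect behavior from their own machine, here is a minimal sketch using Python's third-party requests library; the URLs come from the reporting above, and, as The Verge's mixed results suggest, the response you get will likely vary with client, cookies, and login state.

```python
# Inspect the raw redirect responses from twitter.com and x.com.
# Requires the third-party "requests" package (pip install requests).
import requests

for url in ("https://twitter.com/", "https://x.com/"):
    # allow_redirects=False surfaces the redirect itself instead of
    # silently following it to the final page.
    resp = requests.get(url, allow_redirects=False, timeout=10)
    print(url, resp.status_code, resp.headers.get("Location"))
```

A 301 or 302 status with a Location header pointing at x.com would match the behavior described above; a 200 means the server answered without redirecting.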

This will presumably be sorted out, but the awkward Twitter-to-X transition has previously been accompanied by technical problems. In early April, Musk’s service started automatically changing “twitter.com” to “x.com” in links posted by users in the iOS app. But the automatic text replacement initially applied to any URL ending in “twitter.com” even if it wasn’t actually a twitter.com link, which meant that phishers could have taken advantage by registering misleading domain names.
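X has not published the code behind the faulty link rewriting, but the failure it describes is a classic string-matching pitfall: testing whether a hostname merely ends with "twitter.com" also matches unrelated look-alike domains. Purely as an illustration (this is not X's actual logic), a short Python sketch contrasts the naive suffix test with a stricter host check:

```python
from urllib.parse import urlparse

def naive_is_twitter(url: str) -> bool:
    # Flawed: a bare suffix test also matches look-alike hosts
    # such as "faketwitter.com".
    host = urlparse(url).hostname or ""
    return host.endswith("twitter.com")

def strict_is_twitter(url: str) -> bool:
    # Safer: require an exact match or a dot-separated subdomain.
    host = urlparse(url).hostname or ""
    return host == "twitter.com" or host.endswith(".twitter.com")

print(naive_is_twitter("https://faketwitter.com/login"))   # True (the bug)
print(strict_is_twitter("https://faketwitter.com/login"))  # False
print(strict_is_twitter("https://help.twitter.com/"))      # True
```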



Robert F. Kennedy Jr. sues Meta, citing chatbot’s reply as evidence of shadowban

[Image: Screenshot from the documentary Who Is Bobby Kennedy?]

In a lawsuit that seems determined to ignore that Section 230 exists, Robert F. Kennedy Jr. has sued Meta for allegedly shadowbanning his million-dollar documentary, Who Is Bobby Kennedy?, and preventing his supporters from advocating for his presidential campaign.

According to Kennedy, Meta is colluding with the Biden administration to sway the 2024 presidential election by suppressing Kennedy’s documentary and making it harder to support Kennedy’s candidacy. This allegedly has caused “substantial donation losses,” while also violating the free speech rights of Kennedy, his supporters, and his film’s production company, AV24.

Meta had initially restricted the documentary on Facebook and Instagram but later fixed the issue after discovering that the film was mistakenly flagged by the platforms’ automated spam filters.

But Kennedy’s complaint claimed that Meta is still “brazenly censoring speech” by “continuing to throttle, de-boost, demote, and shadowban the film.” In an exhibit, Kennedy’s lawyers attached screenshots representing “hundreds” of Facebook and Instagram users whom Meta allegedly sent threats, intimidated, and sanctioned after they shared the documentary.

Some of these users remain suspended on Meta platforms, the complaint alleged. Others whose temporary suspensions have been lifted claimed that their posts are still being throttled, though, and Kennedy’s lawyers earnestly insisted that an exchange with Meta’s chatbot proves it.

Two days after the documentary’s release, Kennedy’s team apparently asked the Meta AI assistant, “When users post the link whoisbobbykennedy.com, can their followers see the post in their feeds?”

“I can tell you that the link is currently restricted by Meta,” the chatbot answered.

Chatbots, of course, are notoriously inaccurate sources of information, and Meta AI’s terms of service note this. In a section labeled “accuracy,” Meta warns that chatbot responses “may not reflect accurate, complete, or current information” and should always be verified.

Perhaps more significantly, there is little reason to think that Meta’s chatbot would have access to information about internal content moderation decisions.

Techdirt’s Mike Masnick mocked Kennedy’s reliance on the chatbot in the case. He noted that Kennedy seemed to have no evidence of the alleged shadowbanning, while there’s plenty of evidence that Meta’s spam filters accidentally remove non-violative content all the time.

Meta’s chatbot is “just a probabilistic stochastic parrot, repeating a probable sounding answer to users’ questions,” Masnick wrote. “And these idiots think it’s meaningful evidence. This is beyond embarrassing.”

Neither Meta nor Kennedy’s lawyer, Jed Rubenfeld, responded to Ars’ request for comment.



Tesla must face fraud suit for claiming its cars could fully drive themselves

[Image: The Tesla car company's logo. Credit: Getty Images | SOPA Images]

A federal judge ruled yesterday that Tesla must face a lawsuit alleging that it committed fraud by misrepresenting the self-driving capabilities of its vehicles.

California resident Thomas LoSavio’s lawsuit points to claims made by Tesla and CEO Elon Musk starting in October 2016, a few months before LoSavio bought a 2017 Tesla Model S with “Enhanced Autopilot” and “Full Self-Driving Capability.” US District Judge Rita Lin in the Northern District of California dismissed some of LoSavio’s claims but ruled that the lawsuit can move forward on allegations of fraud:

The remaining claims, which arise out of Tesla’s alleged fraud and related negligence, may go forward to the extent they are based on two alleged representations: (1) representations that Tesla vehicles have the hardware needed for full self-driving capability and, (2) representations that a Tesla car would be able to drive itself cross-country in the coming year. While the Rule 9(b) pleading requirements are less stringent here, where Tesla allegedly engaged in a systematic pattern of fraud over a long period of time, LoSavio alleges, plausibly and with sufficient detail, that he relied on these representations before buying his car.

Tesla previously won a significant ruling in the case when a different judge upheld the carmaker’s arbitration agreement and ruled that four plaintiffs would have to go to arbitration. But LoSavio had opted out of the arbitration agreement and was given the option of filing an amended complaint.

LoSavio’s amended complaint seeks class-action status on behalf of himself “and fellow consumers who purchased or leased a new Tesla vehicle with Tesla’s ADAS [Advanced Driver Assistance System] technology but never received the self-driving car that Tesla promised them.”

Cars not fully autonomous

Lin didn’t rule on the merits of the claims but found that they are adequately alleged. LoSavio points to a Tesla statement in October 2016 that all its cars going forward would have the “hardware needed for full self-driving capability,” and a November 2016 email newsletter stating that “all Tesla vehicles produced in our factory now have full self-driving hardware.”

The ruling said:

Those statements were allegedly false because the cars lacked the combination of sensors, including lidar, needed to achieve SAE Level 4 (“High Automation”) and Level 5 (“Full Automation”), i.e., full autonomy. According to the SAC [Second Amended Complaint], Tesla’s cars have thus stalled at SAE Level 2 (“Partial Driving Automation”), which requires “the human driver’s constant supervision, responsibility, and control.”

If Tesla meant to convey that its hardware was sufficient to reach high or full automation, the SAC plainly alleges sufficient falsity. Even if Tesla meant to convey that its hardware could reach Level 2 only, the SAC still sufficiently alleges that those representations reasonably misled LoSavio.

The complaint also “sufficiently alleges that Musk falsely represented the vehicle’s future ability to self-drive cross-country and that LoSavio relied upon these representations pre-purchase,” Lin concluded. Musk claimed at an October 2016 news conference that a Tesla car would be able to drive from Los Angeles to New York City “by the end of next year without the need for a single touch.”


Bumble apologizes for ads shaming women into sex

For the past decade, the dating app Bumble has claimed to be all about empowering women. But under a new CEO, Lidiane Jones, Bumble is now apologizing for a tone-deaf ad campaign that many users said seemed to channel incel ideology by telling women to stop denying sex.

“You know full well a vow of celibacy is not the answer,” one Bumble billboard seen in Los Angeles read. “Thou shalt not give up on dating and become a nun,” read another.

One X post parodied the campaign’s imagined pitch meeting:

Bumble HQ:

“We don’t have enough women on the app.”

“They’d rather be alone than deal with men.”

“Should we teach men to be better?”

“No, we should shame women so they come back to the app.”

“Yes! Let’s make them feel bad for choosing celibacy. Great idea!”

— Arghavan Salles, MD, PhD (@arghavan_salles), May 14, 2024

Bumble intended these ads to bring “joy and humor,” the company said in an apology posted on Instagram after the backlash on social media began.

Some users threatened to delete their accounts, criticizing Bumble for ignoring religious or personal reasons for choosing celibacy. These reasons include asexuality, as well as sensibly abstaining from sex amid diminishing access to abortion nationwide.

Others accused Bumble of more shameful motives. On X (formerly Twitter), a user called UjuAnya posted that “Bumble’s main business model is selling men access to women,” since market analysts have reported that 76 percent of Bumble users are male.

“Bumble won’t alienate their primary customers (men) telling them to quit being shit,” UjuAnya posted on X. “They’ll run ads like this to make their product (women) ‘better’ and more available on their app for men.”

That account quote-tweeted an even more popular post with nearly 3 million views suggesting that Bumble needs to “fuck off and stop trying to shame women into coming back to the apps” instead of running “ads targeted at men telling them to be normal.”

One TikTok user, ItsNeetie, declared, “the Bumble reckoning is finally here.”

Bumble did not respond to Ars’ request for comment on these criticisms or to verify the user statistics.

In its apology, Bumble took responsibility for not living up to its “values” of “passionately” standing up for women and marginalized communities and defending “their right to fully exercise personal choice.” Admitting the ads were a “mistake” that “unintentionally” frustrated the dating community, the dating app responded to some of the user feedback:

Some of the perspectives we heard were from those who shared that celibacy is the only answer when reproductive rights are continuously restricted; from others for whom celibacy is a choice, one that we respect; and from the asexual community, for whom celibacy can have a particular meaning and importance, which should not be diminished. We are also aware that for many, celibacy may be brought on by harm or trauma.

Bumble’s pulled ads were part of a larger marketing campaign that at first seemed to resonate with its users. Created by the company’s in-house creative studio, according to AdAge, Bumble’s campaign attracted a lot of eyeballs by deleting Bumble’s entire Instagram feed and posting “cryptic messages” showing tired women in Renaissance-era paintings that alluded to the app’s rebrand.

In a press release, chief marketing officer Selby Drummond said that Bumble “wanted to take a fun, bold approach in celebrating the first chapter of our app’s evolution and remind women that our platform has been solving for their needs from the start.”

The dating app is increasingly investing in ads, AdAge reported, tripling its investment from $8 million in 2022 to $24 million in 2023. These ads are seemingly meant to help Bumble recover after posting “a $1.9 million net loss last year,” CNN reported, following an 86 percent drop in its share price since its initial public offering in February 2021.

Bumble’s new CEO Jones told NBC News that younger users are dating less and that Bumble’s plan was to listen to users to find new ways to grow.



Concerns over addicted kids spur probe into Meta and its use of dark patterns

Protecting the vulnerable —

EU is concerned Meta isn’t doing enough to protect children using its apps.

[Image: An iPhone screen displaying the app icons for WhatsApp, Messenger, Instagram, and Facebook. Credit: Getty Images | Chesnot]

Brussels has opened an in-depth probe into Meta over concerns it is failing to do enough to protect children from becoming addicted to social media platforms such as Instagram.

The European Commission, the EU’s executive arm, announced on Thursday it would look into whether the Silicon Valley giant’s apps were reinforcing “rabbit hole” effects, where users get drawn ever deeper into online feeds and topics.

EU investigators will also look into whether Meta, which owns Facebook and Instagram, is complying with legal obligations to provide appropriate age-verification tools to prevent children from accessing inappropriate content.

The probe is the second into the company under the EU’s Digital Services Act. The landmark legislation is designed to police content online, with sweeping new rules on the protection of minors.

It also has mechanisms to force Internet platforms to reveal how they are tackling misinformation and propaganda.

The DSA, which was approved last year, imposes new obligations on very large online platforms with more than 45 million users in the EU. If Meta is found to have broken the law, Brussels can impose fines of up to 6 percent of a company’s global annual turnover.

Repeat offenders can even face bans in the single market as an extreme measure to enforce the rules.

Thierry Breton, commissioner for internal market, said the EU was “not convinced” that Meta “has done enough to comply with the DSA obligations to mitigate the risks of negative effects to the physical and mental health of young Europeans on its platforms Facebook and Instagram.”

“We are sparing no effort to protect our children,” Breton added.

Meta said: “We want young people to have safe, age-appropriate experiences online and have spent a decade developing more than 50 tools and policies designed to protect them. This is a challenge the whole industry is facing, and we look forward to sharing details of our work with the European Commission.”

In the investigation, the commission said it would focus on whether Meta’s platforms were putting in place “appropriate and proportionate measures to ensure a high level of privacy, safety, and security for minors.” It added that it was placing special emphasis on default privacy settings for children.

Last month, the EU opened the first probe into Meta under the DSA over worries the social media giant is not properly curbing disinformation from Russia and other countries.

Brussels is especially concerned about whether the social media company’s platforms are properly moderating content from Russian sources that may try to destabilize upcoming elections across Europe.

Meta defended its moderating practices and said it had appropriate systems in place to stop the spread of disinformation on its platforms.

© 2024 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.



DOJ says Boeing faces criminal charge for violating deal over 737 Max crashes

Criminal prosecution —

DOJ determined that Boeing violated 2021 agreement spurred by two fatal crashes.

[Image: Relatives hold a poster with faces of the victims of Ethiopian Airlines Flight 302 outside a courthouse in Fort Worth, Texas, on January 26, 2023. Credit: Getty Images | Shelby Tauber]

The US Department of Justice yesterday said it has determined that Boeing violated a 2021 agreement spurred by two fatal crashes and is now facing a potential criminal prosecution.

Boeing violated the agreement “by failing to design, implement, and enforce a compliance and ethics program to prevent and detect violations of the US fraud laws throughout its operations,” the DOJ said in a filing in US District Court for the Northern District of Texas. Because of this, “Boeing is subject to prosecution by the United States for any federal criminal violation of which the United States has knowledge,” the DOJ said.

The US government is still determining whether to initiate a prosecution and said it will make a decision by July 7. Under terms of the 2021 agreement, Boeing has 30 days to respond to the government’s notice.

The DOJ court filing did not list any specific incidents. But the notice came after a January 2024 incident in which a 737 Max 9 used by Alaska Airlines had to make an emergency landing because a door plug blew off the aircraft in mid-flight. Boeing also recently said that some workers skipped required tests on the 787 Dreamliner planes but falsely recorded the work as having been completed.

Boeing itself referred to the Alaska Airlines flight in a statement the company provided to Ars today. Boeing confirmed that it received a communication “from the Justice Department, stating that the Department has made a determination that we have not met our obligations under our 2021 deferred prosecution agreement, and requesting the company’s response.”

“We believe that we have honored the terms of that agreement and look forward to the opportunity to respond to the Department on this issue,” Boeing said. “As we do so, we will engage with the Department with the utmost transparency, as we have throughout the entire term of the agreement, including in response to their questions following the Alaska Airlines 1282 accident.”

Deal struck after crash deaths of 346 passengers

Yesterday’s DOJ court filing said that Boeing could be prosecuted for the charge listed in the one-count criminal information that was filed at the same time as the deferred prosecution agreement in 2021. That document alleged that Boeing defrauded the Federal Aviation Administration in connection with the agency’s evaluation of the Boeing 737 Max. The DOJ filing yesterday said Boeing could also be prosecuted for other offenses.

In January 2021, the DOJ announced that Boeing signed the deferred prosecution agreement “to resolve a criminal charge related to a conspiracy to defraud the Federal Aviation Administration’s Aircraft Evaluation Group (FAA AEG) in connection with the FAA AEG’s evaluation of Boeing’s 737 Max airplane.”

This occurred after 346 passengers died in two Boeing 737 Max crashes in 2018 and 2019 in Indonesia and Ethiopia. Boeing agreed to pay $2.5 billion, including $1.77 billion in compensation for airline customers and $500 million for the heirs, relatives, and legal beneficiaries of the crash victims.

“The tragic crashes of Lion Air Flight 610 and Ethiopian Airlines Flight 302 exposed fraudulent and deceptive conduct by employees of one of the world’s leading commercial airplane manufacturers,” Acting Assistant Attorney General David Burns said when the 2021 deal was struck. “Boeing’s employees chose the path of profit over candor by concealing material information from the FAA concerning the operation of its 737 Max airplane and engaging in an effort to cover up their deception.”

US Attorney Erin Nealy Cox said then that “misleading statements, half-truths, and omissions communicated by Boeing employees to the FAA impeded the government’s ability to ensure the safety of the flying public.”

The nonprofit Foundation for Aviation Safety, which is led by former Boeing employee Ed Pierson, recently accused Boeing of violating the deferred prosecution agreement. Pierson alleged in a December 2023 court filing that “Boeing has deliberately provided false, incomplete, and misleading information to the FAA, the flying public, airline customers, regulators, and investors.”

Meeting with victims’ families

The DOJ court filing yesterday said the department is continuing to confer with the airlines and family members of the crash victims.

“To that end, the Government separately notified the victims and the airline customers today of the breach determination,” the DOJ wrote. “The Government also has already scheduled a conferral session for May 31, 2024, with the victims. The Government last conferred with the victims on April 24, 2024, to discuss the issue of whether Boeing breached the [deferred prosecution agreement].”

Paul Cassell, an attorney for victims’ families, said the DOJ filing “is a positive first step, and for the families, a long time coming. But we need to see further action from DOJ to hold Boeing accountable and plan to use our meeting on May 31 to explain in more detail what we believe would be a satisfactory remedy to Boeing’s ongoing criminal conduct.”



Report: Microsoft to face antitrust case over Teams

VS. —

Unbundling Teams from Office has apparently failed to impress EU regulators.


Brussels is set to issue new antitrust charges against Microsoft over concerns that the software giant is undermining rivals to its videoconferencing app Teams.

According to three people with knowledge of the move, the European Commission is pressing ahead with a formal charge sheet against the world’s most valuable listed tech company over concerns it is restricting competition in the sector.

Microsoft last month offered concessions as it sought to avoid regulatory action, including extending a plan to unbundle Teams from other software such as Office, not just in Europe but across the world.

However, people familiar with EU officials’ thinking said they were still concerned that the company had not gone far enough to facilitate fairness in the market.

Rivals are concerned that Teams will work better with Microsoft’s own software than competing apps can. Another concern is the lack of data portability, which makes it difficult for existing Teams users to switch to alternatives.

The commission’s move would represent an escalation of a case that dates back to 2020 after Slack, now owned by Salesforce, submitted a formal complaint over Microsoft’s Teams.

It also would end a decade-long truce between EU regulators and the US tech company after a series of competition probes that concluded in 2013, when the EU issued a €561 million fine against Microsoft for failing to comply with a decision over the bundling of its Internet Explorer browser with the Windows operating system.

Charges could come in the next few weeks, said the people familiar with the commission’s thinking. Rivals of Microsoft and the commission are meeting this week to discuss the case, in an indication that the charges are being prepared, the people said.

However, they warned that Microsoft could still offer last-minute concessions that would derail the EU’s case, or the commission might decide to delay or scrap the charges against the company.

Microsoft risks fines of up to 10 percent of its global annual turnover if found to have breached EU competition law.

The company declined to comment but referred to an earlier statement that said it would “continue to engage with the commission, listen to concerns in the marketplace, and remain open to exploring pragmatic solutions that benefit both customers and developers in Europe.”

The commission declined to comment.

The move against Microsoft comes at a time of heightened scrutiny of its activities. The EU is also investigating whether the tech group’s $13 billion alliance with ChatGPT maker OpenAI breaks competition law.

Microsoft is also one of a handful of tech companies, including Google and Meta, designated as “gatekeepers” under the new Digital Markets Act, meaning it has special responsibilities when trading in Europe.

The tech company has also faced complaints from European cloud computing providers that are concerned that Microsoft is abusing its dominant position in the sector to force users to buy its products and squashing competition from smaller start-ups in Europe.

© 2024 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.
