Twitter


EU accuses TikTok of failing to stop kids pretending to be adults

Getting TikTok’s priorities straight —

TikTok becomes the second platform suspected of Digital Services Act breaches.


The European Commission (EC) is concerned that TikTok isn’t doing enough to protect kids, alleging that the short-video app may be sending minors down rabbit holes of harmful content while making it easy for them to pretend to be adults and avoid the protective content filters that do exist.

The allegations came Monday when the EC announced a formal investigation into how TikTok may be breaching the Digital Services Act (DSA) “in areas linked to the protection of minors, advertising transparency, data access for researchers, as well as the risk management of addictive design and harmful content.”

“We must spare no effort to protect our children,” Thierry Breton, European Commissioner for Internal Market, said in the press release, reiterating that the “protection of minors is a top enforcement priority for the DSA.”

This makes TikTok the second platform investigated for possible DSA breaches after X (aka Twitter) came under fire last December. Both are being scrutinized after submitting transparency reports in September that the EC said failed to satisfy the DSA’s strict standards, falling short in predictable areas like advertising transparency and data access for researchers.

But while X is additionally being investigated over alleged dark patterns and disinformation—following accusations last October that X wasn’t stopping the spread of Israel/Hamas disinformation—it’s TikTok’s young user base that appears to be the focus of the EC’s probe into its platform.

“As a platform that reaches millions of children and teenagers, TikTok must fully comply with the DSA and has a particular role to play in the protection of minors online,” Breton said. “We are launching this formal infringement proceeding today to ensure that proportionate action is taken to protect the physical and emotional well-being of young Europeans.”

Over the coming months, the EC will likely request more information from TikTok, picking apart its DSA transparency report. The probe could require interviews with TikTok staff or inspections of TikTok’s offices.

Upon concluding its investigation, the EC could require TikTok to take interim measures to fix any issues that are flagged. The Commission could also make a decision regarding non-compliance, potentially subjecting TikTok to fines of up to 6 percent of its global turnover.

An EC press officer, Thomas Regnier, told Ars that the Commission suspected that TikTok “has not diligently conducted” risk assessments to properly maintain mitigation efforts protecting “the physical and mental well-being of their users, and the rights of the child.”

In particular, its algorithm may risk “stimulating addictive behavior,” and its recommender systems “might drag its users, in particular minors and vulnerable users, into a so-called ‘rabbit hole’ of repetitive harmful content,” Regnier told Ars. Further, TikTok’s age verification system may be subpar, with the EU alleging that TikTok perhaps “failed to diligently assess the risk of 13-17-year-olds pretending to be adults when accessing TikTok,” Regnier said.
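The “rabbit hole” dynamic the Commission describes is, at its core, a feedback loop: what gets shown earns engagement, and engagement makes it get shown more. Below is a deliberately oversimplified toy simulation of that loop (not TikTok’s actual recommender, whose design is not public), just to make the compounding mechanism concrete.

```python
import random

# Toy model of engagement-driven recommendation drift. This is NOT TikTok's
# algorithm; it only illustrates how a show-more-of-what-was-shown loop can
# collapse a feed onto a single topic.
topics = ["sports", "music", "news"]
weights = {t: 1.0 for t in topics}

random.seed(42)
for _ in range(200):
    # Pick the next video's topic in proportion to current weights.
    shown = random.choices(topics, weights=[weights[t] for t in topics])[0]
    # Feedback: whatever was shown becomes a little more likely next time.
    weights[shown] *= 1.1

print(weights)  # one topic's weight typically dwarfs the others
```

Small early skews compound exponentially here, which is why the repetitiveness the EC worries about can emerge even without anyone designing for it.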

To better protect TikTok’s young users, the EU’s investigation could force TikTok to update its age-verification system and overhaul its default privacy, safety, and security settings for minors.

“In particular, the Commission suspects that the default settings of TikTok’s recommender systems do not ensure a high level of privacy, security, and safety of minors,” Regnier said. “The Commission also suspects that the default privacy settings that TikTok has for 16-17-year-olds are not the highest by default, which would not be compliant with the DSA, and that push notifications are, by default, not switched off for minors, which could negatively impact children’s safety.”

TikTok could avoid steep fines by committing to remedies recommended by the EC at the conclusion of its investigation.

Regnier told Ars that the EC does not comment on ongoing investigations, but its probe into X has spanned three months so far. Because the DSA does not provide any deadlines that may speed up these kinds of enforcement proceedings, ultimately, the duration of both investigations will depend on how much “the company concerned cooperates,” the EU’s press release said.

A TikTok spokesperson told Ars that TikTok “would continue to work with experts and the industry to keep young people on its platform safe,” confirming that the company “looked forward to explaining this work in detail to the European Commission.”

“TikTok has pioneered features and settings to protect teens and keep under-13s off the platform, issues the whole industry is grappling with,” TikTok’s spokesperson said.

All online platforms are now required to comply with the DSA, but enforcement on TikTok began near the end of August 2023. A TikTok press release last August promised that the platform would be “embracing” the DSA. But in its transparency report, submitted the next month, TikTok acknowledged that the report only covered “one month of metrics” and may not satisfy DSA standards.

“We still have more work to do,” TikTok’s report said, promising that “we are working hard to address these points ahead of our next DSA transparency report.”



Elon Musk’s X allows China-based propaganda banned on other platforms

Rinse-wash-repeat. —

X accused of overlooking propaganda flagged by Meta and criminal prosecutors.


Lax content moderation on X (aka Twitter) has disrupted coordinated efforts between social media companies and law enforcement to tamp down on “propaganda accounts controlled by foreign entities aiming to influence US politics,” The Washington Post reported.

Now propaganda is “flourishing” on X, The Post said, while other social media companies are stuck in endless cycles, watching some of the propaganda that they block proliferate on X, then inevitably spread back to their platforms.

Meta, Google, and then-Twitter began coordinating takedown efforts with law enforcement and disinformation researchers after Russian-backed influence campaigns manipulated their platforms in hopes of swaying the 2016 US presidential election.

The next year, all three companies promised Congress to work tirelessly to stop Russian-backed propaganda from spreading on their platforms. The companies created explicit election misinformation policies and began meeting biweekly to compare notes on propaganda networks each platform uncovered, according to The Post’s interviews with anonymous sources who participated in these meetings.

However, after Elon Musk purchased Twitter and rebranded the company as X, his company withdrew from the alliance in May 2023.

Sources told The Post that the last X meeting attendee was Irish intelligence expert Aaron Rodericks—who was allegedly disciplined for liking an X post calling Musk “a dipshit.” Rodericks was subsequently laid off when Musk dismissed the entire election integrity team last September, and after that, X apparently ditched the biweekly meeting entirely and “just kind of disappeared,” a source told The Post.

In 2023, for example, Meta flagged 150 “artificial influence accounts” identified on its platform, of which “136 were still present on X as of Thursday evening,” according to The Post’s analysis. X’s seeming oversight extends to all but eight of the 123 “deceptive China-based campaigns” connected to accounts that Meta flagged last May, August, and December, The Post reported.

The Post’s report also provided an exclusive analysis from the Stanford Internet Observatory (SIO), which found that 86 propaganda accounts that Meta flagged last November “are still active on X.”

The majority of these accounts—81—were China-based accounts posing as Americans, SIO reported. These accounts frequently ripped photos from Americans’ LinkedIn profiles, then changed the real Americans’ names while posting about both China and US politics, as well as people often trending on X, such as Musk and Joe Biden.

Meta has warned that China-based influence campaigns are “multiplying,” The Post noted, while X’s standards remain seemingly too relaxed. Even accounts linked to criminal investigations remain active on X. One “account that is accused of being run by the Chinese Ministry of Public Security,” The Post reported, remains on X despite its posts being cited by US prosecutors in a criminal complaint.

Prosecutors connected that account to “dozens” of X accounts attempting to “shape public perceptions” about the Chinese Communist Party, the Chinese government, and other world leaders. The accounts also comment on hot-button topics like the fentanyl problem or police brutality, seemingly to convey “a sense of dismay over the state of America without any clear partisan bent,” Elise Thomas, an analyst for a London nonprofit called the Institute for Strategic Dialogue, told The Post.

Some X accounts flagged by The Post had more than 1 million followers. Five have paid X for verification, suggesting that their disinformation campaigns—targeting hashtags to confound discourse on US politics—are seemingly being boosted by X.

SIO technical research manager Renée DiResta criticized X’s decision to stop coordinating with other platforms.

“The presence of these accounts reinforces the fact that state actors continue to try to influence US politics by masquerading as media and fellow Americans,” DiResta told The Post. “Ahead of the 2022 midterms, researchers and platform integrity teams were collaborating to disrupt foreign influence efforts. That collaboration seems to have ground to a halt, Twitter does not seem to be addressing even networks identified by its peers, and that’s not great.”

Musk shut down X’s election integrity team because he claimed that the team was actually “undermining” election integrity. But analysts are bracing for floods of misinformation to sway 2024 elections, as some major platforms have removed election misinformation policies just as rapid advances in AI technologies have made misinformation spread via text, images, audio, and video harder for the average person to detect.

In one prominent example, a fake robocaller relied on AI voice technology to pose as Biden to tell Democrats not to vote. That incident seemingly pushed the Federal Trade Commission on Thursday to propose penalizing AI impersonation.

It seems apparent that propaganda accounts from foreign entities on X will use every tool available to get eyes on their content, perhaps expecting Musk’s platform to be the slowest to police them. According to The Post, some of the X accounts spreading propaganda are using what appears to be AI-generated images of Biden and Donald Trump to garner tens of thousands of views on posts.

It’s possible that X will start tightening up on content moderation as elections draw closer. Yesterday, X joined Amazon, Google, Meta, OpenAI, TikTok, and other Big Tech companies in signing an agreement to fight “deceptive use of AI” during 2024 elections. Among the top goals identified in the “AI Elections accord” are identifying where propaganda originates, detecting how propaganda spreads across platforms, and “undertaking collective efforts to evaluate and learn from the experiences and outcomes of dealing” with propaganda.



AMC to pay $8M for allegedly violating 1988 law with use of Meta Pixel

Stream like no one is watching —

Proposed settlement impacts millions using AMC apps like Shudder and AMC+.


On Thursday, AMC notified subscribers of a proposed $8.3 million settlement that provides awards to an estimated 6 million subscribers of its six streaming services: AMC+, Shudder, Acorn TV, ALLBLK, SundanceNow, and HIDIVE.

The settlement comes in response to allegations that AMC illegally shared subscribers’ viewing history with tech companies like Google, Facebook, and X (aka Twitter) in violation of the Video Privacy Protection Act (VPPA).

Passed in 1988, the VPPA prohibits AMC and other video service providers from sharing “information which identifies a person as having requested or obtained specific video materials or services from a video tape service provider.” It was originally passed to protect individuals’ right to private viewing habits, after a journalist published the mostly unrevealing video rental history of a judge, Robert Bork, who had been nominated to the Supreme Court by Ronald Reagan.

The so-called “Bork Tapes” revealed little—other than that the judge frequently rented spy thrillers and British costume dramas—but lawmakers recognized that speech could be chilled by monitoring anyone’s viewing habits. While the law was born in the era of Blockbuster Video, subscribers suing AMC wrote in their amended complaint that “the importance of legislation like the VPPA in the modern era of datamining is more pronounced than ever before.”

According to subscribers suing, AMC allegedly installed tracking technologies—including the Meta Pixel, the X Tracking Pixel, and Google Tracking Technology—on its website, allowing their personally identifying information to be connected with their viewing history.

Some trackers, like the Meta Pixel, required AMC to choose what kind of activity can be tracked, and subscribers claimed that AMC had willingly opted into sharing video names and URLs with Meta, along with a Facebook ID. “Anyone” could use the Facebook ID, subscribers said, to identify the AMC subscribers “simply by entering https://www.facebook.com/[unencrypted FID]/” into a browser.

X’s ID could similarly be de-anonymized, subscribers alleged, by using tweeterid.com.
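To make the alleged de-anonymization concrete, here is a minimal sketch of the lookup the complaint describes. The FID value is a made-up placeholder, not a real identifier.

```python
# Sketch of the lookup alleged in the complaint: an unencrypted Facebook ID
# (FID) maps directly to a public profile URL. The FID below is a fake
# placeholder used for illustration only.
def profile_url_from_fid(fid: str) -> str:
    return f"https://www.facebook.com/{fid}/"

# Per the complaint, "anyone" could paste the resulting URL into a browser
# to see whose account the tracked viewing history belonged to.
print(profile_url_from_fid("100000000000000"))
```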

AMC “could easily program its AMC Services websites so that this information is not disclosed” to tech companies, subscribers alleged.

Denying wrongdoing, AMC has defended its use of tracking technologies but is proposing to settle with subscribers to avoid uncertain outcomes from litigation, the proposed settlement said.

A hearing to approve the proposed settlement has been scheduled for May 16.

If it’s approved, AMC has agreed to “suspend, remove, or modify operation of the Meta Pixel and other Third-Party Tracking Technologies so that use of such technologies on AMC Services will not result in AMC’s disclosure to the third-party technology companies of the specific video content requested or obtained by a specific individual.”

Google and X did not immediately respond to Ars’ request to comment. Meta declined to comment.

All registered users of AMC services who “requested or obtained video content on at least one of the six AMC services” between January 18, 2021, and January 10, 2024, are currently eligible to submit claims under the proposed settlement. The deadline to submit is April 9.

In addition to distributing the $8.3 million settlement fund among class members, subscribers will receive a free one-week digital subscription.

According to AMC’s notice to subscribers (full disclosure, I am one), AMC’s agreement to avoid sharing subscribers’ viewing histories may change if the VPPA is amended, repealed, or invalidated. If the law changes to permit sharing viewing data at the core of subscribers’ claim, AMC may resume sharing that information with tech companies.

That day could come soon if Patreon has its way. Recently, Patreon asked a federal judge to rule that the VPPA is unconstitutional.

The lawsuit against Patreon is similar, alleging that its use of the Meta Pixel violated the VPPA by sharing video views on its platform with Meta.

Patreon has argued that the VPPA is unconstitutional because it chills speech. Patreon said that the law was enacted “for the express purpose of silencing disclosures about political figures and their video-watching, an issue of undisputed continuing public interest and concern.”

According to Patreon, the VPPA narrowly prohibits video service providers from sharing video titles, but not from sharing information that people may wish to keep private, such as “the genres, performers, directors, political views, sexual content, and every other detail of pre-recorded video that those consumers watch.”

Therefore, Patreon argued, the VPPA “restrains speech” while “doing little if anything to protect privacy” and never protecting privacy “by the least restrictive means.”

That lawsuit remains ongoing, but Patreon’s position is likely to be met with opposition from experts who typically also defend freedom of speech. Experts at the Electronic Privacy Information Center, like AMC subscribers suing, consider the VPPA one of America’s “strongest protections of consumer privacy against a specific form of data collection.” And the Electronic Frontier Foundation (EFF) has already moved to convince the court to reject Patreon’s claim, describing the VPPA in a blog as an “essential” privacy protection.

“EFF is second to none in fighting for everyone’s First Amendment rights in court,” EFF’s blog said. “But Patreon’s First Amendment argument is wrong and misguided. The company seeks to elevate its speech interests over those of Internet users who benefit from the VPPA’s protections.”



Bluesky finally gets rid of invite codes, lets everyone join


After more than a year as an exclusive invite-only social media platform, Bluesky is now open to the public, so anyone can join without needing a once-coveted invite code.

In a blog, Bluesky said that requiring invite codes helped it “manage growth” while building features that allow users to control what content they see on the social platform.

When Bluesky debuted, many viewed it as a potential Twitter killer, but limited access may have weakened momentum. As of January 2024, Bluesky has more than 3 million users. That’s far fewer than X (formerly Twitter), which estimates suggest currently boasts more than 400 million global users.

But Bluesky CEO Jay Graber wrote in a blog last April that the app needed time because its goal was to piece together a new kind of social network built on its own decentralized protocol, AT Protocol. This technology allows users to freely port their social media accounts to different social platforms—including followers—rather than being locked into walled-off experiences on a platform owned by “a single company” like Meta’s Threads.

Perhaps most critically, the team wanted time to build out content moderation features before opening Bluesky to the masses to “prioritize user safety from the start.”

Bluesky plans to take a threefold approach to content moderation. The first layer is automated filtering that removes illegal, harmful content like child sexual abuse materials. Beyond that, Bluesky will soon give users extra layers of protection, including community labels and options to enable admins running servers to filter content manually.

Labeling services will be rolled out “in the coming weeks,” the blog said. These labels will make it possible for individuals or organizations to run their own moderation services, such as a trusted fact-checking organization. Users who trust these sources can subscribe to labeling services that filter out or appropriately label different types of content, like “spam” or “NSFW.”

“The human-generated label sets can be thought of as something similar to shared mute/block lists,” Bluesky explained last year.
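As a rough illustration of that comparison, here is a hypothetical sketch of how a client could apply subscribed label sets when assembling a feed. The record shape and names are invented for this example and are not the actual AT Protocol schema.

```python
from dataclasses import dataclass

@dataclass
class Label:
    subject: str  # URI of the labeled post or account
    value: str    # e.g., "spam" or "nsfw"
    labeler: str  # the labeling service that issued the label

def filter_feed(posts, labels, hidden=frozenset({"spam"})):
    # Hide any post that a subscribed labeler tagged with a hidden value,
    # much like applying a shared mute/block list.
    flagged = {label.subject for label in labels if label.value in hidden}
    return [post for post in posts if post["uri"] not in flagged]

posts = [
    {"uri": "post://1", "text": "hello world"},
    {"uri": "post://2", "text": "buy now!!!"},
]
labels = [Label(subject="post://2", value="spam", labeler="labeler.example")]
print(filter_feed(posts, labels))  # only post://1 survives
```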

Currently, Bluesky is recruiting partners for labeling services and did not immediately respond to Ars’ request to comment on any initial partnerships already formed.

It appears that Bluesky is hoping to bring in new users while introducing some of its flashiest features. Within the next month, Bluesky will also “be rolling out an experimental early version of ‘federation,’ or the feature that makes the network so open and customizable,” the blog said. The sales pitch is simple:

On Bluesky, you’ll have the freedom to choose (and the right to leave) instead of being held to the whims of private companies or black box algorithms. And wherever you go, your friends and relationships can go with you.

Developers interested in experimenting with the earliest version of AT Protocol can start testing out self-hosting servers now.

In addition to allowing users to customize content moderation, Bluesky also provides ways to customize feeds. By default, new users will only see posts from accounts they follow, but they can also set up filters to discover content they enjoy without relying on a company’s algorithm to learn what interests them.
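A toy sketch of that default behavior (data shapes and names are hypothetical): posts from followed accounts come through as-is, and any discovery happens only through filters the user explicitly chooses.

```python
def build_feed(posts, following, keyword_filters=()):
    # Default: only posts from accounts the user follows, no ranking model.
    feed = [p for p in posts if p["author"] in following]
    # Optional, user-chosen filters surface extra content on the user's terms.
    for p in posts:
        if p["author"] not in following and any(
            kw in p["text"].lower() for kw in keyword_filters
        ):
            feed.append(p)
    return feed

posts = [
    {"author": "alice", "text": "Migrating my account via AT Protocol"},
    {"author": "bob", "text": "Lunch photos"},
]
print(build_feed(posts, following={"bob"}, keyword_filters=("at protocol",)))
```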

Bluesky users who sat on invite codes over the past year have joked about their uselessness now, with some designating themselves as legacy users. Seeming to reference Twitter’s once-coveted blue checks, one Bluesky user responding to a post from Graber joked, “When does everyone from the invite-only days get their Bluesky Elder profile badge?”



X can’t stop spread of explicit, fake AI Taylor Swift images

Escalating the situation —

Will Swifties’ war on AI fakes spark a deepfake porn reckoning?


Explicit, fake AI-generated images sexualizing Taylor Swift began circulating online this week, quickly sparking mass outrage that may finally force a mainstream reckoning with harms caused by spreading non-consensual deepfake pornography.

A wide variety of deepfakes targeting Swift began spreading on X, the platform formerly known as Twitter, yesterday.

Ars found that some posts have been removed, while others remain online, as of this writing. One X post was viewed more than 45 million times over approximately 17 hours before it was removed, The Verge reported. Seemingly fueling more spread, X promoted these posts under the trending topic “Taylor Swift AI” in some regions, The Verge reported.

The Verge noted that since these images started spreading, “a deluge of new graphic fakes have since appeared.” According to Fast Company, these harmful images were posted on X but soon spread to other platforms, including Reddit, Facebook, and Instagram. Some platforms, like X, ban sharing of AI-generated images but seem to struggle with detecting banned content before it becomes widely viewed.

Ars’ AI reporter Benj Edwards warned in 2022 that AI image-generation technology was rapidly advancing, making it easy to train an AI model on just a handful of photos before it could be used to create fake but convincing images of that person in infinite quantities. That is seemingly what happened to Swift, and it’s currently unknown how many different non-consensual deepfakes have been generated or how widely those images have spread.

It’s also unknown what consequences have resulted from spreading the images. At least one verified X user had their account suspended after sharing fake images of Swift, The Verge reported, but Ars reviewed posts on X from Swift fans targeting others who allegedly shared images whose accounts remain active. Swift fans also have been uploading countless favorite photos of Swift to bury the harmful images and prevent them from appearing in various X searches. Her fans seem dedicated to reducing the spread however they can, with some posting different addresses, seemingly in attempts to dox an X user who, they’ve alleged, is the initial source of the images.

Neither X nor Swift’s team has yet commented on the deepfakes, but it seems clear that solving the problem will require more than just requesting removals from social media platforms. The AI model trained on Swift’s images is likely still out there, likely procured through one of the known websites that specialize in making fine-tuned celebrity AI models. As long as the model exists, anyone with access could crank out as many new images as they wanted, making it hard for even someone with Swift’s resources to make the problem go away for good.

In that way, Swift’s predicament might raise awareness of why creating and sharing non-consensual deepfake pornography is harmful, perhaps moving the culture away from persistent notions that nobody is harmed by non-consensual AI-generated fakes.

Swift’s plight could also inspire regulators to act faster to combat non-consensual deepfake porn. Last year, she inspired a Senate hearing after a Live Nation scandal frustrated her fans, triggering lawmakers’ antitrust concerns about the leading ticket seller, The New York Times reported.

Some lawmakers are already working to combat deepfake porn. Congressman Joe Morelle (D-NY) proposed a law criminalizing deepfake porn earlier this year after teen boys at a New Jersey high school used AI image generators to create and share non-consensual fake nude images of female classmates. Under that proposed law, anyone sharing deepfake pornography without an individual’s consent risks fines and being imprisoned for up to two years. Damages could go as high as $150,000 and imprisonment for as long as 10 years if sharing the images facilitates violence or impacts the proceedings of a government agency.

Elsewhere, the UK’s Online Safety Act restricts any illegal content from being shared on platforms, including deepfake pornography. It requires moderation, or companies will risk fines worth more than $20 million, or 10 percent of their global annual turnover, whichever amount is higher.

The UK law, however, is controversial because it requires companies to scan private messages for illegal content. That makes it practically impossible for platforms to provide end-to-end encryption, which the American Civil Liberties Union has described as vital for user privacy and security.

As regulators tangle with legal questions and social media users with moral ones, some AI image generators have moved to limit models from producing NSFW outputs. Some, such as Stability AI, the company behind Stable Diffusion, did this by removing many of the sexualized images from their models’ training data. Others, like Microsoft’s Bing image creator, make it easy for users to report NSFW outputs.

But so far, keeping up with reports of deepfake porn seems to fall squarely on social media platforms’ shoulders. Swift’s battle this week shows how unprepared even the biggest platforms currently are to handle blitzes of harmful images seemingly uploaded faster than they can be removed.



Elon Musk drops price of X gold checks amid rampant crypto scams


There’s currently a surge in cryptocurrency and phishing scams proliferating on X (formerly Twitter)—hiding under the guise of gold and gray checkmarks intended to mark “Verified Organizations,” reports have warned this week.

These scams seem to mostly commandeer dormant X accounts purchased online through dark web marketplaces, according to a whitepaper released by the digital threat monitoring platform CloudSEK. But the scams have also targeted high-profile X users who claim that they had enhanced security measures in place to protect against these hacks.

This suggests that X scammers are growing more sophisticated at a time when X has launched an effort to sell even more gold checks at lower prices through a basic tier announced this week.

Most recently, the cyber threat intelligence company Mandiant—which is a subsidiary of Google—confirmed its X account was hijacked despite enabling two-factor authentication. According to Bleeping Computer, the hackers used Mandiant’s account to “distribute a fake airdrop that emptied cryptocurrency wallets.”

A Google spokesperson declined to comment on how many users may have been scammed, but Mandiant is investigating and promised to share results when its probe concludes.

In September, a similar fate befell Ethereum co-founder Vitalik Buterin, who had his account hijacked by hackers. The bad actors posted a fake offer for free non-fungible tokens (NFTs) with a link to a fake website designed to empty cryptocurrency wallets. The post was only up for about 20 minutes but drained $691,000 in digital assets from Buterin’s unsuspecting followers, according to CloudSEK’s research.

Another group monitoring cryptocurrency and phishing scams linked to X accounts is MalwareHunterTeam (MHT), Bleeping Computer reported. This week, MHT has flagged additional scams targeting politicians’ accounts, including a Canadian senator, Amina Gerba, and a Brazilian politician, Ubiratan Sanderson.

On X, gold ticks are supposed to reassure users that an account can be trusted by designating that an account is affiliated with an official organization or company. Gray ticks signify an account is linked to government organizations. CloudSEK estimated that hijacked gold and gray checks could be sold online for between $1,200 and $2,000, depending on how old the account is or how many followers it has. Bad actors can also buy accounts affiliated with gold accounts for $500 each.

A CloudSEK spokesperson told Ars that its team is “in the process of reporting the matter” to X.

X did not immediately respond to Ars’ request to comment.

CloudSEK predicted that scams involving gold checks would continue to be a problem so long as selling gold and gray checks remains profitable.

“It is evident that threat actors would not budge from such profit-making businesses anytime soon,” CloudSEK’s whitepaper said.

For organizations seeking to avoid being targeted by hackers on X, CloudSEK recommends strengthening brand monitoring on the platform, enhancing security settings, and closing out any dormant accounts. It’s also wise for organizations to cease storing passwords in a browser, and instead use a password manager that’s less vulnerable to malware attacks, CloudSEK said. Organizations on X may also want to monitor activity on any apps that become connected to X, Bleeping Computer advised.



Elon Musk told bankers they wouldn’t lose any money on Twitter purchase

Value destruction —

Lenders unlikely to get even 60 cents on the dollar for the bonds and loans.


Elon Musk privately told some of the bankers who lent him $13 billion to fund his leveraged buyout of Twitter that they would not lose any money on the deal, according to five people familiar with the matter.

The verbal guarantees were made by Musk to banks as a way to reassure the lenders as the value of the social media site, now rebranded as X, fell sharply after he completed the acquisition last year.

Despite the assurances, the seven banks that lent money to the billionaire for his buyout—Morgan Stanley, Bank of America, Barclays, MUFG, BNP Paribas, Mizuho and Société Générale—are facing serious losses on the debt if and when they eventually sell it.

The sources did not specify when Musk’s assurances were made, although one noted Musk had made them on several occasions. But the billionaire’s behavior, both in attempting to back out of the takeover in 2022 and more recently in alienating advertisers, has more broadly stymied the banks’ efforts to offload the debt since he engineered the takeover.

Large hedge funds and credit investors on Wall Street held conversations with the banks late last year, offering to buy the senior-most portion of the debt at roughly 65 cents on the dollar. But in recent interviews with the Financial Times, several said there was no price at which they would buy the bonds and loans, given their inability to gauge whether Linda Yaccarino, X’s chief executive, could turn the business around.

One multibillion-dollar firm that specializes in distressed debt called X’s debt “uninvestable.”

Selling the $12.5 billion of bonds and loans below 60 cents on the dollar—a price many investors believe the banks would be lucky to achieve in the current market—would imply losses of $4 billion or more before accounting for X’s interest payments, writedowns that have not yet been publicly reported by the syndicate of lenders, according to FT calculations. The debt is split between $6.5 billion of term loans, as well as $6 billion of senior and junior bonds and a $500 million revolver.
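As a back-of-the-envelope check on those figures (assuming the full $12.5 billion sells at exactly 60 cents on the dollar, and ignoring the revolver and any interest the banks have already collected):

```python
face_value = 12.5e9   # bonds and loans, per the FT's breakdown
sale_price = 0.60     # cents on the dollar
loss = face_value * (1 - sale_price)
print(f"Implied writedown: ${loss / 1e9:.1f}B")  # ~$5.0B, i.e., "$4 billion or more"
```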

Morgan Stanley, Bank of America, Barclays, MUFG, BNP Paribas, Mizuho and Société Générale declined to comment. A spokesperson for X declined to comment. Musk did not return a request for comment.

The banks have held the debt on their balance sheets instead of selling at a steep loss in the hope that X’s performance will improve following a series of cost-cutting measures. Several people involved in the transaction noted that there was no plan to sell the debt imminently, with one saying there was no guarantee the banks would be able to offload the debt even in 2024.

The people involved in the deal cautioned that Musk’s guarantee was not based on any formal contract. One said they understood it as a boastful statement that the entrepreneur had never let his lenders down.

“I have never lost money for those who invest in me and I am not starting now,” he told Axios earlier this month, when asked about a separate fundraising push by his company X.ai Corp.

Some on Wall Street view Musk’s personal guarantees with skepticism, given that he tried to back out of his agreement to buy Twitter despite a watertight contract, before relenting.

Nevertheless, the guarantee from a man whose net worth Forbes pegs at about $243 billion has helped some of the bankers make the pitch to their internal committees that they can ascribe a higher price to the debt while they hold it on their balance sheets.

Morgan Stanley, the largest lender on the deal, in January disclosed $356 million in mark-to-market losses on corporate loans it planned to sell and loan hedges. Banks rarely report specific losses tied to an individual bond or loan, and often report write-downs of multiple deals together.

Wall Street banks were saddled with the Twitter buyout loan at the same time they were holding a smattering of other hung bridge loans—deals they were forced to fund themselves after failing to raise cash in public bond and loan markets. The FT has previously reported on large losses tied to other hung loans at the time, including the buyouts of technology company Citrix and television ratings provider Nielsen.

How the debt has been marked on bank balance sheets has been an open question for traders and investors across Wall Street, given how much X’s business has deteriorated since Musk bought the company.

Musk, already out of favor with marketers for loosening content moderation, last month lost more advertisers after endorsing an antisemitic post. In November he followed by telling brands that were boycotting the business over his actions to “go fuck” themselves, criticizing Disney’s Bob Iger in particular.

According to a report last week from market intelligence firm Sensor Tower, in November 2023 total US ad spend among the top 100 advertisers on X was down nearly 45 percent compared with October 2022, prior to Musk’s takeover.

© 2023 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.



Elon Musk’s new AI bot, Grok, causes stir by citing OpenAI usage policy

You are what you eat —

Some experts think xAI used OpenAI model outputs to fine-tune Grok.


Grok, the AI language model created by Elon Musk’s xAI, went into wide release last week, and people have begun spotting glitches. On Friday, security tester Jax Winterbourne tweeted a screenshot of Grok denying a query with the statement, “I’m afraid I cannot fulfill that request, as it goes against OpenAI’s use case policy.” That made ears perk up online since Grok isn’t made by OpenAI—the company responsible for ChatGPT, which Grok is positioned to compete with.

Interestingly, xAI representatives did not deny that this behavior occurs with its AI model. In reply, xAI employee Igor Babuschkin wrote, “The issue here is that the web is full of ChatGPT outputs, so we accidentally picked up some of them when we trained Grok on a large amount of web data. This was a huge surprise to us when we first noticed it. For what it’s worth, the issue is very rare and now that we’re aware of it we’ll make sure that future versions of Grok don’t have this problem. Don’t worry, no OpenAI code was used to make Grok.”

In reply to Babuschkin, Winterbourne wrote, “Thanks for the response. I will say it’s not very rare, and occurs quite frequently when involving code creation. Nonetheless, I’ll let people who specialize in LLM and AI weigh in on this further. I’m merely an observer.”

A screenshot of Jax Winterbourne’s X post about Grok talking like it’s an OpenAI product. (Credit: Jason Winterbourne)

However, Babuschkin’s explanation seems unlikely to some experts because large language models typically do not spit out their training data verbatim, which might be expected if Grok picked up some stray mentions of OpenAI policies here or there on the web. Instead, the concept of denying an output based on OpenAI policies would probably need to be trained into it specifically. And there’s a very good reason why this might have happened: Grok was fine-tuned on output data from OpenAI language models.

“I’m a bit suspicious of the claim that Grok picked this up just because the Internet is full of ChatGPT content,” said AI researcher Simon Willison in an interview with Ars Technica. “I’ve seen plenty of open weights models on Hugging Face that exhibit the same behavior—behave as if they were ChatGPT—but inevitably, those have been fine-tuned on datasets that were generated using the OpenAI APIs, or scraped from ChatGPT itself. I think it’s more likely that Grok was instruction-tuned on datasets that included ChatGPT output than it was a complete accident based on web data.”

As large language models (LLMs) from OpenAI have become more capable, it has become increasingly common for AI projects (especially open source ones) to fine-tune models using synthetic data—training data generated by other language models. Fine-tuning adjusts the behavior of an AI model toward a specific purpose, such as getting better at coding, after an initial training run. For example, in March, a group of researchers from Stanford University made waves with Alpaca, a version of Meta’s LLaMA 7B model that was fine-tuned for instruction-following using outputs from OpenAI’s GPT-3 model called text-davinci-003.

On the web you can easily find several open source datasets collected by researchers from ChatGPT outputs, and it’s possible that xAI used one of these to fine-tune Grok for some specific goal, such as improving instruction-following ability. The practice is so common that there’s even a WikiHow article titled, “How to Use ChatGPT to Create a Dataset.”
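For readers unfamiliar with the mechanics, here is a minimal sketch of what instruction-tuning on such a dataset typically looks like with the Hugging Face libraries. The base model and the synthetic_chat.jsonl file are placeholders for illustration; nothing here reflects xAI’s actual training setup.

```python
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

# Placeholder base model; any causal LM works the same way here.
model_name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # gpt2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

# Hypothetical dataset: JSON lines like
# {"instruction": "...", "response": "..."} collected from another LLM.
dataset = load_dataset("json", data_files="synthetic_chat.jsonl", split="train")

def format_and_tokenize(example):
    # Concatenate prompt and target into one causal-LM training string.
    text = (f"### Instruction:\n{example['instruction']}\n"
            f"### Response:\n{example['response']}")
    return tokenizer(text, truncation=True, max_length=1024)

tokenized = dataset.map(format_and_tokenize,
                        remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="sft-demo", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=tokenized,
    # mlm=False gives the standard next-token (causal) objective.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

If the synthetic responses include refusals like “against OpenAI’s use case policy,” a model tuned this way can learn to reproduce them, which is exactly the behavior Willison describes.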

It’s one of the ways AI tools can be used to build more complex AI tools in the future, much like how people began to use microcomputers to design more complex microprocessors than pen-and-paper drafting would allow. However, in the future, xAI might be able to avoid this kind of scenario by more carefully filtering its training data.

Even though borrowing outputs from others might be common in the machine-learning community (despite it usually being against terms of service), the episode particularly fanned the flames of the rivalry between OpenAI and X that extends back to Elon Musk’s criticism of OpenAI in the past. As news spread of Grok possibly borrowing from OpenAI, the official ChatGPT account wrote, “we have a lot in common” and quoted Winterbourne’s X post. As a comeback, Musk wrote, “Well, son, since you scraped all the data from this platform for your training, you ought to know.”



Meta’s Threads reaches 100M users, despite delayed EU launch

Early this morning, Meta’s new Twitter rival, Threads, hit 100 million sign-ups, less than a week after launch. This makes Threads the fastest-growing online platform in history, dethroning ChatGPT, which took two months to reach the same number of users.

When Mark Zuckerberg commented (posting on Threads, naturally) on hitting 70 million users on Friday last week, he stated this was already “way beyond our expectations.” However, it still has a long way to go to reach the two-billion user base accessible through Instagram.

It is worth noting that the record-breaking growth is despite Threads currently being unavailable on EU app stores due to privacy concerns. Meanwhile, even within the bloc, non-Apple addicts can download the app using an Android package kit, or APK. (UK iPhone users, download at will.)

Some major organisations such as French media outlets Le Monde and Agence France-Presse have reportedly found ways to circumvent geographical challenges to signing up, as have, ahem, we.

The rivalry thus far: cage fights and trade secrets

Naturally, the launch of the new microblogging app from the people who brought us Facebook did not go by without protests from camp Elon. Apart from the cage match challenge (which we are all not-so-secretly hoping might still take place), Musk has threatened to sue Threads over “stealing trade secrets.” 


Meta issued a response (again, posting on Threads) saying: “No one on the Threads engineering team is a former Twitter employee — that’s just not a thing.” However, the launch of Threads could not have been more optimally timed, coinciding with a bunch of missteps from Musk, including limiting the number of tweets users could view in a day.

When Twitter (founded in 2006, for those of you who may have been too young to remember) went public in 2013, it had 200 million users. The takeover by Musk went through in October 2022, sparking some controversy and causing the app to lose about 32 million users.

However, the latest statistics show that Twitter still has 368 million monthly active users worldwide. Furthermore, it counts 206 million monetisable daily users, with about 90% of Twitter’s revenue coming from advertising in 2022. 

Of course, if Threads continues to increase user numbers at this rate, it could indeed become a serious contender for the text-based throne. Initially, there will be no ads on Threads while the company “fine-tunes” the app. However, Zuckerberg has said that once Threads is on its way to one billion users, Meta will start thinking about monetising the newest addition to its portfolio.



Twitter’s withdrawal from disinformation code draws ire of EU politicians


Story by Linnea Ahlgren

Linnea is the senior editor at TNW, having joined in April 2023. She has a background in international relations and covers clean and climate tech, AI and quantum computing. But first, coffee.

Following a decision to pull Twitter out of the EU’s (voluntary) disinformation Code of Practice last week, the reactions have not been long in coming. Upon receiving the news, the bloc’s industry chief Thierry Breton said that Twitter would still need to abide by EU rules soon enough.

Or, as Monsieur Breton put it (tweeted, in fact) when referring to the Digital Services Act (DSA), which will make fighting disinformation a legal obligation from 25 August, “You can run, but you cannot hide.” 

Twitter leaves EU voluntary Code of Practice against disinformation.

But obligations remain. You can run but you can’t hide.

Beyond voluntary commitments, fighting disinformation will be legal obligation under #DSA as of August 25.

Our teams will be ready for enforcement.

— Thierry Breton (@ThierryBreton) May 26, 2023

Commissioner Breton was joined in his vexation today by France’s Digital Minister Jean-Noël Barrot. As reported by Politico, Barrot stated to the radio network France Info that, should Twitter fail to follow the new (and obligatory) rules laid down by the DSA, the company would get kicked out of the European Union. 

“Disinformation is one of the gravest threats weighing on our democracies,” said Barrot, as translated by Politico. “Twitter, if it repeatedly doesn’t follow our rules, will be banned from the EU.” 

First-of-its-kind self-regulatory rules

The code of conduct requires companies to measure their work on combating disinformation and issue regular reports on their progress. This includes things such as demonetising the dissemination of disinformation, ensuring transparency of political advertising, enhancing the cooperation with fact-checkers, and providing researchers with better data.

Google, TikTok, Microsoft, and Meta are all voluntary signatories. Twitter, obviously, was also part of the group up until last week.

There has been no official statement (or tweet, for that matter) on the decision to leave, but it seems Elon Musk has changed his mind since four years ago, when the industry first agreed on the self-regulatory EU rules.

In an interview at the time, he stated that, “I think there should be regulations on social media to the degree that it negatively affects the public good. We can’t have like willy-nilly proliferation of fake news, that’s crazy.”

Blocking accounts at the behest of governments has increased

A $44 billion impulse purchase or not, changes have abounded at Twitter since Elon bought it. More than supplying the accounts of dead people with little blue ticks, it would seem that the new “era of free speech” he proclaimed is highly mutable.

Since Musk’s takeover, Twitter has actually become more compliant with government authority requests, including those of India and Turkey to block journalists, foreign politicians, and even poets. 

Musk has previously stated that he believes free speech to be that “which matches the law.” However, with the recent withdrawal from the disinformation code of conduct, he has demonstrated he is not averse to extracting his recently acquired company from regulations.

By “free speech”, I simply mean that which matches the law.

I am against censorship that goes far beyond the law.

If people want less free speech, they will ask government to pass laws to that effect.

Therefore, going beyond the law is contrary to the will of the people.

— Elon Musk (@elonmusk) April 26, 2022

For once, it is not a tech lord threatening to leave the EU, but rather the bloc intimating that it might kick one out. Let’s see which way the DSA cookie will crumble. 



Surprise: business leaders should be compassionate – here’s the evidence to prove it

In the month after Elon Musk triumphantly announced his takeover of Twitter with his now famous “the bird is freed” tweet, he implemented a large-scale cull of the social media platform’s global workforce. While Musk’s rationale for this move was to make Twitter more efficient, how he carried out the cuts was widely criticised as showing a lack of compassion for employees.

Luckily, the public have spoken and Musk has promised to step down after embarrassingly being voted out in his own poll. But what can we learn from this and what kind of leader does Twitter need moving forward?

Twitter might instead benefit from a more thoughtful and caring approach to leadership. Research shows compassionate leaders boost staff morale and productivity, not to mention projecting a more positive image of an organisation and its brand to the world.

Compassion in this context can be taken to mean a leader who is understanding, empathetic, and who strives to help their employees. This kind of leadership is needed now more than ever. Businesses are facing difficult times because of the lasting effects of the pandemic and the rising cost of living. The UK was already experiencing a slump in productivity growth since the 2008 financial crisis and a decline in the standard of living, which is set to continue over the next two years. Brexit hasn’t helped this situation.

Such testing times warrant organizational leadership by compassionate and competent people with sound judgment and effective coordination skills. This also applies to political leadership. The UK has seen a lack of this in recent months while dealing with “partygate”, reported bullying and harassment in government offices, and the dire effects of recent leadership decisions around the economy.

International leaders aren’t doing much better. The US appears to have become far more polarised, leading to the Capitol riots, and has suffered accusations of a “leadership vacuum” during the pandemic. In the EU, compassionate leadership appears to have been in short supply, based on slow responses to COVID and the energy crisis. All of these examples suggest a need for more compassionate leadership.

What is a good leader?

Research shows that good leadership helps companies to be more competitive and boost performance, particularly concerning innovation and flexibility. One study argues that good leaders win followers because of three main attributes: sound judgment, expertise, and coordination skills. These qualities allow leaders to lead by example.

Unfortunately, however, not all leaders fit this bill. A recent Europe-wide study found 13% of workers have “bad” bosses, although participants tended to score their bosses worse on competence than consideration. Still, poor leadership can negatively affect workers’ morale, wellbeing, and productivity. A review of studies in this area reported that worker wellbeing tends to be better served when companies — and their leaders — allow workers to have some control and provide more opportunities for their voices to be heard and for greater participation in making decisions.

In addition to the competence and coordination skills highlighted in a lot of research to date, my research shows that “soft leadership skills” are also important. This is about being compassionate and making others — employees in particular, but also suppliers and customers — feel important. Leaders with such “people skills” are not just technically competent, they can also look at an issue from a human perspective, thinking about how it might affect people.

My recently published research used nationally representative data from the 2004 and 2011 Workplace Employment Relations Survey, which polls more than 3,000 organizations and over 35,000 workers. They were asked to score their managers on a five-point scale in terms of certain soft leadership skills, chosen to measure the impartiality, trustworthiness and empathy of leaders; a toy scoring sketch follows the list below.

These employees were asked whether their managers:

  • could be relied on to keep their promises
  • were sincere in attempting to understand employees’ views
  • dealt with employees honestly
  • understood that employees had responsibilities outside work
  • encouraged people to develop their skills
  • treated employees fairly
  • and maintained good relations with employees.
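A minimal sketch of how such five-point responses could be rolled into one soft-leadership score. Simple averaging is an assumption made here for illustration; the study’s actual aggregation method may differ.

```python
# Hypothetical respondent's 1-5 scores on the seven items listed above.
items = {
    "keeps_promises": 4,
    "sincere_about_employee_views": 3,
    "deals_honestly": 4,
    "understands_outside_responsibilities": 5,
    "encourages_skill_development": 3,
    "treats_fairly": 4,
    "maintains_good_relations": 4,
}
# Averaging into a single index is an illustrative assumption, not
# necessarily the paper's method.
score = sum(items.values()) / len(items)
print(f"Composite soft-leadership score: {score:.2f} / 5")  # 3.86 / 5
```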

The results suggest that workers’ perception of good quality leadership is also positively affected by managers being upbeat when discussing organisational performance. This kind of leadership boosts workers’ wellbeing, helping employees experience greater job satisfaction and lower levels of job anxiety.

This research suggests that compassionate leaders help to both enhance company performance and boost worker wellbeing. It shows that improving the quality of leadership is worthwhile. This can be achieved with recruitment, appraisal, and training of leaders that elevate soft leadership skills.

Good leaders matter. As organizations and society in general face particularly difficult times, compassionate leadership could make a real difference to future business success.

Getinet Astatike Haile, Associate Professor in Industrial Economics, University of Nottingham

This article is republished from The Conversation under a Creative Commons license. Read the original article.
