

US can’t ban TikTok for security reasons while ignoring Temu, other apps

Andrew J. Pincus, attorney for TikTok and ByteDance, leaves the E. Barrett Prettyman US Court House with members of his legal team as the US Court of Appeals hears oral arguments in the case TikTok Inc. v. Merrick Garland on September 16 in Washington, DC.

The fight to keep TikTok operating unchanged in the US reached an appeals court Monday, where TikTok and US-based creators teamed up to defend one of the world’s most popular apps from a potential US ban.

TikTok lawyer Andrew Pincus kicked things off by warning a three-judge panel that a law targeting foreign adversaries that requires TikTok to divest from its allegedly China-controlled owner, ByteDance, is “unprecedented” and could have “staggering” effects on “the speech of 170 million Americans.”

Pincus argued that the US government was “for the first time in history” attempting to ban speech by a specific US speaker—namely, TikTok US, the US-based entity that allegedly curates the content that Americans see on the app.

The government justified the law by claiming that TikTok may in the future pose a national security risk because updates to the app’s source code occur in China. Essentially, the US is concerned that TikTok collecting data in the US makes it possible for the Chinese government to both spy on Americans and influence Americans by manipulating TikTok content.

But Pincus argued that there’s no evidence of that, only the FBI warning “about the potential that the Chinese Communist Party could use TikTok to threaten US homeland security, censor dissidents, and spread its malign influence on US soil.” And because the law carves out China-owned and controlled e-commerce apps like Temu and Shein—which a US commission deemed a possible danger and allegedly process even more sensitive data than TikTok—the national security justification for targeting TikTok is seemingly so under-inclusive as to be fatal to the government’s argument, Pincus argued.

Jeffrey Fisher, a lawyer for TikTok creators, agreed, warning the panel that “what the Supreme Court tells us when it comes to under-inclusive arguments is” that they “often” are “a signal that something else is at play.”

Daniel Tenny, a lawyer representing the US government, defended Congress’ motivations for passing the law, explaining that the data TikTok collects is “extremely valuable to a foreign adversary trying to compromise the security” of the US. He further argued that a foreign adversary controlling “what content is shown to Americans” is just as problematic.

Rather than targeting Americans’ expression on the app, Tenny argued that because ByteDance controls TikTok’s source code, the speech on TikTok is not American speech but “expression by Chinese engineers in China.” This is the “core point” that the US hopes the appeals court will embrace, that as long as ByteDance oversees TikTok’s source code, the US will have justified concerns about TikTok data security and content manipulation. The only solution, the US government argues, is divestment.

TikTok has long argued that divestment isn’t an option and that the law will force a ban. Pincus told the court that the “critical issue” with the US government’s case is that the US does not have any evidence that TikTok US is under Chinese control. Because the US is only concerned about some “future Chinese control,” the burden that the law places on speech must meet the highest standard of constitutional scrutiny. Any finding otherwise, Pincus warned the court, risked turning the First Amendment “on its head,” potentially allowing the government to point to foreign ownership to justify regulating US speech on any platform.

But as the panel explained, the US government had tried for two years to negotiate with ByteDance and find through Project Texas a way to maintain TikTok in the US while avoiding national security concerns. Because every attempt to find a suitable national security arrangement has seemingly failed, Congress was potentially justified in passing the law, the panel suggested, especially if the court rules that the law is really just trying to address foreign ownership—not regulate content. And even though the law currently only targets TikTok directly, the government could argue that’s seemingly because TikTok is so far the only foreign adversary-controlled company flagged as a potential national security risk, the panel suggested.

TikTok insisted that divestment is not the answer and that Congress has made no effort to find a better solution. Pincus argued that the US did not consider less restrictive means for achieving the law’s objectives without burdening speech on TikTok, such as a disclosure mechanism that could prevent covert influence on the app by a foreign adversary.

But US circuit judge Neomi Rao pushed back on this, suggesting that disclosure may not “always” be an appropriate mechanism to block propaganda in the US—especially when the US government has no way to quickly assess constantly updated TikTok source code developed in China. Pincus had confirmed that any covert content manipulation uncovered on the app would only be discovered after users were exposed.

“They say it would take three years to just review the existing code,” Rao said. “How are you supposed to have disclosure in that circumstance?”

“I think disclosure has been the historic answer for covert content manipulation,” Pincus told the court, branding the current law as “unusual” for targeting TikTok and asking the court to overturn the alleged ban.

The government has given ByteDance until mid-January to sell TikTok, or else the app risks being banned in the US. The appeals court is expected to rule by early December.



Court: Section 230 doesn’t shield TikTok from Blackout Challenge death suit

A dent in the Section 230 shield —

TikTok must face claim over For You Page recommending content that killed kids.


An appeals court has revived a lawsuit against TikTok by reversing a lower court’s ruling that Section 230 immunity shielded the short video app from liability after a child died taking part in a dangerous “Blackout Challenge.”

Several kids died taking part in the “Blackout Challenge,” which Third Circuit Judge Patty Shwartz described in her opinion as encouraging users “to choke themselves with belts, purse strings, or anything similar until passing out.”

Because TikTok promoted the challenge in children’s feeds, Tawainna Anderson was among the mourning parents who attempted to sue TikTok in 2022. Ultimately, she was told that TikTok was not responsible for recommending the video that caused the death of her daughter Nylah.

In her opinion, Shwartz wrote that Section 230 does not bar Anderson from arguing that TikTok’s algorithm amalgamates third-party videos, “which results in ‘an expressive product’ that ‘communicates to users’ [that a] curated stream of videos will be interesting to them.”

The judge cited a recent Supreme Court ruling that “held that a platform’s algorithm that reflects ‘editorial judgments’ about ‘compiling the third-party speech it wants in the way it wants’ is the platform’s own ‘expressive product’ and is therefore protected by the First Amendment,” Shwartz wrote.

Because TikTok’s For You Page (FYP) algorithm decides which third-party speech to include or exclude and organizes content, TikTok’s algorithm counts as TikTok’s own “expressive activity.” That “expressive activity” is not protected by Section 230, which only shields platforms from liability for third-party speech, not platforms’ own speech, Shwartz wrote.

The appeals court has now remanded the case to the district court to rule on Anderson’s remaining claims.

Section 230 doesn’t permit “indifference” to child death

According to Shwartz, if Nylah had discovered the “Blackout Challenge” video by searching on TikTok, the platform would not be liable, but because she found it on her FYP, TikTok transformed into “an affirmative promoter of such content.”

Now TikTok will have to face Anderson’s claims that are “premised upon TikTok’s algorithm,” Shwartz said, as well as potentially other claims that Anderson may re-raise. The District Court will have to determine which of those claims remain barred by Section 230, “consistent” with the Third Circuit’s ruling.

Concurring in part, circuit Judge Paul Matey noted that by the time Nylah took part in the “Blackout Challenge,” TikTok knew about the dangers and “took no and/or completely inadequate action to extinguish and prevent the spread of the Blackout Challenge and specifically to prevent the Blackout Challenge from being shown to children on their” FYPs.

Matey wrote that Section 230 does not shield corporations “from virtually any claim loosely related to content posted by a third party,” as TikTok seems to believe. He encouraged a “far narrower” interpretation of Section 230 to stop companies like TikTok from reading the Communications Decency Act as permitting “casual indifference to the death of a 10-year-old girl.”

“Anderson’s estate may seek relief for TikTok’s knowing distribution and targeted recommendation of videos it knew could be harmful,” Matey wrote. That includes pursuing “claims seeking to hold TikTok liable for continuing to host the Blackout Challenge videos knowing they were causing the death of children” and “claims seeking to hold TikTok liable for its targeted recommendations of videos it knew were harmful.”

“The company may decide to curate the content it serves up to children to emphasize the lowest virtues, the basest tastes,” Matey wrote. “But it cannot claim immunity that Congress did not provide.”

Anderson’s lawyers at Jeffrey Goodman, Saltz Mongeluzzi & Bendesky PC previously provided Ars with a statement after the prior court’s ruling, indicating that parents weren’t prepared to stop fighting in 2022.

“The federal Communications Decency Act was never intended to allow social media companies to send dangerous content to children, and the Andersons will continue advocating for the protection of our children from an industry that exploits youth in the name of profits,” lawyers said.

TikTok did not immediately respond to Ars’ request to comment but previously vowed to “remain vigilant in our commitment to user safety” and “immediately remove” Blackout Challenge content “if found.”



Chinese social media users hilariously mock AI video fails

Life imitates AI imitating life —

TikTok and Bilibili users transform nonsensical AI glitches into real-world performance art.

Still from a Chinese social media video featuring two people imitating imperfect AI-generated video outputs.


It’s no secret that despite significant investment from companies like OpenAI and Runway, AI-generated videos still struggle to achieve convincing realism at times. Some of the most amusing fails end up on social media, which has led to a new response trend on Chinese social media platforms TikTok and Bilibili where users create videos that mock the imperfections of AI-generated content. The trend has since spread to X (formerly Twitter) in the US, where users have been sharing the humorous parodies.

In particular, the videos seem to parody image synthesis videos where subjects seamlessly morph into other people or objects in unexpected and physically impossible ways. Chinese social media users replicate these unusual visual non sequiturs without special effects by positioning their bodies in unusual ways as new and unexpected objects appear on camera from out of frame.

This exaggerated mimicry has struck a chord with viewers on X, who find the parodies entertaining. User @theGioM shared one video, seen above. “This is high-level performance arts,” wrote one X user. “art is imitating life imitating ai, almost shedded a tear.” Another commented, “I feel like it still needs a motorcycle the turns into a speedboat and takes off into the sky. Other than that, excellent work.”

An example Chinese social media video featuring two people imitating imperfect AI-generated video outputs.

While these parodies poke fun at current limitations, tech companies are actively attempting to overcome them with more training data (examples analyzed by AI models that teach them how to create videos) and computational training time. OpenAI unveiled Sora in February, capable of creating realistic scenes if they closely match examples found in training data. Runway’s Gen-3 Alpha suffers a similar fate: It can create brief clips of convincing video within a narrow set of constraints. This means that generated videos of situations outside the dataset often end up hilariously weird.

An AI-generated video that features impossibly morphing people and animals. Social media users are imitating this style.

It’s worth noting that actor Will Smith beat Chinese social media users to this trend in February by poking fun at a horrific 2023 viral AI-generated video that attempted to depict him eating spaghetti. That may also bring back memories of other amusing video synthesis failures, such as May 2023’s AI-generated beer commercial, created using Runway’s earlier Gen-2 model.

An example Chinese social media video featuring two people imitating imperfect AI-generated video outputs.

While imitating imperfect AI videos may seem strange to some, people regularly make money pretending to be NPCs (non-player characters—a term for computer-controlled video game characters) on TikTok.

For anyone alive during the 1980s, witnessing this fast-changing and often bizarre new media world can cause some cognitive whiplash, but the world is a weird place full of wonders beyond the imagination. “There are more things in Heaven and Earth, Horatio, than are dreamt of in your philosophy,” as Hamlet once famously said. “Including people pretending to be video game characters and flawed video synthesis outputs.”



Push alerts from TikTok include fake news, expired tsunami warning

Broken —

News-style notifications include false claims about Taylor Swift, other misleading info.

Illustration showing a phone with the TikTok logo. FT montage/Getty Images

TikTok has been sending inaccurate and misleading news-style alerts to users’ phones, including a false claim about Taylor Swift and a weeks-old disaster warning, intensifying fears about the spread of misinformation on the popular video-sharing platform.

Among alerts seen by the Financial Times was a warning about a tsunami in Japan, labeled “BREAKING,” that was posted in late January, three weeks after an earthquake had struck.

Other notifications falsely stated that “Taylor Swift Canceled All Tour Dates in What She Called ‘Racist Florida’” and highlighted a five-year “ban” for a US baseball player that originated as an April Fools’ Day prank.

The notifications, which sometimes contain summaries from user-generated posts, pop up on screen in the style of a news alert. Researchers say that format, adopted widely to boost engagement through personalized video recommendations, may make users less critical of the veracity of the content and open them up to misinformation.

“Notifications have this additional stamp of authority,” said Laura Edelson, a researcher at Northeastern University, in Boston. “When you get a notification about something, it’s often assumed to be something that has been curated by the platform and not just a random thing from your feed.”

Social media groups such as TikTok, X, and Meta are facing greater scrutiny to police their platforms, particularly in a year of major national elections, including November’s vote in the US. The rise of artificial intelligence adds to the pressure given that the fast-evolving technology makes it quicker and easier to spread misinformation, including through synthetic media, known as deepfakes.

TikTok, which has more than 1 billion global users, has repeatedly promised to step up its efforts to counter misinformation in response to pressure from governments around the world, including the UK and EU. In May, the video-sharing platform committed to becoming the first major social media network to label some AI-generated content automatically.

The false claim about Swift canceling her tour in Florida, which also circulated on X, mirrored an article published in May in the satirical newspaper The Dunning-Kruger Times, although this article was not linked or directly referred to in the TikTok post.

At least 20 people said on a comment thread that they had clicked on the notification and were directed to a video on TikTok repeating the claim, even though they did not follow the account. At least one person in the thread said they initially thought the notification “was a news article.”

Swift is still scheduled to perform three concerts in Miami in October and has not publicly called Florida “racist.”

Another push notification inaccurately stated that a Japanese pitcher who plays for the Los Angeles Dodgers faced a ban from Major League Baseball: “Shohei Ohtani has been BANNED from the MLB for 5 years following his gambling investigation… ”

The words directly matched the description of a post uploaded as an April Fools’ Day prank. Tens of commenters on the original video, however, reported receiving alerts in mid-April. Several said they had initially believed it before they checked other sources.

Users have also reported notifications that appeared to contain news updates but were generated weeks after the event.

One user received an alert on January 23 that read: “BREAKING: A tsunami alert has been issued in Japan after a major earthquake.” The notification appeared to refer to a natural disaster warning issued more than three weeks earlier after an earthquake struck Japan’s Noto peninsula on New Year’s Day.

TikTok said it had removed the specific notifications flagged by the FT.

The alerts appear to automatically scrape the descriptions of posts that are receiving, or are likely to receive, high levels of engagement on the viral video app, owned by China’s ByteDance, researchers said. They seem to be tailored to users’ interests, which means that each one is likely to be limited to a small pool of people.

“The way in which those alerts are positioned, it can feel like the platform is speaking directly to [users] and not just a poster,” said Kaitlyn Regehr, an associate professor of digital humanities at University College London.

TikTok declined to reveal how the app determined which videos to promote through notifications, but the sheer volume of personalized content recommendations must be “algorithmically generated,” said Dani Madrid-Morales, co-lead of the University of Sheffield’s Disinformation Research Cluster.

Edelson, who is also co-director of the Cybersecurity for Democracy group, suggested that a responsible push notification algorithm could be weighted towards trusted sources, such as verified publishers or officials. “The question is: Are they choosing a high-traffic thing from an authoritative source?” she said. “Or is this just a high-traffic thing?”

Additional reporting by Hannah Murphy in San Francisco and Cristina Criddle in London.

© 2024 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.



DOJ sues TikTok, alleging “massive-scale invasions of children’s privacy”


The US Department of Justice sued TikTok today, accusing the short-video platform of illegally collecting data on millions of kids and demanding a permanent injunction “to put an end to TikTok’s unlawful massive-scale invasions of children’s privacy.”

The DOJ said that TikTok had violated the Children’s Online Privacy Protection Act of 1998 (COPPA) and the Children’s Online Privacy Protection Rule (COPPA Rule), claiming that TikTok allowed kids “to create and access accounts without their parents’ knowledge or consent,” collected “data from those children,” and failed to “comply with parents’ requests to delete their children’s accounts and information.”

The COPPA Rule requires TikTok to prove that it does not target kids as its primary audience, the DOJ said, and TikTok claims to satisfy that “by requiring users creating accounts to report their birthdates.”

However, even if a child inputs their real birthdate, the DOJ said, TikTok does nothing to stop them from restarting the process and using a fake birthdate. Dodging TikTok’s age gate has been easy for millions of kids, the DOJ alleged, and TikTok knows it, collecting their information anyway and failing to delete it even when child users “identify themselves as children.”

“The precise magnitude” of TikTok’s violations “is difficult to determine,” the DOJ’s complaint said. But TikTok’s “internal analyses show that millions of TikTok’s US users are children under the age of 13.”

“For example, the number of US TikTok users that Defendants classified as age 14 or younger in 2020 was millions higher than the US Census Bureau’s estimate of the total number of 13- and 14-year-olds in the United States, suggesting that many of those users were children younger than 13,” the DOJ said.

TikTok seemingly risks huge fines if the DOJ proves its case. The DOJ has asked a jury to agree that damages are owed for each “collection, use, or disclosure of a child’s personal information” that violates the COPPA Rule, with likely multiple violations spanning millions of children’s accounts. And any recent violations could cost more, as the DOJ noted that the FTC Act authorizes civil penalties up to $51,744 “for each violation of the Rule assessed after January 10, 2024.”

A TikTok spokesperson told Ars that TikTok plans to fight the lawsuit, which is part of the US’s ongoing battle with the app. Currently, TikTok is fighting a nationwide ban that was passed this year, due to growing political tensions with its China-based owner and lawmakers’ concerns over TikTok’s data collection and alleged repeated spying on Americans.

“We disagree with these allegations, many of which relate to past events and practices that are factually inaccurate or have been addressed,” TikTok’s spokesperson told Ars. “We are proud of our efforts to protect children, and we will continue to update and improve the platform. To that end, we offer age-appropriate experiences with stringent safeguards, proactively remove suspected underage users, and have voluntarily launched features such as default screentime limits, Family Pairing, and additional privacy protections for minors.”

The DOJ seems to think damages are owed for past as well as possibly current violations. It claimed that TikTok already has more sophisticated ways to identify the ages of child users for ad-targeting but doesn’t use the same technology to block underage sign-ups because TikTok is allegedly unwilling to dedicate resources to widely police kids on its platform.

“By adhering to these deficient policies, Defendants actively avoid deleting the accounts of users they know to be children,” the DOJ alleged, claiming that “internal communications reveal that Defendants’ employees were aware of this issue.”



First-known TikTok mob attack led by middle schoolers tormenting teachers


A bunch of eighth graders in a “wealthy Philadelphia suburb” recently targeted teachers with an extreme online harassment campaign that The New York Times reported was “the first known group TikTok attack of its kind by middle schoolers on their teachers in the United States.”

According to The Times, the Great Valley Middle School students created at least 22 fake accounts impersonating about 20 teachers in offensive ways. The fake accounts portrayed long-time, dedicated teachers sharing “pedophilia innuendo, racist memes,” and homophobic posts, as well as posts fabricating “sexual hookups among teachers.”

The Pennsylvania middle school’s principal, Edward Souders, told parents in an email that the number of students creating the fake accounts was likely “small,” but that hundreds of students piled on, leaving comments and following the fake accounts. Other students responsibly rushed to report the misconduct, though, Souders said.

“I applaud the vast number of our students who have had the courage to come forward and report this behavior,” Souders said, urging parents to “please take the time to engage your child in a conversation about the responsible use of social media and encourage them to report any instances of online impersonation or cyberbullying.”

Some students claimed that the group attack was a joke that went too far. Certain accounts impersonating teachers made benign posts, The Times reported, but other accounts risked harming respected teachers’ reputations. When creating fake accounts, students sometimes used family photos that teachers had brought into their classrooms or scoured the Internet for photos shared online.

Following The Times’ reporting, the superintendent of the Great Valley School District (GVSD), Daniel Goffredo, posted a message to the community describing the impact on teachers as “profound.” One teacher told The Times that she felt “kicked in the stomach” by the students’ “savage” behavior, while another accused students of slander and character assassination. Both were portrayed in fake posts with pedophilia innuendo.

“I implore you also to use the summer to have conversations with your children about the responsible use of technology, especially social media,” Goffredo said. “What seemingly feels like a joke has deep and long-lasting impacts, not just for the targeted person but for the students themselves. Our best defense is a collaborative one.”

Goffredo confirmed that the school district had explored legal responses to the group attack. But ultimately the district found that they were “limited” because “courts generally protect students’ rights to off-campus free speech, including parodying or disparaging educators online—unless the students’ posts threaten others or disrupt school,” The Times reported.

Instead, the middle school “briefly suspended several students,” teachers told The Times, and held an eighth-grade assembly to raise awareness of the harms of cyberbullying, inviting parents to join.

Becky Pringle, the president of the National Education Association—the largest US teachers’ union—told The Times that teachers have never dealt with harassment on this scale. Typically, The Times reported, students would target a single educator at a time. Pringle said teachers risk seeing such online harassment become increasingly normalized, which “could push educators to question” staying in the profession, she said, at a time when the US Department of Education is already combating a teacher shortage.

While Goffredo said teachers had few options to fight back, he also told parents in an email that the district is “committed to working with law enforcement to support teachers who may pursue legal action.”

“I reiterate my disappointment and sadness that our students’ behavior has caused such duress for our staff,” Goffredo’s message to the community said. “Seeing GVSD in such a prominent place in the news for behavior like this is also disheartening.”



TikTok vaguely disputes report that it’s making a US-only app

Exploring a different route —

TikTok has spent months separating code for US-only algorithm, insiders claim.


TikTok is now disputing a Reuters report claiming that the short-video app is cloning its algorithm to potentially offer a different version of the app just for US users—one that might degrade over time.

Sources “with direct knowledge” of the project—granted anonymity because they’re not authorized to discuss it publicly—told Reuters that the TikTok effort began late last year. They said that the project will likely take a year to complete, requiring hundreds of engineers to separate millions of lines of code.

As these sources reported, TikTok’s tremendous undertaking could potentially help prepare its China-based owner ByteDance to appease US lawmakers who passed a law in April forcing TikTok to sell its US-based operations by January 19 or face a ban. But TikTok has maintained that the “qualified divestiture” required by the law would be impossible, and on Thursday, TikTok denied the accuracy of Reuters’ report while reiterating its stance that a sale is not in the cards.

“The Reuters story published today is misleading and factually inaccurate,” the TikTok Policy account posted on X (formerly Twitter). “As we said in our court filing, the ‘qualified divestiture’ demanded by the Act to allow TikTok to continue operating in the United States is simply not possible: not commercially, not technologically, not legally. And certainly not on the 270-day timeline required by the Act.”

It remains unclear precisely which parts of Reuters’ report are supposedly “misleading and factually inaccurate.” A Reuters spokesperson said that Reuters stands by its reporting.

A TikTok spokesperson told Ars that “while we have continued work in good faith to further safeguard the authenticity of the TikTok experience, it is simply false to suggest that this work would facilitate divestiture or that divestiture is even a possibility.”

TikTok is currently suing to block the US law on First Amendment grounds, and this week a court fast-tracked that legal challenge to attempt to resolve the matter before the law takes effect next year. Oral arguments are scheduled to start this September, with a ruling expected by December 6—a timeline that Reuters reported leaves room for a potential Supreme Court challenge to that ruling.

However, in the meantime, sources told Reuters that TikTok is seemingly exploring all its options to avoid running afoul of the US law, including separating its code base and even once considering open-sourcing parts of its algorithm to increase transparency.

Separating out the code is not an easy task, insiders told Reuters.

“Compliance and legal issues involved with determining what parts of the code can be carried over to TikTok are complicating the work,” one source told Reuters. “Each line of code has to be reviewed to determine if it can go into the separate code base.”

But creating a US-only content-recommendation algorithm could be worth it: it would allow TikTok US to operate independently from the China-based TikTok app, potentially satisfying lawmakers worried about the Chinese government spying on Americans through the app.

However, there’s no guarantee that the US-only version of TikTok’s content-recommendation algorithm will perform as well as the currently available app in the US, sources told Reuters. By potentially cutting off TikTok US from the Chinese engineers who developed and maintain the algorithm, US users could miss out on code updates improving or maintaining desired functionality. That means TikTok’s popular For You Page recommendations may suffer, as “TikTok US may not be able to deliver the same level of performance as the existing TikTok,” sources told Reuters.

TikTok vaguely disputes report that it’s making a US-only app


Bumble apologizes for ads shaming women into sex


For the past decade, the dating app Bumble has claimed to be all about empowering women. But under a new CEO, Lidiane Jones, Bumble is now apologizing for a tone-deaf ad campaign that many users said seemed to channel incel ideology by telling women to stop denying sex.

“You know full well a vow of celibacy is not the answer,” one Bumble billboard seen in Los Angeles read. “Thou shalt not give up on dating and become a nun,” read another.

Bumble HQ

“We don’t have enough women on the app.”

“They’d rather be alone than deal with men.”

“Should we teach men to be better?”

“No, we should shame women so they come back to the app.”

“Yes! Let’s make them feel bad for choosing celibacy. Great idea!”

— Arghavan Salles, MD, PhD (@arghavan_salles) May 14, 2024

Bumble intended these ads to bring “joy and humor,” the company said in an apology posted on Instagram after the backlash on social media began.

Some users threatened to delete their accounts, criticizing Bumble for ignoring religious or personal reasons for choosing celibacy. These reasons include identifying as asexual or choosing to abstain from sex amid diminishing access to abortion nationwide.

Others accused Bumble of more shameful motives. On X (formerly Twitter), a user called UjuAnya posted that “Bumble’s main business model is selling men access to women,” since market analysts have reported that 76 percent of Bumble users are male.

“Bumble won’t alienate their primary customers (men) telling them to quit being shit,” UjuAnya posted on X. “They’ll run ads like this to make their product (women) ‘better’ and more available on their app for men.”

That account quote-tweeted an even more popular post with nearly 3 million views suggesting that Bumble needs to “fuck off and stop trying to shame women into coming back to the apps” instead of running “ads targeted at men telling them to be normal.”

One TikTok user, ItsNeetie, declared, “the Bumble reckoning is finally here.”

Bumble did not respond to Ars’ request for comment on these criticisms or to verify the user statistics.

In its apology, Bumble took responsibility for not living up to its “values” of “passionately” standing up for women and marginalized communities and defending “their right to fully exercise personal choice.” Admitting the ads were a “mistake” that “unintentionally” frustrated the dating community, the dating app responded to some of the user feedback:

Some of the perspectives we heard were from those who shared that celibacy is the only answer when reproductive rights are continuously restricted; from others for whom celibacy is a choice, one that we respect; and from the asexual community, for whom celibacy can have a particular meaning and importance, which should not be diminished. We are also aware that for many, celibacy may be brought on by harm or trauma.

Bumble’s pulled ads were part of a larger marketing campaign that at first seemed to resonate with its users. Created by the company’s in-house creative studio, according to AdAge, Bumble’s campaign attracted a lot of eyeballs by deleting Bumble’s entire Instagram feed and posting “cryptic messages” showing tired women in Renaissance-era paintings that alluded to the app’s rebrand.

In a press release, chief marketing officer Selby Drummond said that Bumble “wanted to take a fun, bold approach in celebrating the first chapter of our app’s evolution and remind women that our platform has been solving for their needs from the start.”

The dating app is increasingly investing in ads, AdAge reported, tripling investments from $8 million in 2022 to $24 million in 2023. These ads are seemingly meant to help Bumble recover after posting “a $1.9 million net loss last year,” CNN reported, following a dismal drop in its share price by 86 percent since its initial public offering in February 2021.

Bumble’s new CEO Jones told NBC News that younger users are dating less and that Bumble’s plan was to listen to users to find new ways to grow.



Claims of TikTok whistleblower may not add up


The United States government is currently poised to outlaw TikTok. Little of the evidence that convinced Congress the app may be a national security threat has been shared publicly, in some cases because it remains classified. But one former TikTok employee turned whistleblower, who claims to have driven key news reporting and congressional concerns about the app, has now come forward.

Zen Goziker worked at TikTok as a risk manager, a role that involved protecting the company from external security and reputational threats. In a wrongful termination lawsuit filed against TikTok’s parent company ByteDance in January, he alleges he was fired in February 2022 for refusing “to sign off” on Project Texas, a $1.5 billion program that TikTok designed to assuage US government security concerns by storing American data on servers managed by Oracle.

Goziker worked at TikTok for only six months. He didn’t hold a senior position inside the company. His lawsuit, and a second one he filed in March against several US government agencies, makes a number of improbable claims. He asserts that he was put under 24-hour surveillance by TikTok and the FBI while working remotely in Mexico. He claims that US attorney general Merrick Garland, director of national intelligence Avril Haines, and other top officials “wickedly instigated” his firing. And he states that the FBI helped the CIA share his private information with foreign governments. The suits do not appear to include evidence for any of these claims.

“This lawsuit is full of outrageous claims that lack merit and comes from an individual who significantly exaggerates his role with a company he worked at for merely six months,” TikTok spokesperson Michael Hughes said in a statement.

Yet court records and emails viewed by WIRED suggest that when Goziker raised the alarm about his ex-employer’s links to China, he found a ready audience. After he was fired, Goziker says he began meeting with elected officials, law enforcement agencies, and journalists to allege that, court documents say, he had discovered proof that TikTok’s software could send US data to Toutiao, a ByteDance app in China. That claim directly conflicted with TikTok executives’ assertions that the two companies operated separately.

Goziker says in court filings that what he saw made it necessary to reassess Project Texas. He also alleges that his account of the internal connection to China formed the basis of an influential Washington Post story published in March last year, which said the concerns came from “a former risk manager at TikTok.”

TikTok officials were quoted in that article as saying the allegations were “unfounded,” and that the employee had discovered “nothing more than a naming convention and technical relic.” The Washington Post said it does not comment on sourcing.

“I am free, I am honest, and I am doing this only because I am an American and because USA desperately need help and I cannot keep this truth away from PUBLIC,” Goziker said in an email to WIRED.

His March lawsuit alleging US officials conspired with TikTok to have him fired was filed against Garland, Haines, Secretary of Homeland Security Alejandro Mayorkas, and the agencies they work for.

“Goziker’s main point is that the executives in the American company TikTok Inc. and certain executives from the American federal government have colluded to organize a fraud scheme,” Sean Jiang, Goziker’s lawyer in the case against the US government, told WIRED in an email. The lawsuits do not appear to contain evidence of such a scheme. The Department of Homeland Security and Office of the Director of National Intelligence did not respond to requests for comment. The Department of Justice declined to comment.

Jiang calls the House’s recent passage of a bill that could force ByteDance to sell off TikTok “problematic,” because it “blames ByteDance instead of TikTok Inc for the wrongdoings of the American executives.” He says Goziker would prefer to see TikTok subjected to audits and a new corporate structure.



ByteDance unlikely to sell TikTok, as former Trump official plots purchase



Former US Treasury Secretary Steven Mnuchin is reportedly assembling an investor group to buy TikTok as the US comes closer to enacting legislation forcing the company to either divest from Chinese ownership or face a nationwide ban.

“I think the legislation should pass, and I think it should be sold,” Mnuchin told CNBC Thursday. “It’s a great business, and I’m going to put together a group to buy TikTok.”

Mnuchin currently leads Liberty Strategic Capital, which describes itself as “a Washington DC-based private equity firm focused on investing in dynamic global technology companies.”

According to CNBC, there is already “common ground between Liberty and ByteDance,” as Softbank—which invested in ByteDance in 2018—partnered with Liberty in 2021, contributing what Financial Times reported was an unknown amount to Mnuchin’s $2.5 billion private equity fund.

TikTok has made no indication that it would consider a sale should the legislation be enacted. Instead, TikTok CEO Shou Zi Chew is continuing to rally TikTok users to oppose the legislation. In a TikTok post viewed by 3.8 million users, the CEO described yesterday’s vote passing the law in the US House of Representatives as “disappointing.”

“This legislation, if signed into law, WILL lead to a ban of TikTok in the United States,” Chew said, seeming to suggest that he does not consider a sale an option.

But Mnuchin expects that TikTok may be forced to divest, as the US remains an increasingly significant market for the company. If so, he plans to be ready to snatch up the popular app, which TikTok estimates boasts 170 million American monthly active users.

“This should be owned by US businesses,” Mnuchin told CNBC. “There’s no way that the Chinese would ever let a US company own something like this in China.”

Chinese foreign ministry spokesperson Wang Wenbin has said that a TikTok ban in the US would hurt the US, while little evidence backs up the supposed national security threat that lawmakers claim is urgent to address, the BBC reported. Wang has accused the US of “bullying behavior that cannot win in fair competition.” This behavior, Wang said, “disrupts companies’ normal business activity, damages the confidence of international investors in the investment environment, and damages the normal international economic and trade order.”

Liberty and Mnuchin were not immediately available to comment on whether investors have shown any serious interest so far.

However, according to the Los Angeles Times, Mnuchin has already approached a “bunch of people” to consider investing. Mnuchin told CNBC that TikTok’s technology would be the driving force behind wooing various investors.

“It would be a combination of investors, so there would be no one investor that controls this,” Mnuchin told CNBC. “The issue is all about the technology. This needs to be controlled by US businesses.”

Mnuchin’s group would likely face competition to buy TikTok. ByteDance—which PitchBook data indicates was valued at $223.5 billion in 2023—should also expect an offer from former Activision Blizzard CEO Bobby Kotick, The Wall Street Journal reported.

It’s unclear how valuable TikTok is to ByteDance, CNBC reported, and Mnuchin has not specified what potential valuation his group would anticipate. But if TikTok’s algorithm—which was developed in China—is part of the sale, the price would likely be higher than if ByteDance refused to sell the tech fueling the social media app’s rapid rise to popularity.

In 2020, ByteDance weighed various ownership options while facing a potential US ban under the Trump administration, The New York Times reported. Mnuchin served as Secretary of the Treasury at that time. Although ByteDance ended up partnering with Oracle to protect American TikTok users’ data instead, people briefed on ByteDance’s discussions then confirmed that ByteDance was considering carving out TikTok, potentially allowing the company to “receive new investments from existing ByteDance investors.”

The Information provided a breakdown of the most likely investors to be considered by ByteDance back in 2020. Under that plan, though, ByteDance intended to retain a minority holding rather than completely divesting ownership, the Times reported.



Bill that could ban TikTok passes in House despite constitutional concerns


On Wednesday, the US House of Representatives passed a bill with a vote of 352–65 that could block TikTok in the US. Fifteen Republicans and 50 Democrats voted in opposition, and one Democrat voted present, CNN reported.

TikTok is not happy. A spokesperson told Ars, “This process was secret and the bill was jammed through for one reason: it’s a ban. We are hopeful that the Senate will consider the facts, listen to their constituents, and realize the impact on the economy, 7 million small businesses, and the 170 million Americans who use our service.”

Lawmakers insist that the Protecting Americans from Foreign Adversary Controlled Applications Act is not a ban. Instead, they claim the law gives TikTok a choice: either divest from ByteDance’s China-based owners or face the consequences of TikTok being cut off in the US.

Under the law—which still must pass the Senate, a more significant hurdle, where less consensus is expected and a companion bill has not yet been introduced—app stores and hosting services would face steep consequences if they provide access to apps controlled by US foreign rivals. That includes allowing the app to be updated or maintained for US users who already have the app on their devices.

Violations subject app stores and hosting services to fines of $5,000 for each individual US user “determined to have accessed, maintained, or updated a foreign adversary-controlled application.” With 170 million Americans currently on TikTok, that could add up quickly to eye-popping fines.

If the bill becomes law, app stores and hosting services would have 180 days to limit access to foreign adversary-controlled apps. The bill specifically names TikTok and ByteDance as restricted apps, making it clear that lawmakers intend to quash the alleged “national security threat” that TikTok poses in the US.

House Energy and Commerce Committee Chair Cathy McMorris Rodgers (R-Wash.), a proponent of the bill, has said that “foreign adversaries like China pose the greatest national security threat of our time. With applications like TikTok, these countries are able to target, surveil, and manipulate Americans.” The proposed bill “ends this practice by banning applications controlled by foreign adversaries of the United States that pose a clear national security risk.”

McMorris Rodgers has also made it clear that “our goal is to get this legislation onto the president’s desk.” Joe Biden has indicated he will sign the bill into law, leaving the Senate as the final hurdle to clear. Senators told CNN that they were waiting to see what happened in the House before seeking a path forward in the Senate that would respect TikTok users’ civil liberties.

Attempts to ban TikTok have historically not fared well in the US, with a recent ban in Montana being reversed by a federal judge last December. Judge Donald Molloy granted TikTok’s request for a preliminary injunction, denouncing Montana’s ban as an unconstitutional infringement of Montana-based TikTok users’ rights.

More recently, the American Civil Liberties Union (ACLU) has slammed House lawmakers for rushing the bill through Congress, accusing lawmakers of attempting to stifle free speech. ACLU senior policy counsel Jenna Leventoff said in a press release that lawmakers were “once again attempting to trade our First Amendment rights for cheap political points during an election year.”

“Just because the bill sponsors claim that banning TikTok isn’t about suppressing speech, there’s no denying that it would do just that,” Leventoff said.



EU accuses TikTok of failing to stop kids pretending to be adults


TikTok becomes the second platform suspected of Digital Services Act breaches.


The European Commission (EC) is concerned that TikTok isn’t doing enough to protect kids, alleging that the short-video app may be sending kids down rabbit holes of harmful content while making it easy for kids to pretend to be adults and avoid the protective content filters that do exist.

The allegations came Monday when the EC announced a formal investigation into how TikTok may be breaching the Digital Services Act (DSA) “in areas linked to the protection of minors, advertising transparency, data access for researchers, as well as the risk management of addictive design and harmful content.”

“We must spare no effort to protect our children,” Thierry Breton, European Commissioner for Internal Market, said in the press release, reiterating that the “protection of minors is a top enforcement priority for the DSA.”

This makes TikTok the second platform investigated for possible DSA breaches after X (aka Twitter) came under fire last December. Both are being scrutinized after submitting transparency reports in September that the EC said failed to satisfy the DSA’s strict standards, citing predictable shortcomings such as insufficient advertising transparency and inadequate data access for researchers.

But while X is additionally being investigated over alleged dark patterns and disinformation—following accusations last October that X wasn’t stopping the spread of Israel/Hamas disinformation—it’s TikTok’s young user base that appears to be the focus of the EC’s probe into its platform.

“As a platform that reaches millions of children and teenagers, TikTok must fully comply with the DSA and has a particular role to play in the protection of minors online,” Breton said. “We are launching this formal infringement proceeding today to ensure that proportionate action is taken to protect the physical and emotional well-being of young Europeans.”

Over the coming months, the EC will likely request more information from TikTok, picking apart its DSA transparency report. The probe could require interviews with TikTok staff or inspections of TikTok’s offices.

Upon concluding its investigation, the EC could require TikTok to take interim measures to fix any issues that are flagged. The Commission could also make a decision regarding non-compliance, potentially subjecting TikTok to fines of up to 6 percent of its global turnover.

An EC press officer, Thomas Regnier, told Ars that the Commission suspected that TikTok “has not diligently conducted” risk assessments to properly maintain mitigation efforts protecting “the physical and mental well-being of their users, and the rights of the child.”

In particular, its algorithm may risk “stimulating addictive behavior,” and its recommender systems “might drag its users, in particular minors and vulnerable users, into a so-called ‘rabbit hole’ of repetitive harmful content,” Regnier told Ars. Further, TikTok’s age verification system may be subpar, with the EU alleging that TikTok perhaps “failed to diligently assess the risk of 13-17-year-olds pretending to be adults when accessing TikTok,” Regnier said.

To better protect TikTok’s young users, the EU’s investigation could force TikTok to update its age-verification system and overhaul its default privacy, safety, and security settings for minors.

“In particular, the Commission suspects that the default settings of TikTok’s recommender systems do not ensure a high level of privacy, security, and safety of minors,” Regnier said. “The Commission also suspects that the default privacy settings that TikTok has for 16-17-year-olds are not the highest by default, which would not be compliant with the DSA, and that push notifications are, by default, not switched off for minors, which could negatively impact children’s safety.”

TikTok could avoid steep fines by committing to remedies recommended by the EC at the conclusion of its investigation.

Regnier told Ars that the EC does not comment on ongoing investigations, but its probe into X has spanned three months so far. Because the DSA does not provide any deadlines that may speed up these kinds of enforcement proceedings, ultimately, the duration of both investigations will depend on how much “the company concerned cooperates,” the EU’s press release said.

A TikTok spokesperson told Ars that TikTok “would continue to work with experts and the industry to keep young people on its platform safe,” confirming that the company “looked forward to explaining this work in detail to the European Commission.”

“TikTok has pioneered features and settings to protect teens and keep under-13s off the platform, issues the whole industry is grappling with,” TikTok’s spokesperson said.

All online platforms are now required to comply with the DSA, but enforcement on TikTok began near the end of July 2023. A TikTok press release last August promised that the platform would be “embracing” the DSA. But in its transparency report, submitted the next month, TikTok acknowledged that the report only covered “one month of metrics” and may not satisfy DSA standards.

“We still have more work to do,” TikTok’s report said, promising that “we are working hard to address these points ahead of our next DSA transparency report.”
