tiktok

Bill that could ban TikTok passes in House despite constitutional concerns

On Wednesday, the US House of Representatives passed a bill with a vote of 352–65 that could block TikTok in the US. Fifteen Republicans and 50 Democrats voted in opposition, and one Democrat voted present, CNN reported.

TikTok is not happy. A spokesperson told Ars, “This process was secret and the bill was jammed through for one reason: it’s a ban. We are hopeful that the Senate will consider the facts, listen to their constituents, and realize the impact on the economy, 7 million small businesses, and the 170 million Americans who use our service.”

Lawmakers insist that the Protecting Americans from Foreign Adversary Controlled Applications Act is not a ban. Instead, they claim the law gives TikTok a choice: either split from its China-based parent company, ByteDance, or face the consequences of being cut off in the US.

Under the law—which still must pass the Senate, a more significant hurdle where less consensus is expected and no companion bill has yet been introduced—app stores and hosting services would face steep consequences if they provide access to apps controlled by US foreign adversaries. That includes distributing updates or providing maintenance to US users who already have the app on their devices.

Violations subject app stores and hosting services to fines of $5,000 for each individual US user “determined to have accessed, maintained, or updated a foreign adversary-controlled application.” With 170 million Americans currently on TikTok, that could add up quickly to eye-popping fines.
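For a rough sense of scale, here is a back-of-the-envelope sketch (a hypothetical worst case only, assuming the $5,000 penalty were applied to every one of the 170 million US users, which no real enforcement action would necessarily do):

# Hypothetical worst-case arithmetic based on the bill's penalty figure;
# actual fines would depend on how many users a violation is determined to cover.
fine_per_user = 5_000           # dollars per affected US user, per the bill
us_tiktok_users = 170_000_000   # TikTok's reported US user base

theoretical_maximum = fine_per_user * us_tiktok_users
print(f"${theoretical_maximum:,}")  # $850,000,000,000, i.e., $850 billion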

If the bill becomes law, app stores and hosting services would have 180 days to limit access to foreign adversary-controlled apps. The bill specifically names TikTok and ByteDance as restricted apps, making it clear that lawmakers intend to quash the alleged “national security threat” that TikTok poses in the US.

House Energy and Commerce Committee Chair Cathy McMorris Rodgers (R-Wash.), a proponent of the bill, has said that “foreign adversaries like China pose the greatest national security threat of our time. With applications like TikTok, these countries are able to target, surveil, and manipulate Americans.” The proposed bill “ends this practice by banning applications controlled by foreign adversaries of the United States that pose a clear national security risk.”

McMorris Rodgers has also made it clear that “our goal is to get this legislation onto the president’s desk.” Joe Biden has indicated he will sign the bill into law, leaving the Senate as the final hurdle to clear. Senators told CNN that they were waiting to see what happened in the House before seeking a path forward in the Senate that would respect TikTok users’ civil liberties.

Attempts to ban TikTok have historically not fared well in the US; most recently, a statewide ban in Montana was blocked by a federal judge last December before it could take effect. Judge Donald Molloy granted TikTok’s request for a preliminary injunction, denouncing Montana’s ban as an unconstitutional infringement of Montana-based TikTok users’ rights.

More recently, the American Civil Liberties Union (ACLU) has slammed House lawmakers for rushing the bill through Congress, accusing lawmakers of attempting to stifle free speech. ACLU senior policy counsel Jenna Leventoff said in a press release that lawmakers were “once again attempting to trade our First Amendment rights for cheap political points during an election year.”

“Just because the bill sponsors claim that banning TikTok isn’t about suppressing speech, there’s no denying that it would do just that,” Leventoff said.

EU accuses TikTok of failing to stop kids pretending to be adults

Getting TikTok’s priorities straight —

TikTok becomes the second platform suspected of Digital Services Act breaches.

The European Commission (EC) is concerned that TikTok isn’t doing enough to protect kids, alleging that the short-video app may send young users down rabbit holes of harmful content while making it easy for minors to pretend to be adults and sidestep the protective content filters that do exist.

The allegations came Monday when the EC announced a formal investigation into how TikTok may be breaching the Digital Services Act (DSA) “in areas linked to the protection of minors, advertising transparency, data access for researchers, as well as the risk management of addictive design and harmful content.”

“We must spare no effort to protect our children,” Thierry Breton, European Commissioner for Internal Market, said in the press release, reiterating that the “protection of minors is a top enforcement priority for the DSA.”

This makes TikTok the second platform investigated for possible DSA breaches after X (aka Twitter) came under fire last December. Both are being scrutinized after submitting transparency reports in September that the EC said failed to satisfy the DSA’s strict standards, with predictable shortcomings such as insufficient advertising transparency and inadequate data access for researchers.

But while X is additionally being investigated over alleged dark patterns and disinformation—following accusations last October that X wasn’t stopping the spread of Israel/Hamas disinformation—it’s TikTok’s young user base that appears to be the focus of the EC’s probe into its platform.

“As a platform that reaches millions of children and teenagers, TikTok must fully comply with the DSA and has a particular role to play in the protection of minors online,” Breton said. “We are launching this formal infringement proceeding today to ensure that proportionate action is taken to protect the physical and emotional well-being of young Europeans.”

Over the coming months, the EC will likely request more information from TikTok, picking apart its DSA transparency report. The probe could also require interviews with TikTok staff or inspections of TikTok’s offices.

Upon concluding its investigation, the EC could require TikTok to take interim measures to fix any issues that are flagged. The Commission could also make a decision regarding non-compliance, potentially subjecting TikTok to fines of up to 6 percent of its global turnover.

An EC press officer, Thomas Regnier, told Ars that the Commission suspected that TikTok “has not diligently conducted” risk assessments to properly maintain mitigation efforts protecting “the physical and mental well-being of their users, and the rights of the child.”

In particular, its algorithm may risk “stimulating addictive behavior,” and its recommender systems “might drag its users, in particular minors and vulnerable users, into a so-called ‘rabbit hole’ of repetitive harmful content,” Regnier told Ars. Further, TikTok’s age verification system may be subpar, with the EU alleging that TikTok perhaps “failed to diligently assess the risk of 13-17-year-olds pretending to be adults when accessing TikTok,” Regnier said.

To better protect TikTok’s young users, the EU’s investigation could force TikTok to update its age-verification system and overhaul its default privacy, safety, and security settings for minors.

“In particular, the Commission suspects that the default settings of TikTok’s recommender systems do not ensure a high level of privacy, security, and safety of minors,” Regnier said. “The Commission also suspects that the default privacy settings that TikTok has for 16-17-year-olds are not the highest by default, which would not be compliant with the DSA, and that push notifications are, by default, not switched off for minors, which could negatively impact children’s safety.”

TikTok could avoid steep fines by committing to remedies recommended by the EC at the conclusion of its investigation.

Regnier told Ars that the EC does not comment on ongoing investigations, but its probe into X has spanned three months so far. Because the DSA does not provide any deadlines that may speed up these kinds of enforcement proceedings, ultimately, the duration of both investigations will depend on how much “the company concerned cooperates,” the EU’s press release said.

A TikTok spokesperson told Ars that TikTok “would continue to work with experts and the industry to keep young people on its platform safe,” confirming that the company “looked forward to explaining this work in detail to the European Commission.”

“TikTok has pioneered features and settings to protect teens and keep under-13s off the platform, issues the whole industry is grappling with,” TikTok’s spokesperson said.

All online platforms are now required to comply with the DSA, but enforcement on TikTok began near the end of July 2023. A TikTok press release last August promised that the platform would be “embracing” the DSA. But in its transparency report, submitted the next month, TikTok acknowledged that the report only covered “one month of metrics” and may not satisfy DSA standards.

“We still have more work to do,” TikTok’s report said, promising that “we are working hard to address these points ahead of our next DSA transparency report.”

Elon Musk’s X allows China-based propaganda banned on other platforms

Rinse-wash-repeat. —

X accused of overlooking propaganda flagged by Meta and criminal prosecutors.

Lax content moderation on X (aka Twitter) has disrupted coordinated efforts between social media companies and law enforcement to tamp down on “propaganda accounts controlled by foreign entities aiming to influence US politics,” The Washington Post reported.

Now propaganda is “flourishing” on X, The Post said, while other social media companies are stuck in endless cycles, watching some of the propaganda that they block proliferate on X, then inevitably spread back to their platforms.

Meta, Google, and then-Twitter began coordinating takedown efforts with law enforcement and disinformation researchers after Russian-backed influence campaigns manipulated their platforms in hopes of swaying the 2016 US presidential election.

The next year, all three companies promised Congress to work tirelessly to stop Russian-backed propaganda from spreading on their platforms. The companies created explicit election misinformation policies and began meeting biweekly to compare notes on propaganda networks each platform uncovered, according to The Post’s interviews with anonymous sources who participated in these meetings.

However, after Elon Musk purchased Twitter and rebranded the company as X, his company withdrew from the alliance in May 2023.

Sources told The Post that the last X meeting attendee was Irish intelligence expert Aaron Rodericks—who was allegedly disciplined for liking an X post calling Musk “a dipshit.” Rodericks was subsequently laid off when Musk dismissed the entire election integrity team last September, and after that, X apparently ditched the biweekly meeting entirely and “just kind of disappeared,” a source told The Post.

In 2023, for example, Meta flagged 150 “artificial influence accounts” identified on its platform, of which “136 were still present on X as of Thursday evening,” according to The Post’s analysis. X’s apparent inaction extends to accounts tied to all but eight of the 123 “deceptive China-based campaigns” that Meta flagged last May, August, and December, The Post reported.

The Post’s report also provided an exclusive analysis from the Stanford Internet Observatory (SIO), which found that 86 propaganda accounts that Meta flagged last November “are still active on X.”

The majority of these accounts—81—were China-based accounts posing as Americans, SIO reported. These accounts frequently ripped photos from Americans’ LinkedIn profiles, then changed the real Americans’ names while posting about both China and US politics, as well as people often trending on X, such as Musk and Joe Biden.

Meta has warned that China-based influence campaigns are “multiplying,” The Post noted, while X’s standards remain seemingly too relaxed. Even accounts linked to criminal investigations remain active on X. One “account that is accused of being run by the Chinese Ministry of Public Security,” The Post reported, remains on X despite its posts being cited by US prosecutors in a criminal complaint.

Prosecutors connected that account to “dozens” of X accounts attempting to “shape public perceptions” about the Chinese Communist Party, the Chinese government, and other world leaders. The accounts also comment on hot-button topics like the fentanyl problem or police brutality, seemingly to convey “a sense of dismay over the state of America without any clear partisan bent,” Elise Thomas, an analyst for a London nonprofit called the Institute for Strategic Dialogue, told The Post.

Some X accounts flagged by The Post had more than 1 million followers. Five have paid X for verification, suggesting that their disinformation campaigns—which target hashtags to confound discourse on US politics—are being boosted by X.

SIO technical research manager Renée DiResta criticized X’s decision to stop coordinating with other platforms.

“The presence of these accounts reinforces the fact that state actors continue to try to influence US politics by masquerading as media and fellow Americans,” DiResta told The Post. “Ahead of the 2022 midterms, researchers and platform integrity teams were collaborating to disrupt foreign influence efforts. That collaboration seems to have ground to a halt, Twitter does not seem to be addressing even networks identified by its peers, and that’s not great.”

Musk shut down X’s election integrity team because he claimed that the team was actually “undermining” election integrity. But analysts are bracing for floods of misinformation to sway 2024 elections, as some major platforms have removed election misinformation policies just as rapid advances in AI technologies have made misinformation spread via text, images, audio, and video harder for the average person to detect.

In one prominent example, a robocall used AI voice technology to impersonate Biden and tell Democrats not to vote. That incident seemingly pushed the Federal Trade Commission on Thursday to propose penalties for AI impersonation.

It seems apparent that propaganda accounts from foreign entities on X will use every tool available to get eyes on their content, perhaps expecting Musk’s platform to be the slowest to police them. According to The Post, some of the X accounts spreading propaganda are using what appears to be AI-generated images of Biden and Donald Trump to garner tens of thousands of views on posts.

It’s possible that X will start tightening up on content moderation as elections draw closer. Yesterday, X joined Amazon, Google, Meta, OpenAI, TikTok, and other Big Tech companies in signing an agreement to fight “deceptive use of AI” during 2024 elections. Among the top goals identified in the “AI Elections accord” are identifying where propaganda originates, detecting how propaganda spreads across platforms, and “undertaking collective efforts to evaluate and learn from the experiences and outcomes of dealing” with propaganda.

Female ex-exec told she lacked “docility and meekness” sues TikTok

Katie Ellen Puris, formerly one of TikTok’s most senior female executives, is suing TikTok and its owner, ByteDance, alleging wrongful termination based on age and sex discrimination.

In her complaint filed Thursday, Puris accused ByteDance chairman Lidong Zhang of aggressively forcing her out of the company because she “lacked the docility and meekness specifically required of female employees.” She also alleged experiencing retaliation after reporting sexual harassment to the company.

Puris joined TikTok in December 2019 as managing director and US head of business marketing. Previously, she’d led global marketing initiatives for Google and Facebook. TikTok appeared to value this experience and promoted her within two months to lead its global business marketing team. In this role, she launched TikTok for Business and meaningfully shaped how businesses interact with the platform.

Amid this success, Puris allegedly discovered that she had a target on her back.

According to her complaint, by early 2021, Beijing-based ByteDance executives, including Zhang, “began reasserting more control over TikTok’s day-to-day operations.” These executives, Puris said, required bi-monthly meetings with senior executives to report on their teams’ progress in hitting company targets.

“Despite its attempts to appear independent, TikTok’s day-to-day management and business decisions came directly from ByteDance’s top-level management in China,” Puris’ complaint alleged.

At one of these bi-monthly meetings, Puris met Zhang for the first time while giving a presentation in which she “celebrated her team’s successes and achievements.” Allegedly, Zhang was put off by Puris’ presentation because “women should always remain humble and express modesty.”

“Essentially, Lidong Zhang believes women should be quiet,” Puris’ complaint alleged.

Puris believes that because she “did not fit that stereotypical gender mold,” Zhang refused to ever meet with her again and placed her on a “kill list” of employees whom he wanted terminated.

According to Puris, Zhang began pressuring her supervisors to review her performance negatively. He allegedly cast a wide net and sought negative comments from employees whom Puris rarely worked with. His alleged “animosity” was so evident that one of Puris’ supervisors allegedly sought to protect her by removing her from Zhang’s oversight.

At the same time, Puris, who was approaching 50, alleged that other executives “made it clear” that they would prefer to hire “hungry,” younger, less experienced workers—“believed to be more innovative and pliable” and “desperate for approval”—over older workers like Puris. She claimed that a supervisor regularly referenced her age during performance reviews that became increasingly negative, with no clear feedback or comments substantiating the poor marks. Her requests for feedback were repeatedly rejected.

Puris’ efforts to report alleged age and sex discrimination did not result in corrective action, her complaint said. Even when a TikTok advertising partner allegedly drunkenly sexually harassed her at an off-site event, Puris alleged that her complaints were not taken seriously. Puris said that TikTok continued inviting the advertising partner to events, causing her to withdraw from attending.

Rather than sincerely investigating her complaints, TikTok allegedly retaliated: “after Ms. Puris made protected complaints, her team was substantially reduced, she received a devastatingly low performance review, she was denied her annual bonus, she was moved out of her position, and she was ultimately unlawfully terminated,” her complaint said.

TikTok requires users to “forever waive” rights to sue over past harms

Or forever hold your peace —

TikTok may be seeking to avoid increasingly high costs of mass arbitration.

Some TikTok users may have skipped reviewing an update to TikTok’s terms of service this summer that shakes up the process for filing a legal dispute against the app. According to The New York Times, changes that TikTok “quietly” made to its terms suggest that the popular app has spent the back half of 2023 preparing for a wave of legal battles.

In July, TikTok overhauled its rules for dispute resolution, pivoting from requiring private arbitration to insisting that legal complaints be filed in either the US District Court for the Central District of California or the Superior Court of the State of California, County of Los Angeles. Legal experts told the Times this could be a way for TikTok to dodge arbitration claims filed en masse that can cost companies millions more in fees than they expected to pay through individual arbitration.

Perhaps most significantly, TikTok also added a section to its terms that mandates that all legal complaints be filed within one year of any alleged harm caused by using the app. The terms now say that TikTok users “forever waive” rights to pursue any older claims. And unlike a prior version of TikTok’s terms of service archived in May 2023, users do not seem to have any options to opt out of waiving their rights.

TikTok did not immediately respond to Ars’ request for comment but has previously defended its “industry-leading safeguards for young people,” the Times noted.

Lawyers told the Times that these changes could make it more challenging for TikTok users to pursue legal action at a time when federal agencies are heavily scrutinizing the app and complaints about certain TikTok features allegedly harming kids are mounting.

In the past few years, TikTok has had mixed success defending against user lawsuits filed in courts. In 2021, TikTok was dealt a $92 million blow after settling a class-action lawsuit filed in an Illinois court, which alleged that the app illegally collected underage TikTok users’ personal data. Then, in 2022, TikTok defeated a Pennsylvania lawsuit alleging that the app was liable for a child’s death because its algorithm promoted a deadly “Blackout Challenge.” The same year, a bipartisan coalition of 44 state attorneys general announced an investigation to determine whether TikTok violated consumer laws by allegedly putting young users at risk.

Section 230 shielded TikTok from liability in the 2022 “Blackout Challenge” lawsuit, but more recently, a California judge ruled last month that social media platforms—including TikTok, Facebook, Instagram, and YouTube—couldn’t use a blanket Section 230 defense in a child safety case involving hundreds of children and teens allegedly harmed by social media use across 30 states.

Some of the product liability claims raised in that case are tied to features not protected by Section 230 immunity, the judge wrote, opening up social media platforms to potentially more lawsuits focused on those features. And the Times reported that investigations like the one launched by the bipartisan coalition “can lead to government and consumer lawsuits.”

As new information becomes available to consumers through investigations and lawsuits, there are concerns that users may only learn of harms after TikTok’s one-year window to file complaints has closed, leaving them with no path to seek remedies.

However, it’s currently unclear if TikTok’s new terms will stand up against legal challenges. University of Chicago law professor Omri Ben-Shahar told the Times that TikTok might struggle to defend its new terms in court, and it looks like TikTok is already facing pushback. One lawyer representing more than 1,000 guardians and minors claiming TikTok-related harms, Kyle Roche, told the Times that he is challenging TikTok’s updated terms. Roche said that the minors he represents “could not agree to the changes” and intended to ignore the updates, instead bringing their claims through private arbitration.

TikTok has also spent the past year defending against attempts by lawmakers to ban the China-based app in the US over concerns that the Chinese Communist Party (CCP) may use the app to surveil Americans. Congress has weighed different bipartisan bills with names like “ANTI-SOCIAL CCP Act” and “RESTRICT Act,” each intended to lay out a legal path to ban TikTok nationwide over alleged national security concerns.

So far, TikTok has defeated every attempt to widely ban the app, but that doesn’t mean lawmakers have any plans to stop trying. Most recently, a federal judge stopped Montana’s effort to ban TikTok statewide from taking effect, but a more limited TikTok ban restricting access on state-owned devices was upheld in Texas, Reuters reported.

TikTok pledges €12B European investment as Norway data centre nears completion

TikTok has promised to invest €12bn as part of an ongoing push to appease European regulators, who have raised suspicions that the app’s user data is being monitored by the Chinese government.

In response to repeated allegations of this nature, the short-form video app launched Project Clover in March. While it might sound like a secret military sting operation, the programme is pretty mundane. 

Essentially, Project Clover aims to build three massive data centres on the continent to keep European user data in Europe — and “within reach” of local authorities.   

Yesterday, TikTok pledged €12bn over the next 10 years for the project. The first data centre, a facility in Dublin, Ireland, was completed in September. The second one is currently under construction in the frosty climes of Hamar, Norway. 

TikTok this week announced it took possession of the first of three buildings at the site, and will begin migrating European data to the servers housed there from mid-2024. It said the centre will run solely on renewable energy and will be the largest facility of its kind in Europe once complete. The third and final data centre will also be built in Ireland. 

A worker walks outside TikTok’s largest data centre in Europe, currently under construction in Hamar, Norway, November 30, 2023. REUTERS/Victoria Klesty

TikTok’s mammoth investment also covers the consultancy fees of British cybersecurity firm NCC, which the social media firm hired to audit its data controls and provide third-party accountability.

“All of these controls and operations are designed to ensure that the data of our European users is safeguarded in a specially-designed protective environment, and can only be accessed by approved employees subject to strict independent oversight and verification,” said Theo Bertram, TikTok’s VP of Public Policy in Europe.

A series of institutions including the EU Commission, the UK Parliament, and the French government have banned use of TikTok on work-related devices, over fears that the app has been infiltrated by the Chinese government — allegations which the company has vehemently denied.

The full migration of data belonging to TikTok’s 150 million European users is expected by the end of 2024. Currently, the company stores its global user data in Singapore, Malaysia, and the US.

Report: TikTok Parent Lays Off Hundreds at VR Subsidiary Pico Interactive, Tencent Scraps VR Plans

TikTok parent company ByteDance is reportedly laying off what South China Morning Post maintains will be “hundreds of employees” working at its VR headset manufacturing subsidiary, Pico Interactive. A separate report from Reuters also maintains Chinese tech giant Tencent is scrapping its plans to release a VR headset.

According to two people with knowledge of the Pico layoffs, a substantial portion of the VR headset maker’s workforce is expected to be affected. The report maintains that some teams will see as much as a 30 percent reduction and that some higher-level positions will also be cut.

After Pico was acquired by ByteDance in August 2021, the company’s job postings revealed it was making a sizable expansion into the US, presumably to better compete with Meta on its home turf.

Shortly afterwards, the China-based company released its latest standalone headset, Pico 4, to consumers in Europe and Asia. Seen as a direct competitor to Meta Quest 2, Pico 4 still isn’t officially sold in the US; the headset is currently only available in Japan, Korea, Singapore, Malaysia, and most countries in Europe.

It was also reported by Chinese tech outlet 36Kr that Tencent, the massive Chinese multinational, was disbanding its 300-person-strong XR unit. The company has since disputed this claim to Reuters, stating instead that it will make adjustments to some business teams as development plans for XR hardware have changed.

Citing sources familiar with the restructuring, Reuters reports that Tencent is abandoning plans to release a VR headset due to a sobering economic outlook.

This follows a widening trend of layoffs that have affected nearly every big name in tech, including Google, Meta, Amazon, and Microsoft. Microsoft recently announced it was shuttering its social VR platform AltspaceVR in addition to its XR interface framework, Mixed Reality Toolkit. The company has also had trouble fulfilling its end of a US defense contract that uses HoloLens as the basis of a tactical AR headset.

It was also revealed late last year that Meta was planning to cut discretionary spending and extend its hiring freeze through the first quarter, alongside a layoff which affected nearly 11,000 employees, or around 13 percent of its overall workforce.
