Public officials can block haters—but only sometimes, SCOTUS rules

There are some circumstances where government officials are allowed to block people from commenting on their social media pages, the Supreme Court ruled Friday.

According to the Supreme Court, the key question is whether officials are speaking as private individuals or on behalf of the state when posting online. Issuing two opinions, the Supreme Court declined to set a clear standard for when personal social media use constitutes state speech, leaving each unique case to be decided by lower courts.

Instead, SCOTUS provided a test for courts to decide, first, whether someone is speaking on behalf of the state on their social media pages and, second, whether they actually have authority to act on what they post online.

The ruling suggests that government officials can block people from commenting on personal social media pages where they discuss official business when that speech cannot be attributed to the state and merely reflects personal remarks. In other words, blocking is acceptable when the official either has no authority to speak for the state or is not exercising that authority when posting on their page.

That authority empowering officials to speak for the state could be granted by a written law. It could also be granted informally if officials have long used social media to speak on behalf of the state to the point where their power to do so is considered “well-settled,” one SCOTUS ruling said.

SCOTUS broke it down like this: An official might be viewed as speaking for the state if the social media page is managed by the official’s office, if a city employee posts on their behalf to their personal page, or if the page is handed down from one official to another when terms in office end.

Posting on a personal page might also be considered speaking for the state if the information shared has not already been shared elsewhere.

Examples of officials clearly speaking on behalf of the state include a mayor holding a city council meeting online or an official using their personal page as an official channel for comments on proposed regulations.

Because SCOTUS did not set a clear standard, officials risk liability when blocking followers on so-called “mixed use” social media pages, SCOTUS cautioned. That liability could be diminished by keeping personal pages entirely separate or by posting a disclaimer stating that posts represent only officials’ personal views and not efforts to speak on behalf of the state. But any official using a personal page to make official comments could expose themselves to liability, even with a disclaimer.

SCOTUS test for when blocking is OK

These clarifications came in two SCOTUS opinions addressing conflicting outcomes in two separate complaints about officials in California and Michigan who blocked followers heavily criticizing them on Facebook and X. The lower courts’ decisions have been vacated, and courts must now apply the Supreme Court’s test to issue new decisions in each case.

One opinion was brief and unsigned, discussing a case where California parents sued school district board members who blocked them from commenting on public Twitter pages used for campaigning and discussing board issues. The board members claimed they blocked the parents after the parents left dozens, and sometimes hundreds, of identical comments on their tweets.

In the second—a unanimous opinion—Justice Amy Coney Barrett responded at length to a case from a Facebook user named Kevin Lindke. This opinion provides varied guidance that courts can apply when considering whether blocking is appropriate or violates constituents’ First Amendment rights.

Lindke was blocked by a Michigan city manager, James Freed, after leaving comments criticizing the city’s response to COVID-19 on a page that Freed created as a college student, sometime before 2008. Among these comments, Lindke called the city’s pandemic response “abysmal” and told Freed that “the city deserves better.” On a post showing Freed picking up a takeout order, Lindke complained that residents were “suffering,” while Freed ate at expensive restaurants.

After Freed hit 5,000 followers, he converted the page to reflect his public figure status. But while he primarily still used the page for personal posts about his family and always managed the page himself, the page went into murkier territory when he also shared updates about his job as city manager. Those updates included news about city efforts, screenshots of city press releases, and solicitations of public feedback, such as links to city surveys.

Meta sues “brazenly disloyal” former exec over stolen confidential docs

A recently unsealed court filing has revealed that Meta has sued a former senior employee for “brazenly disloyal and dishonest conduct” while leaving Meta for an AI data startup called Omniva that The Information has described as “mysterious.”

According to Meta, its former vice president of infrastructure, Dipinder Singh Khurana (also known as T.S.), allegedly used his access to “confidential, non-public, and highly sensitive” information to steal more than 100 internal documents in a rushed scheme to poach Meta employees and borrow Meta’s business plans to speed up Omniva’s negotiations with key Meta suppliers.

Meta believes that Omniva—which Data Center Dynamics (DCD) reported recently “pivoted from crypto to AI cloud”—is “seeking to provide AI cloud computing services at scale, including by designing and constructing data centers.” But it was held back by a “lack of data center expertise at the top,” DCD reported.

The Information reported that Omniva began hiring Meta employees to fill the gaps in this expertise, including wooing Khurana away from Meta.

Last year, Khurana notified Meta that he was leaving on May 15, and that’s when Meta first observed Khurana’s allegedly “utter disregard for his contractual and legal obligations to Meta—including his confidentiality obligations to Meta set forth in the Confidential Information and Invention Assignment Agreement that Khurana signed when joining Meta.”

A Meta investigation found that during Khurana’s last two weeks at the company, he allegedly uploaded confidential Meta documents—including “information about Meta’s ‘Top Talent,’ performance information for hundreds of Meta employees, and detailed employee compensation information”—on Meta’s network to a Dropbox folder labeled with his new employer’s name.

“Khurana also uploaded several of Meta’s proprietary, highly sensitive, confidential, and non-public contracts with business partners who supply Meta with crucial components for its data centers,” Meta alleged. “And other documents followed.”

In addition to pulling documents, Khurana allegedly sent “urgent” requests to subordinates for confidential information on a key supplier, including Meta’s pricing agreement “for certain computing hardware.”

“Unaware of Khurana’s plans, the employee provided Khurana with, among other things, Meta’s pricing-form agreement with that supplier for the computing hardware and the supplier’s Meta-specific preliminary pricing for a particular chip,” Meta alleged.

Some of these documents were “expressly marked confidential,” Meta alleged. Those include a three-year business plan and PowerPoints regarding “Meta’s future ‘roadmap’ with a key supplier” and “Meta’s 2022 redesign of its global-supply-chain group” that Meta alleged “would directly aid Khurana in building his own efficient and effective supply-chain organization” and afford a path for Omniva to bypass “years of investment.” Khurana also allegedly “uploaded a PowerPoint discussing Meta’s use of GPUs for artificial intelligence.”

Meta was apparently tipped off to this alleged betrayal when Khurana used his Meta email and network access to complete a writing assignment for Omniva as part of his hiring process. For this writing assignment, Khurana “disclosed non-public information about Meta’s relationship with certain suppliers that it uses for its data centers” when asked to “explain how he would help his potential new employer develop the supply chain for a company building data centers using specific technologies.”

In a seeming attempt to cover up the alleged theft of Meta documents, Khurana apparently “attempted to scrub” one document “of its references to Meta” and to remove a label marking it “CONFIDENTIAL—FOR INTERNAL USE ONLY.” But when replacing “Meta” with “X,” Khurana allegedly missed the term “Meta” in “at least five locations.”

“Khurana took such action to try and benefit himself or his new employer, including to help ensure that Khurana would continue to work at his new employer, continue to receive significant compensation from his new employer, and/or to enable Khurana to take shortcuts in building his supply-chain team at his new employer and/or helping to build his new employer’s business,” Meta alleged.

Ars could not immediately reach Khurana for comment. Meta noted that he has repeatedly denied breaching his contract or initiating contact with Meta employees who later joined Omniva. He also allegedly refused to sign a termination agreement that reiterates his confidentiality obligations.

Law enforcement doesn’t want to be “customer service” reps for Meta any more

No help —

“Dramatic and persistent spike” in account takeovers is “substantial drain” on resources.

Meta has a verified program for users of Facebook and Instagram. In this photo illustration, the icons of WhatsApp, Messenger, Instagram, and Facebook are displayed on an iPhone in front of a Meta logo. (Getty Images | Chesnot)

Forty-one state attorneys general penned a letter to Meta’s top attorney on Wednesday saying complaints are skyrocketing across the United States about Facebook and Instagram user accounts being stolen and declaring “immediate action” necessary to mitigate the rolling threat.

The coalition of top law enforcement officials, spearheaded by New York Attorney General Letitia James, says the “dramatic and persistent spike” in complaints concerning account takeovers amounts to a “substantial drain” on governmental resources, as many stolen accounts are also tied to financial crimes—some of which allegedly profit Meta directly.

“We have received a number of complaints of threat actors fraudulently charging thousands of dollars to stored credit cards,” says the letter addressed to Meta’s chief legal officer, Jennifer Newstead. “Furthermore, we have received reports of threat actors buying advertisements to run on Meta.”

“We refuse to operate as the customer service representatives of your company,” the officials add. “Proper investment in response and mitigation is mandatory.”

In addition to New York, the letter is signed by attorneys general from Alabama, Alaska, Arizona, California, Colorado, Connecticut, Delaware, Florida, Georgia, Hawaii, Illinois, Iowa, Kentucky, Louisiana, Maryland, Massachusetts, Michigan, Minnesota, Nebraska, Nevada, New Hampshire, New Jersey, New Mexico, North Carolina, Ohio, Oklahoma, Oregon, Pennsylvania, Rhode Island, South Carolina, South Dakota, Tennessee, Utah, Vermont, Virginia, Washington, West Virginia, Wisconsin, Wyoming, and the District of Columbia.

“Scammers use every platform available to them and constantly adapt to evade enforcement. We invest heavily in our trained enforcement and review teams and have specialized detection tools to identify compromised accounts and other fraudulent activity,” Meta says in a statement provided by spokesperson Erin McPike. “We regularly share tips and tools people can use to protect themselves, provide a means to report potential violations, work with law enforcement, and take legal action.”

Account takeovers can occur as a result of phishing as well as other more sophisticated and targeted techniques. Once an attacker gains access to an account, the owner can be easily locked out by changing passwords and contact information. Private messages and personal information are left up for grabs for a variety of nefarious purposes, from impersonation and fraud to pushing misinformation.

“It’s basically a case of identity theft and Facebook is doing nothing about it,” said one user whose complaint was cited in the letter to Meta’s Newstead.

The state officials said that stolen accounts used to run ads on Facebook often run afoul of its rules in the process, leading the accounts to be permanently suspended and punishing the victims—often small business owners—twice over.

“Having your social media account taken over by a scammer can feel like having someone sneak into your home and change all of the locks,” New York’s James said in a statement. “Social media is how millions of Americans connect with family, friends, and people throughout their communities and the world. To have Meta fail to properly protect users from scammers trying to hijack accounts and lock rightful owners out is unacceptable.”

Other complaints forwarded to Newstead show hacking victims expressing frustration over Meta’s lack of response. In many cases, users report no action being taken by the company. Some say the company encourages users to report such problems but never responds, leaving them unable to salvage their accounts or the businesses they built around them.

After being hacked and defrauded of $500, one user complained that their ability to communicate with their own customer base had been “completely disrupted,” and that Meta had never responded to the report they filed, though the user had followed the instructions the company provided them to obtain help.

“I can’t get any help from Meta. There is no one to talk to and meanwhile all my personal pictures are being used. My contacts are receiving false information from the hacker,” one user wrote.

Wrote another: “This is my business account, which is important to me and my life. I have invested my life, time, money and soul in this account. All attempts to contact and get a response from the Meta company, including Instagram and Facebook, were crowned with complete failure, since the company categorically does not respond to letters.”

Figures provided by James’ office in New York show a tenfold increase in complaints between 2019 and 2023—from 73 complaints to more than 780 last year. In January alone, more than 128 complaints were received, James’ office says. Other states saw similar spikes in complaints during that period, according to the letter, with Pennsylvania recording a 270 percent increase, North Carolina a 330 percent jump, and Vermont a 740 percent surge.

The letter notes that, while the officials cannot be “certain of any connection,” the drastic increase in complaints occurred “around the same time” as layoffs at Meta affecting roughly 11,000 employees in November 2022, around 13 percent of its staff at the time.

This story originally appeared on wired.com.

AMC to pay $8M for allegedly violating 1988 law with use of Meta Pixel

Stream like no one is watching —

Proposed settlement impacts millions using AMC apps like Shudder and AMC+.

On Thursday, AMC notified subscribers of a proposed $8.3 million settlement that provides awards to an estimated 6 million subscribers of its six streaming services: AMC+, Shudder, Acorn TV, ALLBLK, SundanceNow, and HIDIVE.

The settlement comes in response to allegations that AMC illegally shared subscribers’ viewing history with tech companies like Google, Facebook, and X (aka Twitter) in violation of the Video Privacy Protection Act (VPPA).

Passed in 1988, the VPPA prohibits AMC and other video service providers from sharing “information which identifies a person as having requested or obtained specific video materials or services from a video tape service provider.” It was originally passed to protect individuals’ right to private viewing habits, after a journalist published the mostly unrevealing video rental history of a judge, Robert Bork, who had been nominated to the Supreme Court by Ronald Reagan.

The so-called “Bork Tapes” revealed little—other than that the judge frequently rented spy thrillers and British costume dramas—but lawmakers recognized that speech could be chilled by monitoring anyone’s viewing habits. While the law was born in the era of Blockbuster Video, subscribers suing AMC wrote in their amended complaint that “the importance of legislation like the VPPA in the modern era of datamining is more pronounced than ever before.”

According to subscribers suing, AMC allegedly installed tracking technologies—including the Meta Pixel, the X Tracking Pixel, and Google Tracking Technology—on its website, allowing their personally identifying information to be connected with their viewing history.

Some trackers, like the Meta Pixel, required AMC to choose what kind of activity could be tracked, and subscribers claimed that AMC had willingly opted into sharing video names and URLs with Meta, along with a Facebook ID. “Anyone” could use the Facebook ID, subscribers said, to identify the AMC subscribers “simply by entering https://www.facebook.com/[unencrypted FID]/” into a browser.
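To illustrate the mechanism the complaint describes, here is a minimal, hypothetical TypeScript sketch of how a streaming site embedding the Meta Pixel can attach a video title to a tracking event. The fbq() calls follow Meta’s standard Pixel API, but the pixel ID, helper function, and field values are illustrative assumptions, not AMC’s actual code.

```typescript
// Hypothetical sketch of Meta Pixel usage on a video page (not AMC's actual code).
// fbq() is defined by Meta's standard pixel loader script included on the page.
declare function fbq(command: "init" | "track", ...args: unknown[]): void;

const PIXEL_ID = "1234567890"; // placeholder pixel ID

fbq("init", PIXEL_ID);
fbq("track", "PageView"); // also sends the page URL, which may itself name the video

// The site owner chooses what extra data rides along with each event.
// Sending the video's title ties a specific title to the visitor's browser,
// which Meta can then associate with a logged-in Facebook ID via cookies.
function reportVideoView(videoTitle: string): void {
  fbq("track", "ViewContent", {
    content_name: videoTitle, // standard Pixel parameter
    content_type: "video",
  });
}

reportVideoView("Example Horror Episode");
```

Whether events like this carry identifying details is a configuration choice made by the site owner, which is the choice subscribers allege AMC made.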

X’s ID could similarly be de-anonymized, subscribers alleged, by using tweeterid.com.

AMC “could easily program its AMC Services websites so that this information is not disclosed” to tech companies, subscribers alleged.

Denying wrongdoing, AMC has defended its use of tracking technologies but is proposing to settle with subscribers to avoid uncertain outcomes from litigation, the proposed settlement said.

A hearing to approve the proposed settlement has been scheduled for May 16.

If it’s approved, AMC has agreed to “suspend, remove, or modify operation of the Meta Pixel and other Third-Party Tracking Technologies so that use of such technologies on AMC Services will not result in AMC’s disclosure to the third-party technology companies of the specific video content requested or obtained by a specific individual.”

Google and X did not immediately respond to Ars’ request to comment. Meta declined to comment.

All registered users of AMC services who “requested or obtained video content on at least one of the six AMC services” between January 18, 2021, and January 10, 2024, are currently eligible to submit claims under the proposed settlement. The deadline to submit is April 9.

In addition to receiving a share of the $8.3 million settlement fund, class members will get a free one-week digital subscription.

According to AMC’s notice to subscribers (full disclosure, I am one), AMC’s agreement to avoid sharing subscribers’ viewing histories may change if the VPPA is amended, repealed, or invalidated. If the law changes to permit sharing viewing data at the core of subscribers’ claim, AMC may resume sharing that information with tech companies.

That day could come soon if Patreon has its way. Recently, Patreon asked a federal judge to rule that the VPPA is unconstitutional.

The lawsuit against Patreon is similar: the company allegedly violated the VPPA by using the Meta Pixel to share video views on its platform with Meta.

Patreon has argued that the VPPA is unconstitutional because it chills speech. Patreon said that the law was enacted “for the express purpose of silencing disclosures about political figures and their video-watching, an issue of undisputed continuing public interest and concern.”

According to Patreon, the VPPA narrowly prohibits video service providers from sharing video titles, but not from sharing information that people may wish to keep private, such as “the genres, performers, directors, political views, sexual content, and every other detail of pre-recorded video that those consumers watch.”

Therefore, Patreon argued, the VPPA “restrains speech” while “doing little if anything to protect privacy” and never protecting privacy “by the least restrictive means.”

That lawsuit remains ongoing, but Patreon’s position is likely to be met with opposition from experts who typically also defend freedom of speech. Experts at the Electronic Privacy Information Center, like AMC subscribers suing, consider the VPPA one of America’s “strongest protections of consumer privacy against a specific form of data collection.” And the Electronic Frontier Foundation (EFF) has already moved to convince the court to reject Patreon’s claim, describing the VPPA in a blog as an “essential” privacy protection.

“EFF is second to none in fighting for everyone’s First Amendment rights in court,” EFF’s blog said. “But Patreon’s First Amendment argument is wrong and misguided. The company seeks to elevate its speech interests over those of Internet users who benefit from the VPPA’s protections.”

Facebook rules allowing fake Biden “pedophile” video deemed “incoherent”

Not to be misled —

Meta may revise AI policies that experts say overlook “more misleading” content.

A fake video manipulated to falsely depict President Joe Biden inappropriately touching his granddaughter has revealed flaws in Facebook’s “deepfake” policies, Meta’s Oversight Board concluded Monday.

Last year when the Biden video went viral, Facebook repeatedly ruled that it did not violate policies on hate speech, manipulated media, or bullying and harassment. Since the Biden video is not AI-generated content and does not manipulate the president’s speech—making him appear to say things he’s never said—the video was deemed OK to remain on the platform. Meta also noted that the video was “unlikely to mislead” the “average viewer.”

“The video does not depict President Biden saying something he did not say, and the video is not the product of artificial intelligence or machine learning in a way that merges, combines, replaces, or superimposes content onto the video (the video was merely edited to remove certain portions),” Meta’s blog said.

The Oversight Board—an independent panel of experts—reviewed the case and ultimately upheld Meta’s decision despite being “skeptical” that current policies work to reduce harms.

“The board sees little sense in the choice to limit the Manipulated Media policy to cover only people saying things they did not say, while excluding content showing people doing things they did not do,” the board said, noting that Meta claimed this distinction was made because “videos involving speech were considered the most misleading and easiest to reliably detect.”

The board called upon Meta to revise its “incoherent” policies that it said appear to be more concerned with regulating how content is created, rather than with preventing harms. For example, the Biden video’s caption described the president as a “sick pedophile” and called out anyone who would vote for him as “mentally unwell,” which could affect “electoral processes” that Meta could choose to protect, the board suggested.

“Meta should reconsider this policy quickly, given the number of elections in 2024,” the Oversight Board said.

One problem, the Oversight Board suggested, is that in its rush to combat AI technologies that make generating deepfakes a fast, cheap, and easy business, Meta policies currently overlook less technical ways of manipulating content.

Instead of using AI, the Biden video relied on basic video-editing technology to edit out the president placing an “I Voted” sticker on his adult granddaughter’s chest. The crude edit looped a 7-second clip altered to make the president appear to be, as Meta described in its blog, “inappropriately touching a young woman’s chest and kissing her on the cheek.”

Meta making this distinction is confusing, the board said, partly because videos altered using non-AI technologies are not considered less misleading or less prevalent on Facebook.

The board recommended that Meta update policies to cover not just AI-generated videos, but other forms of manipulated media, including all forms of manipulated video and audio. Audio fakes currently not covered in the policy, the board warned, offer fewer cues to alert listeners to the inauthenticity of recordings and may even be considered “more misleading than video content.”

Notably, earlier this year, a fake Biden robocall attempted to mislead Democratic voters in New Hampshire by encouraging them not to vote. The Federal Communications Commission promptly responded by declaring AI-generated robocalls illegal, but the Federal Election Commission was not able to act as swiftly to regulate AI-generated misleading campaign ads easily spread on social media, AP reported. In a statement, Oversight Board Co-Chair Michael McConnell said that manipulated audio is “one of the most potent forms of electoral disinformation.”

To better combat known harms, the board suggested that Meta revise its Manipulated Media policy to “clearly specify the harms it is seeking to prevent.”

Rather than pushing Meta to remove more content, however, the board urged Meta to use “less restrictive” methods of coping with fake content, such as relying on fact-checkers applying labels noting that content is “significantly altered.” In public comments, some Facebook users agreed that labels would be most effective. Others urged Meta to “start cracking down” and remove all fake videos, with one suggesting that removing the Biden video should have been a “deeply easy call.” Another commenter suggested that the Biden video should be considered acceptable speech, as harmless as a funny meme.

While the board wants Meta to also expand its policies to cover all forms of manipulated audio and video, it cautioned that including manipulated photos in the policy could “significantly expand” the policy’s scope and make it harder to enforce.

“If Meta sought to label videos, audio, and photographs but only captured a small portion, this could create a false impression that non-labeled content is inherently trustworthy,” the board warned.

Meta should therefore stop short of adding manipulated images to the policy, the board said. Instead, Meta should conduct research into the effects of manipulated photos and then consider updates when the company is prepared to enforce a ban on manipulated photos at scale, the board recommended. In the meantime, Meta should move quickly to update policies ahead of a busy election year where experts and politicians globally are bracing for waves of misinformation online.

“The volume of misleading content is rising, and the quality of tools to create it is rapidly increasing,” McConnell said. “Platforms must keep pace with these changes, especially in light of global elections during which certain actors seek to mislead the public.”

Meta’s spokesperson told Ars that Meta is “reviewing the Oversight Board’s guidance and will respond publicly to their recommendations within 60 days.”

Patreon: Blocking platforms from sharing user video data is unconstitutional

Patreon, a monetization platform for content creators, has asked a federal judge to deem unconstitutional a rarely invoked law that some privacy advocates consider one of the nation’s “strongest protections of consumer privacy against a specific form of data collection.” Such a ruling would undo decades of US law carefully shielding the privacy of millions of Americans’ personal video viewing habits.

The Video Privacy Protection Act (VPPA) blocks businesses from sharing data with third parties on customers’ video purchases and rentals. At a minimum, the VPPA requires written consent each time a business wants to share this sensitive video data—including the title, description, and, in most cases, the subject matter.

The VPPA was passed in 1988 in response to backlash over a reporter sharing the video store rental history of a judge, Robert Bork, who had been nominated to the Supreme Court by Ronald Reagan. The report revealed that Bork apparently liked spy thrillers and British costume dramas and suggested that maybe the judge had a family member who dug John Hughes movies.

Although the videos that Bork rented “revealed nothing particularly salacious” about the judge, the intent of reporting the “Bork Tapes” was to confront the judge “with his own vulnerability to privacy harms” during a time when the Supreme Court nominee had “criticized the constitutional right to privacy” as “a loose canon in the law,” Harvard Law Review noted.

Even though no harm was caused by sharing the “Bork Tapes,” policymakers on both sides of the aisle agreed that the privacy of people’s viewing habits ought to be safeguarded, since monitoring those habits risked chilling speech by pushing people to alter what they watch. The US government has not budged on this stance since. It is now supporting a lawsuit filed in 2022 by Patreon users who claim that, while no harms were caused, damages are owed because Patreon allegedly violated the VPPA by sharing data on videos they watched on the platform with Facebook through the Meta Pixel, without users’ written consent.

“Restricting the ability of those who possess a consumer’s video purchase, rental, or request history to disclose such information directly advances the goal of keeping that information private and protecting consumers’ intellectual freedom,” the Department of Justice’s brief said.

The Meta Pixel is a piece of code used by companies like Patreon to better target content to users by tracking their activity and monitoring conversions on Meta platforms. “In simplest terms,” Patreon users said in an amended complaint, “the Pixel allows Meta to know what video content one of its users viewed on Patreon’s website.”

The Pixel is currently at the center of a pile of privacy lawsuits, where people have accused various platforms of using the Pixel to covertly share sensitive data without users’ consent, including health and financial data.

Several lawsuits have specifically lobbed VPPA claims, which users have argued validates the urgency of retaining the VPPA protections that Patreon now seeks to strike. The DOJ argued that “the explosion of recent VPPA cases” is proof “that the disclosures the statute seeks to prevent are a legitimate concern,” despite Patreon’s arguments that the statute does “nothing to materially or directly advance the privacy interests it supposedly was enacted to protect.”

Patreon’s attack on the VPPA

Patreon has argued in a recent court filing that the VPPA was not enacted to protect average video viewers from embarrassing and unwarranted disclosures but “for the express purpose of silencing disclosures about political figures and their video-watching, an issue of undisputed continuing public interest and concern.”

That’s one of many ways that the VPPA silences speech, Patreon argued, by allegedly preventing disclosures regarding public figures that are relevant to public interest.

Among other “fatal flaws,” Patreon alleged, the VPPA “restrains speech” while “doing little if anything to protect privacy” and never protecting privacy “by the least restrictive means.”

Patreon claimed that the VPPA is too narrow, focusing only on pre-recorded videos. It prevents video service providers from disclosing to any other person the titles of videos that someone watched, but it does not necessarily stop platforms from sharing information about “the genres, performers, directors, political views, sexual content, and every other detail of pre-recorded video that those consumers watch,” Patreon claimed.

Meta relents to EU, allows unlinking of Facebook and Instagram accounts

Meta will allow some Facebook and Instagram users to unlink their accounts as part of the platform’s efforts to comply with the European Union’s Digital Markets Act (DMA) ahead of enforcement starting March 1.

In a blog, Meta’s competition and regulatory director, Tim Lamb, wrote that Instagram and Facebook users in the EU, the European Economic Area, and Switzerland would be notified in the “next few weeks” about “more choices about how they can use” Meta’s services and features, including new opportunities to limit data-sharing across apps and services.

Most significantly, users can choose to either keep their accounts linked or “manage their Instagram and Facebook accounts separately so that their information is no longer used across accounts.” Up to this point, linking user accounts had provided Meta with more data to more effectively target ads to more users. The perk of accessing data on Instagram’s widening younger user base, TechCrunch noted, was arguably the $1 billion selling point explaining why Facebook acquired Instagram in 2012.

Also announced today, users protected by the DMA will soon be able to separate their Facebook Messenger, Marketplace, and Gaming accounts. However, doing so will limit some social features available in some of the standalone apps.

While Messenger users choosing to disconnect the chat service from their Facebook accounts will still “be able to use Messenger’s core service offering such as private messaging and chat, voice and video calling,” Marketplace users making that same choice will have to email sellers and buyers, rather than using Facebook’s messenger service. And unlinked Gaming app users will only be able to play single-player games, severing their access to social gaming otherwise supported by linking the Gaming service to their Facebook social networks.

While Meta may have had options other than stripping features from users who unlink their accounts, Meta didn’t really have a choice about offering the newly announced unlinking options themselves. The DMA specifically requires that very large platforms designated as “gatekeepers” give users the “specific choice” of opting out of sharing personal data across a platform’s different core services or across any separate services that the gatekeepers manage.

Without gaining “specific” consent, gatekeepers will no longer be allowed to “combine personal data from the relevant core platform service with personal data from any further core platform services” or “cross-use personal data from the relevant core platform service in other services provided separately by the gatekeeper,” the DMA says. The “specific” requirement is designed to block platforms from securing consent at sign-up, then hoovering up as much personal data as possible as new services are added in an endless pursuit of advertising growth.

As defined under the General Data Protection Regulation, the EU’s “specific” consent requirement stops platforms from gaining user consent for broadly defined data processing, instead establishing “the need for granularity” so that platforms always seek consent for each “specific” data “processing purpose.”

“This is an important ‘safeguard against the gradual widening or blurring of purposes for which data is processed, after a data subject has agreed to the initial collection of the data,’” the European Data Protection Supervisor explained in public comments describing “commercial surveillance and data security practices that harm consumers” provided at the request of the FTC in 2022.

According to Meta’s help page, once users opt out of sharing data between apps and services, Meta will “stop combining your info across these accounts” within 15 days “after you’ve removed them.” However, all “previously combined info would remain combined.”

Actor paid to pose as crypto CEO “deeply sorry” about $1.3 billion scam

A screenshot from Jack Gamble’s video outing Stephen Harrison as HyperVerse’s fake CEO, posted on Gamble’s “Nobody Special Finance” YouTube channel.

An actor who was hired to pretend to be the highly qualified CEO of a shady, collapsed cryptocurrency hedge fund called HyperVerse has apologized after a YouTuber unmasked his real identity last week.

An Englishman currently living in Thailand, Stephen Harrison confirmed to The Guardian that HyperVerse hired him to pose as CEO Steven Reece Lewis. Harrison told The Guardian that he was “deeply sorry” to HyperVerse investors—who lost a reported $1.3 billion after buying into a cryptocurrency-mining operation that promised “double or triple returns,” but did not exist, Court Watch reported.

Harrison claimed that he had “certainly not pocketed” any portion of those funds. Instead, he told The Guardian that he was paid about $7,500 over nine months. To play the part of CEO, he was also given a “wool and cashmere suit, two business shirts, two ties, and a pair of shoes,” The Guardian reported.

Harrison said that he had no part in HyperVerse’s alleged scheme to woo investors with false promises of high returns.

“I am sorry for these people,” Harrison said. “Because they believed some idea with me at the forefront and believed in what I said, and God knows what these people have lost. And I do feel bad about this.”

He also said that he was “shocked” to find out that HyperVerse had falsified his credentials, telling investors that Harrison was a fintech whiz—supposedly earning prestigious degrees before working at Goldman Sachs, then selling a web development company to Adobe before launching his own IT startup.

Harrison claimed that he only found out about this resume fraud when The Guardian investigated and found that nothing on his resume checked out.

“When I read that in the papers, I was like, blooming heck, they make me sound so highly educated,” Harrison told The Guardian.

He confirmed that he had received general certificates of secondary education but said his expertise was “certainly not on that level,” contrary to what HyperVerse claimed.

“They painted a good picture of me, but they never told me any of this,” Harrison told The Guardian.

Getting hired as fake CEO

According to The Guardian, Harrison was working as an unpaid freelance sports commentator when a “friend of a friend” told him about the HyperVerse gig.

The contract that Harrison signed was with an Indonesian-based talent agency called Mass Focus Ltd. It stated that he would be hired as “presenter talent,” The Guardian reported. However, The Guardian could find “no record of a company of this name on the Indonesian company register.”

Harrison’s agent allegedly told him that it was common for companies to hire corporate “presenters” to “represent the business” and reassured him that HyperVerse was “legitimate.”

Even after those assurances, Harrison said that he was still worried that HyperVerse might be a “scam,” researching the company online but ultimately deciding that “everything seemed OK.”

“So, I rolled with it,” Harrison told The Guardian.

Harrison said that promotional videos that he recorded as HyperVerse CEO were filmed in “makeshift studios” in Bangkok. He said that he was asked to start using the fake name Steven Reece Lewis while filming the second video. When he questioned why a fake name was necessary, HyperVerse allegedly told him that he was “acting the role.”

His agent allegedly told him that this was “perfectly normal” and after that, he “never went online and checked about Steven Reece Lewis,” he told The Guardian.

“I looked on YouTube occasionally, way back when they put the presentations up, but apart from that I was detached from this role,” Harrison said.

Over nine months, Harrison mostly worked one to two hours monthly, making videos posing as HyperVerse’s CEO.

There was also a Twitter account launched under the fake name Steven Reece Lewis. The Guardian noted that the date of Harrison’s final paycheck from HyperVerse “coincided with the last date the Twitter account was active,” but Harrison told The Guardian that he “had no oversight” of that account. When he was ending his stint as fake CEO, Harrison told The Guardian that he “requested that the Twitter account be shut down.”

Harrison also told The Guardian that he had “no contact at any point” with HyperVerse heads Sam Lee and Ryan Xu, exclusively dealing with a local contact in Thailand.

Meta Reaffirms Commitment to Metaverse Vision, Has No Plans to Slow Billions in Reality Labs Investments

Meta announced its latest quarterly results, revealing that the company’s Reality Labs metaverse division is again reporting a loss of nearly $4 billion. The bright side? Meta’s still investing billions into XR, and it’s not showing any signs of stopping.

Meta revealed in its Q1 2023 financial results that its family of apps is now being used by over 3 billion people, an increase of 5% year-over-year, but its metaverse investments are still operating at heavy losses.

Reality Labs is responsible for R&D for its most forward-looking projects, including the Quest virtual reality headset platform, and its work in augmented reality and artificial intelligence. Meta CEO Mark Zuckerberg has warned shareholders in the past that Meta’s XR investments may not flourish until 2030.

Here’s a look at the related income losses and revenue for Reality Labs since it was formed as a distinct entity in Q4 2020:

Image created by Road to VR using data courtesy Meta

Meta reports Reality Labs generated $339 million in revenue during the first quarter of the year, a small fraction of the company’s $28.65 billion quarterly revenue. The bulk of that was generated by its family of apps—Facebook, Messenger, Instagram, and WhatsApp.

While the $3.99 billion loss may show the company tightening its belt compared to Q4 2022’s eye-watering $4.28 billion, Meta says we should still expect those losses to increase year-over-year in 2023.

This follows the company’s second big round of layoffs, which this month affected VR teams at Reality Labs, Downpour Interactive (Onward), and Ready at Dawn (Lone Echo, Echo VR). The company says a third round is due in May, which will affect its business groups.

Zuckerberg, who has dubbed 2023 the company’s “year of efficiency,” said this during the earnings call regarding the layoffs:

“This has been a difficult process. But after this is done, I think we’re going to have a much more stable environment for our employees. For the rest of the year, I expect us to focus on improving our distributed work model, delivering AI tools to improve productivity, and removing unnecessary processes across the company.”

Beyond its investment in AI, Zuckerberg says the recent characterization claiming the company has somehow moved away from focusing on the metaverse is “not accurate.”

“We’ve been focusing on both AI and the metaverse for years now, and we will continue to focus on both,” Zuckerberg says, noting that breakthroughs in both areas are essentially shared, such as computer vision, procedurally generated virtual worlds, and its work on AR glasses.

Notably, Zuckerberg says the number of titles in the Quest store with at least $25 million in revenue has doubled since last year, with more than half of Quest daily actives now spending more than an hour using their device.

The company previously confirmed that a Quest 3 headset is set to release this year; it is said to be slightly pricier than the $400 Quest 2 and to include features “designed to appeal to VR enthusiasts.”

Meta Acquires 3D Lens Printing Firm Luxexcel to Bolster Future AR Glasses

Meta has acquired the Belgian-Dutch company Luxexcel, a 3D printing firm creating complex glass lenses for use in AR optics.

As first reported by Belgian newspaper De Tijd (Dutch), the Turnhout, Belgium-based company was quietly acquired by Facebook parent Meta in an apparent bid to bolster the development of its upcoming AR glasses.

Details of the acquisition are still under wraps; however, Meta confirmed the deal to the English-language publication The Brussels Times.

“We are delighted that the Luxexcel team has joined Meta. This extends the partnership between the two companies,” Meta says.

Founded in 2009, Luxexcel first focused on 3D printing lenses for automotive, industrial optics, and the aerospace industry. Over the years Luxexcel shifted to using its 3D printing tech to create prescription lenses for the eyewear market.

In 2020, the company made its first entry into the smart eyewear market by combining its 3D-printed prescription lenses with integrated smart technology. One year later, Luxexcel partnered with UK-based waveguide company WaveOptics, which has since been acquired by Snapchat parent Snap.

Meta’s interest in Luxexcel undoubtedly stems from its ability to print complex optics for both smart glasses and AR headsets; Meta’s Project Aria is rumored to house Luxexcel-built lenses. Project Aria is a sensor-rich pair of glasses which the company created to train its AR perception systems, as well as assess public perception of the technology.
