Policy


Vending machine error reveals secret face image database of college students

“Stupid M&M machines” —

Facial-recognition data is typically used to prompt more vending machine sales.


Canada-based University of Waterloo is racing to remove M&M-branded smart vending machines from campus after outraged students discovered the machines were covertly collecting facial-recognition data without their consent.

The scandal started when a student using the alias SquidKid47 posted an image on Reddit showing a campus vending machine error message, “Invenda.Vending.FacialRecognitionApp.exe,” displayed after the machine failed to launch a facial recognition application that nobody expected to be part of the process of using a vending machine.

Reddit post shows error message displayed on a University of Waterloo vending machine (cropped and lightly edited for clarity).

“Hey, so why do the stupid M&M machines have facial recognition?” SquidKid47 pondered.

The Reddit post sparked an investigation by a fourth-year student named River Stanley, who was writing for a university publication called MathNEWS.

Stanley sounded the alarm after consulting Invenda sales brochures that promised “the machines are capable of sending estimated ages and genders” of every person who used the machines, without ever requesting consent.

This frustrated Stanley, who learned that Canada’s privacy commissioner had years ago investigated a shopping mall operator called Cadillac Fairview after discovering that some of the malls’ informational kiosks were secretly “using facial recognition software on unsuspecting patrons.”

Only because of that official investigation did Canadians learn that “over 5 million nonconsenting Canadians” were scanned into Cadillac Fairview’s database, Stanley reported. Where Cadillac Fairview was ultimately forced to delete the entire database, Stanley wrote that consequences for collecting similarly sensitive facial recognition data without consent for Invenda clients like Mars remain unclear.

Stanley’s report ended with a call for students to demand that the university “bar facial recognition vending machines from campus.”

A University of Waterloo spokesperson, Rebecca Elming, eventually responded, confirming to CTV News that the school had asked to disable the vending machine software until the machines could be removed.

Students told CTV News that their confidence in the university’s administration was shaken by the controversy. Some students claimed on Reddit that they attempted to cover the vending machine cameras while waiting for the school to respond, using gum or Post-it notes. One student pondered whether “there are other places this technology could be being used” on campus.

Elming was not able to confirm the exact timeline for when machines would be removed other than telling Ars it would happen “as soon as possible.” She told Ars she is “not aware of any similar technology in use on campus.” And for any casual snackers on campus wondering when, if ever, students could expect the vending machines to be replaced with snack dispensers not equipped with surveillance cameras, Elming confirmed that “the plan is to replace them.”

Invenda claims machines are GDPR-compliant

MathNEWS’ investigation tracked down responses from companies responsible for smart vending machines on the University of Waterloo’s campus.

Adaria Vending Services told MathNEWS that “what’s most important to understand is that the machines do not take or store any photos or images, and an individual person cannot be identified using the technology in the machines. The technology acts as a motion sensor that detects faces, so the machine knows when to activate the purchasing interface—never taking or storing images of customers.”
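For readers wondering what a face-activated interface that never stores images could even look like, below is a minimal, hypothetical Python sketch using OpenCV’s bundled Haar cascade. It is only an illustration of the kind of on-device presence check Adaria describes, not Invenda’s actual implementation, and every device detail in it is an assumption.

```python
# Illustrative sketch only: a face-presence trigger that never writes frames anywhere.
# This is NOT Invenda's implementation; it simply shows that "detect a face to wake
# the purchasing interface" does not, by itself, require storing images.
import cv2

# OpenCV ships this pretrained frontal-face detector with the library.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def customer_present(camera_index: int = 0) -> bool:
    """Return True if at least one face is visible in a single grabbed frame."""
    cap = cv2.VideoCapture(camera_index)  # hypothetical built-in camera
    ok, frame = cap.read()
    cap.release()
    if not ok:
        return False
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    # The frame goes out of scope here; nothing is saved to disk or transmitted.
    return len(faces) > 0

if __name__ == "__main__":
    if customer_present():
        print("Wake purchasing interface")
```

In a design like this, the privacy claim rests entirely on the frame never being persisted or sent off-device, which is the behavior an outside audit would need to verify.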

According to Adaria and Invenda, students shouldn’t worry about data privacy because the vending machines are “fully compliant” with the world’s toughest data privacy law, the European Union’s General Data Protection Regulation (GDPR).

“These machines are fully GDPR compliant and are in use in many facilities across North America,” Adaria’s statement said. “At the University of Waterloo, Adaria manages last mile fulfillment services—we handle restocking and logistics for the snack vending machines. Adaria does not collect any data about its users and does not have any access to identify users of these M&M vending machines.”

Under the GDPR, face image data is considered among the most sensitive data that can be collected, typically requiring explicit consent to collect, so it’s unclear how the machines may meet that high bar based on the Canadian students’ experiences.

According to a press release from Invenda, the maker of M&M candies, Mars, was a key part of Invenda’s expansion into North America. It was only after closing a $7 million funding round, including deals with Mars and other major clients like Coca-Cola, that Invenda could push for expansive global growth that seemingly vastly expands its smart vending machines’ data collection and surveillance opportunities.

“The funding round indicates confidence among Invenda’s core investors in both Invenda’s corporate culture, with its commitment to transparency, and the drive to expand global growth,” Invenda’s press release said.

But University of Waterloo students like Stanley now question Invenda’s “commitment to transparency” in North American markets, especially since the company is seemingly openly violating Canadian privacy law, Stanley told CTV News.

On Reddit, while some students joked that SquidKid47’s face “crashed” the machine, others asked if “any pre-law students wanna start up a class-action lawsuit?” One commenter summed up students’ frustration by typing in all caps, “I HATE THESE MACHINES! I HATE THESE MACHINES! I HATE THESE MACHINES!”


Avast ordered to stop selling browsing data from its browsing privacy apps

Security, privacy, things of that nature —

Identifiable data included job searches, map directions, “cosplay erotica.”


Avast, a name known for its security research and antivirus apps, has long offered Chrome extensions, mobile apps, and other tools aimed at increasing privacy.

Avast’s apps would “block annoying tracking cookies that collect data on your browsing activities,” and prevent web services from “tracking your online activity.” Deep in its privacy policy, Avast said information that it collected would be “anonymous and aggregate.” In its fiercest rhetoric, Avast’s desktop software claimed it would stop “hackers making money off your searches.”

All of that language was offered up while Avast was collecting users’ browser information from 2014 to 2020, then selling it to more than 100 other companies through a since-shuttered entity known as Jumpshot, according to the Federal Trade Commission. Under a recently proposed FTC order (PDF), Avast must pay $16.5 million, which is “expected to be used to provide redress to consumers,” according to the FTC. Avast will also be prohibited from selling future browsing data, must obtain express consent for future data gathering, notify customers about prior data sales, and implement a “comprehensive privacy program” to address prior conduct.

Reached for comment, Avast provided a statement that noted the company’s closure of Jumpshot in early 2020. “We are committed to our mission of protecting and empowering people’s digital lives. While we disagree with the FTC’s allegations and characterization of the facts, we are pleased to resolve this matter and look forward to continuing to serve our millions of customers around the world,” the statement reads.

Data was far from anonymous

The FTC’s complaint (PDF) notes that after Avast acquired then-antivirus competitor Jumpshot in early 2014, it rebranded the company as an analytics seller. Jumpshot advertised that it offered “unique insights” into the habits of “[m]ore than 100 million online consumers worldwide.” That included the ability to “[s]ee where your audience is going before and after they visit your site or your competitors’ sites, and even track those who visit a specific URL.”

While Avast and Jumpshot claimed that the data had identifying information removed, the FTC argues this was “not sufficient.” Jumpshot offerings included a unique device identifier for each browser, included in data like an “All Clicks Feed,” “Search Plus Click Feed,” “Transaction Feed,” and more. The FTC’s complaint detailed how various companies would purchase these feeds, often with the express purpose of pairing them with a company’s own data, down to an individual user basis. Some Jumpshot contracts attempted to prohibit re-identifying Avast users, but “those prohibitions were limited,” the complaint notes.
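To make the re-identification mechanics concrete, here is a small, entirely hypothetical Python example of how a data buyer could join a device-ID-keyed clickstream feed against its own first-party order log. Every table, column, and value below is invented; the point is only that a persistent identifier plus precise timestamps can make nominally anonymous browsing data linkable to a named person.

```python
# Hypothetical illustration of the re-identification risk described by the FTC.
# A buyer joins an "anonymized" clickstream feed (stable device_id + timestamps)
# against its own first-party order records. All names and values are invented.
import pandas as pd

# "Anonymized" feed: no names, but a persistent device_id and full URLs.
clickstream = pd.DataFrame({
    "device_id": ["abc123", "abc123", "def456"],
    "url": [
        "/checkout/confirm?order=9912",
        "/search?q=knee+brace",
        "/checkout/confirm?order=9944",
    ],
    "ts": pd.to_datetime([
        "2019-05-01 14:02:11", "2019-05-01 14:05:40", "2019-05-02 09:30:05",
    ]),
})

# The buyer's own data: real identities tied to order numbers and times.
orders = pd.DataFrame({
    "order_id": ["9912", "9944"],
    "customer_email": ["alice@example.com", "bob@example.com"],
    "ts": pd.to_datetime(["2019-05-01 14:02:13", "2019-05-02 09:30:06"]),
})

# Pull order numbers out of the clickstream URLs, then join on them.
clickstream["order_id"] = clickstream["url"].str.extract(r"order=(\d+)", expand=False)
linked = (
    clickstream.dropna(subset=["order_id"])
    .merge(orders, on="order_id", suffixes=("_click", "_order"))
)

# Each row now ties a supposedly anonymous device_id to a real customer, and that
# device_id unlocks the rest of that person's browsing history in the feed.
print(linked[["device_id", "customer_email"]])
```

This is the kind of pairing of a feed “with a company’s own data” that the complaint describes.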

The connection between Avast and Jumpshot became broadly known in January 2020, after reporting by Vice and PC Magazine revealed that clients, including Home Depot, Google, Microsoft, Pepsi, and McKinsey, were buying data from Jumpshot, as seen in confidential contracts. Data obtained by the publications showed that buyers could purchase data including Google Maps look-ups, individual LinkedIn and YouTube pages, porn sites, and more. “It’s very granular, and it’s great data for these companies, because it’s down to the device level with a timestamp,” one source told Vice.

The FTC’s complaint provides more detail on how Avast, on its own web forums, sought to downplay its Jumpshot presence. Avast suggested both that only aggregated data was provided to Jumpshot and that users were informed during product installation about collecting data to “better understand new and interesting trends.” Neither of these claims proved true, the FTC suggests. And the data collected was far from harmless, given its re-identifiable nature:

For example, a sample of just 100 entries out of trillions retained by Respondents showed visits by consumers to the following pages: an academic paper on a study of symptoms of breast cancer; Sen. Elizabeth Warren’s presidential candidacy announcement; a CLE course on tax exemptions; government jobs in Fort Meade, Maryland with a salary greater than $100,000; a link (then broken) to the mid-point of a FAFSA (financial aid) application; directions on Google Maps from one location to another; a Spanish-language children’s YouTube video; a link to a French dating website, including a unique member ID; and cosplay erotica.

In a blog post accompanying its announcement, FTC Senior Attorney Lesley Fair writes that, in addition to the dual nature of Avast’s privacy products and Jumpshot’s extensive tracking, the FTC is increasingly viewing browsing data as “highly sensitive information that demands the utmost care.” “Data about the websites a person visits isn’t just another corporate asset open to unfettered commercial exploitation,” Fair writes.

FTC commissioners voted 3-0 to issue the complaint and accept the proposed consent agreement. Chair Lina Khan, along with commissioners Rebecca Slaughter and Alvaro Bedoya, issued a statement on their vote.

In the time since the conduct detailed in the FTC’s complaint and the closure of its Jumpshot business, Avast has been acquired by Gen Digital, a firm that contains Norton, Avast, LifeLock, Avira, AVG, CCleaner, and ReputationDefender, among other security businesses.

Disclosure: Condé Nast, Ars Technica’s parent company, received data from Jumpshot before its closure.


India’s plan to let 1998 digital trade deal expire may worsen chip shortage


India’s plan to let a moratorium on imposing customs duties on cross-border digital e-commerce transactions expire may end up hurting India’s more ambitious plans to become a global chip leader in the next five years, Reuters reported.

It could also worsen the global chip shortage by spiking semiconductor industry costs at a time when many governments worldwide are investing heavily in expanding domestic chip supplies in efforts to keep up with rapidly advancing technologies.

Early next week, world leaders will convene at a World Trade Organization (WTO) meeting, just before the deadline to extend the moratorium hits in March. In place since 1998, the moratorium has been renewed every two years—but India has grown concerned that it’s losing significant revenue by not imposing taxes as demand rises for digital goods like movies, e-books, or games.

Hoping to change India’s mind, a global consortium of semiconductor industry associations known as the World Semiconductor Council (WSC) sent a letter to Indian Prime Minister Narendra Modi on Thursday.

Reuters reviewed the letter, reporting that the WSC warned Modi that ending the moratorium “would mean tariffs on digital e-commerce and an innumerable number of transfers of chip design data across countries, raising costs and worsening chip shortages.”

Pointing to Modi’s $10 billion semiconductor incentive package—which Modi has said is designed to advance India’s industry through “giant leaps” in its mission to become a technology superpower—the WSC cautioned Modi that pushing for customs duties may dash those global chip leader dreams.

Studies suggest that India should be offering tax incentives, not potentially threatening to impose duties on chip design data. Those include a report from the Information Technology and Innovation Foundation (ITIF), released earlier this year after the Semiconductor Industry Association and the India Electronics and Semiconductor Association commissioned it.

ITIF’s goal was to evaluate “India’s existing semiconductor ecosystem and policy frameworks” and offer “recommendations to facilitate longer-term strategic development of complementary semiconductor ecosystems in the US and India,” a press release said, partly in order to “deepen commercial ties” between the countries. The Prime Minister’s Office (PMO) has also reported a similar goal to deepen commercial ties with the European Union.

Among recommendations to “strengthen India’s semiconductor competitiveness,” ITIF’s report encouraged India to advance cooperation with the US and introduce policy reforms that “lower the cost of doing business for semiconductor companies in India”—by “offering tax breaks to chip companies” and “expediting clearance times for goods entering the country.”

Because the duties could spike chip industry costs at a time when global cross-border data transmissions are expected to reach $11 trillion by 2025, the WSC wrote, they may “impede India’s efforts to advance its semiconductor industry and attract semiconductor investment” and could negatively impact the “more than 20 percent of the world’s semiconductor design workforce” that is based in India.

The prime minister’s office did not immediately respond to Ars’ request to comment.


Reddit admits more moderator protests could hurt its business

SEC filing —

Losing third-party tools “could harm our moderators’ ability to review content…”


Reddit filed to go public on Thursday (PDF), revealing various details of the social media company’s inner workings. Among the revelations, Reddit acknowledged the threat of future user protests and the value of third-party Reddit apps.

On July 1, Reddit enacted API rule changes—including new, expensive pricing—that resulted in many third-party Reddit apps closing. Disturbed by the changes, their timeline, and concerns that Reddit wasn’t properly appreciating third-party app developers and moderators, thousands of Reddit users protested by making the subreddits they moderate private or read-only, and/or by engaging in other forms of protest, such as only discussing John Oliver or porn.

Protests went on for weeks and, at their onset, crashed Reddit for three hours. At the time, Reddit CEO Steve Huffman said the protests did not have “any significant revenue impact so far.”

In its filing with the Securities and Exchange Commission (SEC), though, Reddit acknowledged that another such protest could hurt its pockets:

While these activities have not historically had a material impact on our business or results of operations, similar actions by moderators and/or their communities in the future could adversely affect our business, results of operations, financial condition, and prospects.

The company also said that bad publicity and media coverage, such as the kind that stemmed from the API protests, could be a risk to Reddit’s success. The Form S-1 said bad PR around Reddit, including its practices, prices, and mods, “could adversely affect the size, demographics, engagement, and loyalty of our user base,” adding:

For instance, in May and June 2023, we experienced negative publicity as a result of our API policy changes.

Reddit’s filing also said that negative publicity and moderators disrupting the normal operation of subreddits could hurt user growth and engagement goals. The company highlighted financial incentives associated with having good relationships with volunteer moderators, noting that if enough mods decided to disrupt Reddit (like they did when they led protests last year), “results of operations, financial condition, and prospects could be adversely affected.” Reddit infamously forcibly removed moderators from their posts during the protests, saying they broke Reddit rules by refusing to reopen the subreddits they moderated.

“As communities grow, it can become more and more challenging for communities to find qualified people willing to act as moderators,” the filing says.

Losing third-party tools could hurt Reddit’s business

Much of the momentum for last year’s protests came from users, including long-time Redditors, mods, and people with accessibility needs, feeling that third-party apps were necessary to enjoyably and properly access and/or moderate Reddit. Reddit’s own technology has disappointed users in the past (leading some to cling to Old Reddit, which uses an older interface, for example). In its SEC filing, Reddit pointed to the value of third-party “tools” despite its API pricing killing off many of the most popular examples.

Reddit’s filing discusses losing moderators as a business risk and notes how important third-party tools are in maintaining mods:

While we provide tools to our communities to manage their subreddits, our moderators also rely on their own and third-party tools. Any disruption to, or lack of availability of, these third-party tools could harm our moderators’ ability to review content and enforce community rules. Further, if we are unable to provide effective support for third-party moderation tools, or develop our own such tools, our moderators could decide to leave our platform and may encourage their communities to follow them to a new platform, which would adversely affect our business, results of operations, financial condition, and prospects.

Since Reddit’s API policy changes, a small number of third-party Reddit apps remain available. But some of the remaining third-party Reddit app developers have previously told Ars Technica that they’re unsure of their app’s tenability under Reddit’s terms. Nondisclosure agreement requirements and the lack of a finalized developer platform also drive uncertainty around the longevity of the third-party Reddit app ecosystem, according to devs Ars spoke with this year.


ISPs keep giving false broadband coverage data to the FCC, groups say


Internet service providers are still providing false coverage information to the Federal Communications Commission, and the FCC process for challenging errors isn’t good enough to handle all the false claims, the agency was told by several groups this week.

The latest complaints focus on fixed wireless providers that offer home Internet service via signals sent to antennas. ISPs that compete against these wireless providers say that exaggerated coverage data prevents them from obtaining government funding designed to subsidize the building of networks in areas with limited coverage.

The wireless company LTD Broadband (which has been renamed GigFire) came under particular scrutiny in an FCC filing submitted by the Accurate Broadband Data Alliance, a group of about 50 ISPs in the Midwest.

“A number of carriers, including LTD Broadband/GigFire LLC and others, continue to overreport Internet service availability, particularly in relation to fixed wireless network capabilities and reach,” the group said. “These errors and irregularities in the Map will hinder and, in many cases, prevent deployment of essential broadband services by redirecting funds away from areas truly lacking sufficient broadband.”

ISPs are required to submit coverage data for the FCC’s broadband map, and there is a challenge process in which false claims can be contested. The FCC recently sought comment on how well the challenge process is working.

CEO blasts “100-year-old telcos”

The Accurate Broadband Data Alliance accused GigFire of behaving badly in the challenge process, saying “LTD Broadband/GigFire LLC often continues to assert unrealistic broadband claims without evidence and even accuses the challenger of falsifying information during the challenge process.”

GigFire CEO Corey Hauer disputed the Accurate Broadband Data Alliance’s accusations. Hauer told Ars today that “GigFire evaluated over 5 million locations and established that 339,598 are eligible to get service and that is accurately reflected in our BDC [Broadband Data Collection] filings.”

Hauer said GigFire offers service in Illinois, Iowa, Minnesota, Missouri, Nebraska, North Dakota, South Dakota, Tennessee, and Wisconsin. The company’s service area is mostly wireless but includes about 20,000 homes passed by fiber lines, he said.

“GigFire wants to service as many customers as we can, but we have no interest in falsely telling customers that they qualify for service,” Hauer told us.

Hauer also said that “GigFire uses widely accepted wireless propagation models to compute our coverage. It’s just math, there is no way to game the system.” He said that telcos “feel they should get an additional wheelbarrow full of ratepayer money, and because of our coverage, they will not.”

“Many of these 100-year-old telcos were so used to being monopolies, that it appears they struggle with consumers that live in their legacy telco boundaries having competitive choices,” Hauer said.

Wireless claims hard to verify, groups say

Wireline providers have also exaggerated coverage, as we’ve reported. Comcast admitted to mistakes last year after previously insisting that false data it gave the FCC was correct. In another case, a small Ohio ISP called Jefferson County Cable admitted lying to the FCC about the size of its network in order to block funding to rivals.

But it can be especially hard to verify the claims made by fixed wireless providers, several groups representing wireline providers told the FCC. The FCC says that fixed wireless providers can submit either lists of locations or polygon coverage maps based on propagation modeling. GigFire submitted a list of locations.

Both the list and polygon models drew criticism from telco groups. The Minnesota Telecom Alliance told the FCC this week that “the highly generalized nature of the polygon coverage maps has tempted some competitive fixed wireless providers to exaggerate the extent of their service areas and the speeds of their services.”

The Minnesota group said that “polygon coverage maps are able to show only an alleged unsubsidized fixed wireless competitor’s theoretical potential signal coverage over a general area,” and don’t account for problems like “line-of-sight obstructions, terrain, foliage, weather conditions, and busy hour congestion” that can restrict coverage at specific locations.
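As a rough illustration of why “it’s just math” propagation estimates can overstate real-world serviceability, here is a toy link-budget check in Python built on the standard free-space path loss formula. All radio parameters are hypothetical and real planning tools use far more detailed models; the sketch only shows that the modeled signal margin shrinks quickly with distance, before any of the terrain, foliage, or congestion losses the MTA describes are even counted.

```python
# Toy link-budget check using free-space path loss (FSPL). This idealized formula
# ignores terrain, foliage, buildings, and busy-hour congestion, so it represents a
# best case; every number below is hypothetical.
import math

def fspl_db(distance_km: float, freq_mhz: float) -> float:
    """Free-space path loss in dB: 20*log10(d_km) + 20*log10(f_MHz) + 32.44."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

TX_POWER_DBM = 30.0        # transmitter output (assumed)
ANTENNA_GAINS_DB = 18.0    # combined tx + rx antenna gain (assumed)
RX_THRESHOLD_DBM = -80.0   # minimum signal for the advertised tier (assumed)

for km in (2, 5, 10, 15):
    rx = TX_POWER_DBM + ANTENNA_GAINS_DB - fspl_db(km, 5800)  # 5.8 GHz link
    status = "modeled as serviceable" if rx >= RX_THRESHOLD_DBM else "below threshold"
    print(f"{km:>2} km: {rx:6.1f} dBm ({status})")
```

Real-world losses come on top of this ideal figure, which is why a location inside a modeled polygon may still be unserviceable until a technician checks line of sight.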

“MTA members are aware that many fixed wireless broadband service providers are unable to determine whether they actually can serve a specific location and what level of service they can provide to that location unless and until they send a technician to the site to attempt to install service,” the group said.

The Minnesota telco group complained that inaccurate filings reduce the number of locations at which a telco can receive Universal Service Fund (USF) money. It is often virtually impossible to successfully “challenge the accuracy of fixed wireless service availability claims that can adversely impact USF support,” the group said.


Snapchat isn’t liable for connecting 12-year-old to convicted sex offenders


A judge has dismissed a complaint from a parent and guardian of a girl, now 15, who was sexually assaulted when she was 12 years old after Snapchat recommended that she connect with convicted sex offenders.

According to the court filing, the abuse that the girl, C.O., experienced on Snapchat happened soon after she signed up for the app in 2019. Through its “Quick Add” feature, Snapchat “directed her” to connect with “a registered sex offender using the profile name JASONMORGAN5660.” After a little more than a week on the app, C.O. was bombarded with inappropriate images and subjected to sextortion and threats before the adult user pressured her to meet up, then raped her. Cops arrested the adult user the next day, resulting in his incarceration, but his Snapchat account remained active for three years despite reports of harassment, the complaint alleged.

Two years later, at 14, C.O. connected with another convicted sex offender on Snapchat, a former police officer who offered to give C.O. a ride to school and then sexually assaulted her. The second offender is also currently incarcerated, the judge’s opinion noted.

The lawsuit painted a picture of Snapchat’s ongoing neglect of minors it knows are being targeted by sexual predators. Prior to C.O.’s attacks, both adult users sent and requested sexually explicit photos, seemingly without the app detecting any child sexual abuse materials exchanged on the platform. C.O. had previously reported other adult accounts sending her photos of male genitals, but Snapchat allegedly “did nothing to block these individuals from sending her inappropriate photographs.”

Among other complaints, C.O.’s lawsuit alleged that Snapchat’s algorithm for its “Quick Add” feature was the problem. The feature allegedly detects when adult accounts are seeking to connect with young girls and, by design, recklessly sends more young girls their way—continually directing sexual predators toward vulnerable targets. Snapchat is allegedly aware of these abuses and, therefore, should be held liable for harm caused to C.O., the lawsuit argued.

Although C.O.’s case raised difficult questions, Judge Barbara Bellis ultimately agreed with Snapchat that Section 230 of the Communications Decency Act barred all claims and shielded Snap because “the allegations of this case fall squarely within the ambit of the immunity afforded to” platforms publishing third-party content.

According to Bellis, C.O.’s family had “clearly alleged” that Snap had failed to design its recommendation systems to block young girls from receiving messages from sexual predators. Specifically, Section 230 immunity shields Snap from liability in this case because Bellis considered the messages exchanged to be third-party content. Snapchat designing its recommendation systems to deliver content is a protected activity, Bellis ruled.

Internet law professor Eric Goldman wrote in his blog that Bellis’ “well-drafted and no-nonsense opinion” is “grounded” in precedent. Pointing to an “extremely similar” 2008 case against MySpace—”which reached the same outcome that Section 230 applies to offline sexual abuse following online messaging”—Goldman suggested that “the law has been quite consistent for a long time.”

However, as this case was being decided, a seemingly conflicting ruling in a Los Angeles court found that “Section 230 didn’t protect Snapchat from liability for allegedly connecting teens with drug dealers,” MediaPost noted. Bellis acknowledged this outlier opinion but did not appear to consider it persuasive.

Yet, at the end of her opinion, Bellis seemed to take aim at Section 230 as perhaps being too broad.

She quoted a ruling from the First Circuit Court of Appeals, which noted that some Section 230 cases, presumably like C.O.’s, are “hard” for courts not because “the legal issues defy resolution,” but because Section 230 requires that the court “deny relief to plaintiffs whose circumstances evoke outrage.” She then went on to quote an appellate court ruling on a similarly “difficult” Section 230 case that warned “without further legislative action,” there is “little” that courts can do “but join with other courts and commentators in expressing concern” with Section 230’s “broad scope.”

Ars could not immediately reach Snapchat or lawyers representing C.O.’s family for comment.


Does Fubo’s antitrust lawsuit against ESPN, Fox, and WBD stand a chance?

Collaborating conglomerates —

Fubo: Media giants’ anticompetitive tactics already killed PS Vue, other streamers.


Fubo is suing Fox Corporation, The Walt Disney Company, and Warner Bros. Discovery (WBD) over their plans to launch a unified sports streaming app. Fubo, a live sports streaming service that has business relationships with the three companies, claims the firms have engaged in anticompetitive practices for years, leading to higher prices for consumers.

In an attempt to understand how much potential the allegations have to derail the app’s launch, Ars Technica read the 73-page sealed complaint and sought opinions from some antitrust experts. While some of Fubo’s allegations could be hard to prove, Fubo isn’t the only one concerned about the joint app’s potential to make it hard for streaming services to compete fairly.

Fubo wants to kill ESPN, Fox, and WBD’s joint sports app

Earlier this month, Disney, which owns ESPN, WBD (whose sports channels include TBS and TNT), and Fox, which owns Fox broadcast stations and Fox Sports channels like FS1, announced plans to launch an equally owned live sports streaming app this fall. Pricing hasn’t been confirmed but is expected to be in the $30-to-$50-per-month range. Fubo, for comparison, starts at $80 per month for English-language channels.

Via a lawsuit filed on Tuesday in US District Court for the Southern District of New York, Fubo is seeking an injunction against the app and joint venture (JV), a jury trial, and damages for an unspecified figure. There have been reports that Fubo was suing the three companies for $1 billion, but a Fubo spokesperson confirmed to Ars that this figure is incorrect.

“Insurmountable barriers”

Fubo, which was founded in 2015, is arguing that the three companies’ proposed app will result in higher prices for live sports streaming customers.

The New York City-headquartered company claims the collaboration would preclude other distributors of live sports content, like Fubo, from competing fairly. The lawsuit also claims that distributors like Fubo would see higher prices and worse agreements associated with licensing sports content due to the JV, which could even stop licensing critical sports content to companies like Fubo. Fubo’s lawsuit says that “once they have combined forces, Defendants’ incentive to exclude Fubo and other rivals will only increase.”

Disney, Fox, and WBD haven’t disclosed specifics about how their JV will impact how they license the rights to sports events to companies outside of their JV; however, they have claimed that they will license their respective content to the JV on a non-exclusive basis.

That statement doesn’t specify, though, whether the companies will try to bundle content together forcibly.

“If the three firms get together and say, ‘We’re no longer going to provide to you these streams for resale separately. You must buy a bundle as a condition of getting any of them,’ that would … be an anti-competitive bundle that can be challenged under antitrust law,” Hal Singer, an economics professor at The University of Utah and managing director at Econ One, told Ars.

Lee Hepner, counsel at the American Economic Liberties Project, shared similar concerns about the JV with Ars:

Joint ventures raise the same concerns as mergers when the effect is to shut out competitors and gain power to raise prices and reduce quality. Sports streaming is an extremely lucrative market, and a joint venture between these three powerhouses will foreclose the ability of rivals like Fubo to compete on fair terms.

Fubo’s lawsuit cites research from Citi, finding that, combined, ESPN (26.8 percent), Fox (17.3 percent), and WBD (9.9 percent) own 54 percent of the US sports rights market.

In a statement, Fubo co-founder and CEO David Gandler said the three companies “are erecting insurmountable barriers that will effectively block any new competitors” and will leave sports streamers without options.

The US Department of Justice is reportedly eyeing the JV for an antitrust review and plans to look at the finalized terms, according to a February 15 Bloomberg report citing two anonymous “people familiar with the process.”


Twitter security staff kept firm in compliance by disobeying Musk, FTC says

Close call —

Lina Khan: Musk demanded “actions that would have violated the FTC’s Order.”

Elon Musk at the New York Times DealBook Summit on November 29, 2023, in New York City.

Getty Images | Michael Santiago

Twitter employees prevented Elon Musk from violating the company’s privacy settlement with the US government, according to Federal Trade Commission Chair Lina Khan.

After Musk bought Twitter in late 2022, he gave Bari Weiss and other journalists access to company documents in the so-called “Twitter Files” incident. The access given to outside individuals raised concerns that Twitter (which is currently named X) violated a 2022 settlement with the FTC, which has requirements designed to prevent repeats of previous security failures.

Some of Twitter’s top privacy and security executives also resigned shortly after Musk’s purchase, citing concerns that Musk’s rapid changes could cause violations of the settlement.

FTC staff deposed former Twitter employees and “learned that the access provided to the third-party individuals turned out to be more limited than the individuals’ tweets and other public reporting had indicated,” Khan wrote in a letter sent today to US Rep. Jim Jordan (R-Ohio). Khan’s letter said the access was limited because employees refused to comply with Musk’s demands:

The deposition testimony revealed that in early December 2022, Elon Musk had reportedly directed staff to grant an outside third-party individual “full access to everything at Twitter… No limits at all.” Consistent with Musk’s direction, the individual was initially assigned a company laptop and internal account, with the intent that the third-party individual be given “elevated privileges” beyond what an average company employee might have.

However, based on a concern that such an arrangement would risk exposing nonpublic user information in potential violation of the FTC’s Order, longtime information security employees at Twitter intervened and implemented safeguards to mitigate the risks. Ultimately the third-party individuals did not receive direct access to Twitter’s systems, but instead worked with other company employees who accessed the systems on the individuals’ behalf.

Khan: FTC “was right to be concerned”

Jordan is chair of the House Judiciary Committee and has criticized the investigation, claiming that “the FTC harassed Twitter in the wake of Mr. Musk’s acquisition.” Khan’s letter to Jordan today argues that the FTC investigation was justified.

“The FTC’s investigation confirmed that staff was right to be concerned, given that Twitter’s new CEO had directed employees to take actions that would have violated the FTC’s Order,” Khan wrote. “Once staff learned that the FTC’s Order had worked to ensure that Twitter employees took appropriate measures to protect consumers’ private information, compliance staff made no further inquiries to Twitter or anyone else concerning this issue.”

Khan also wrote that deep staff cuts following the Musk acquisition, and resignations of Twitter’s top privacy and compliance officials, meant that “there was no one left at the company responsible for interpreting and modifying data policies and practices to ensure Twitter was complying with the FTC’s Order to safeguard Americans’ personal data.” The letter continued:

During staff’s evaluation of the workforce reductions, one of the company’s recently departed lead privacy and security experts testified that Twitter Blue was being implemented too quickly so that the proper “security and privacy review was not conducted in accordance with the company’s process for software development.” Another expert testified that he had concerns about Mr. Musk’s “commitment to overall security and privacy of the organization.” Twitter, meanwhile, filed a motion seeking to eliminate the FTC Order that protected the privacy and security of Americans’ data. Fortunately for Twitter’s millions of users, that effort failed in court.

FTC still trying to depose Musk

While no violation was found in this case, the FTC isn’t done investigating. When contacted by Ars, an FTC spokesperson said the agency cannot rule out bringing lawsuits against Musk’s social network for violations of the settlement or US law.

“When we heard credible public reports of potential violations of protections for Twitter users’ data, we moved swiftly to investigate,” the FTC said in a statement today. “The order remains in place and the FTC continues to deploy the order’s tools to protect Twitter users’ data and ensure the company remains in compliance.”

The FTC also said it is continuing attempts to depose Musk. In July 2023, Musk’s X Corp. asked a federal court for an order that would terminate the settlement and prevent the FTC from deposing Musk. The court denied both requests in November. In a filing, US government lawyers said the FTC investigation had “revealed a chaotic environment at the company that raised serious questions about whether and how Musk and other leaders were ensuring X Corp.’s compliance with the 2022 Administrative Order.”

We contacted X today, but an auto-reply informed us that the company was busy and asked that we check back later.


Court blocks $1 billion copyright ruling that punished ISP for its users’ piracy


A federal appeals court today overturned a $1 billion piracy verdict that a jury handed down against cable Internet service provider Cox Communications in 2019. Judges rejected Sony’s claim that Cox profited directly from copyright infringement committed by users of Cox’s cable broadband network.

Appeals court judges didn’t let Cox off the hook entirely, but they vacated the damages award and ordered a new damages trial, which will presumably result in a significantly smaller amount to be paid to Sony and other copyright holders. Universal and Warner are also plaintiffs in the case.

“We affirm the jury’s finding of willful contributory infringement,” said a unanimous decision by a three-judge panel at the US Court of Appeals for the 4th Circuit. “But we reverse the vicarious liability verdict and remand for a new trial on damages because Cox did not profit from its subscribers’ acts of infringement, a legal prerequisite for vicarious liability.”

If the correct legal standard had been used in the district court, “no reasonable jury could find that Cox received a direct financial benefit from its subscribers’ infringement of Plaintiffs’ copyrights,” judges wrote.

The case began when Sony and other music copyright holders sued Cox, claiming that it didn’t adequately fight piracy on its network and failed to terminate repeat infringers. A US District Court jury in the Eastern District of Virginia found the ISP liable for infringement of 10,017 copyrighted works.

Copyright owners want ISPs to disconnect users

Cox’s appeal was supported by advocacy groups concerned that the big-money judgment could force ISPs to disconnect more Internet users based merely on accusations of copyright infringement. Groups such as the Electronic Frontier Foundation also called the ruling legally flawed.

“When these music companies sued Cox Communications, an ISP, the court got the law wrong,” the EFF wrote in 2021. “It effectively decided that the only way for an ISP to avoid being liable for infringement by its users is to terminate a household or business’s account after a small number of accusations—perhaps only two. The court also allowed a damages formula that can lead to nearly unlimited damages, with no relationship to any actual harm suffered. If not overturned, this decision will lead to an untold number of people losing vital Internet access as ISPs start to cut off more and more customers to avoid massive damages.”

In today’s 4th Circuit ruling, appeals court judges wrote that “Sony failed, as a matter of law, to prove that Cox profits directly from its subscribers’ copyright infringement.”

A defendant may be vicariously liable for a third party’s copyright infringement if it profits directly from it and is in a position to supervise the infringer, the ruling said. Cox argued that it doesn’t profit directly from infringement because it receives the same monthly fee from subscribers whether they illegally download copyrighted files or not, the ruling noted.

The question in this type of case is whether there is a causal relationship between the infringement and the financial benefit. “If copyright infringement draws customers to the defendant’s service or incentivizes them to pay more for their service, that financial benefit may be profit from infringement. But in every case, the financial benefit to the defendant must flow directly from the third party’s acts of infringement to establish vicarious liability,” the court said.


Musk claims Neuralink patient doing OK with implant, can move mouse with brain

Neuralink brain implant —

Medical ethicists alarmed by Musk being “sole source of information” on patient.

A Neuralink implant.

Neuralink

Neuralink co-founder Elon Musk said the first human to be implanted with the company’s brain chip is now able to move a mouse cursor just by thinking.

“Progress is good, and the patient seems to have made a full recovery, with no ill effects that we are aware of. Patient is able to move a mouse around the screen by just thinking,” Musk said Monday during an X Spaces event, according to Reuters.

Musk’s update came a few weeks after he announced that Neuralink implanted a chip into the human. The previous update was also made on X, the Musk-owned social network formerly named Twitter.

Musk reportedly said during yesterday’s chat, “We’re trying to get as many button presses as possible from thinking. So that’s what we’re currently working on is: can you get left mouse, right mouse, mouse down, mouse up… We want to have more than just two buttons.”
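For context on what cursor control “by just thinking” usually involves in brain-computer interface research, here is a generic, synthetic-data sketch of a linear velocity decoder in Python. It is a textbook-style illustration of the broad approach (mapping neural firing rates to intended cursor velocity), not Neuralink’s algorithm, and nothing in it reflects the company’s actual hardware or signal processing.

```python
# Generic BCI-style sketch: fit a linear map from neural firing rates to cursor
# velocity, then decode new activity. Purely synthetic data; not Neuralink's method.
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_channels = 500, 64

# Synthetic "calibration" session: firing rates X and the cursor velocities (vx, vy)
# associated with each time bin.
true_weights = rng.normal(size=(n_channels, 2))
X = rng.poisson(lam=5.0, size=(n_samples, n_channels)).astype(float)
velocities = X @ true_weights + rng.normal(scale=2.0, size=(n_samples, 2))

# Least-squares decoder: weights mapping firing rates -> (vx, vy).
W, *_ = np.linalg.lstsq(X, velocities, rcond=None)

# At run time, each new bin of firing rates yields a velocity command. A discrete
# action such as a "click" would typically come from a separate classifier or a
# threshold on a decoded signal.
new_rates = rng.poisson(lam=5.0, size=(1, n_channels)).astype(float)
vx, vy = (new_rates @ W)[0]
print(f"Decoded cursor velocity: ({vx:.2f}, {vy:.2f})")
```

Distinguishing several button actions, as Musk describes, is usually framed as adding a classification stage on top of this kind of continuous decoder.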

Neuralink itself doesn’t seem to have issued any statement on the patient’s progress. We contacted the company today and will update this article if we get a response.

“Basic ethical standards” not met

Neuralink’s method of releasing information was criticized last week by Arthur Caplan, a bioethics professor and head of the Division of Medical Ethics at NYU Grossman School of Medicine, and Jonathan Moreno, a University of Pennsylvania medical ethics professor.

“Science by press release, while increasingly common, is not science,” Caplan and Moreno wrote in an essay published by the nonprofit Hastings Center. “When the person paying for a human experiment with a huge financial stake in the outcome is the sole source of information, basic ethical standards have not been met.”

Caplan and Moreno acknowledged that Neuralink and Musk seem to be “in the clear” legally:

Assuming that some brain-computer interface device was indeed implanted in some patient with severe paralysis by some surgeons somewhere, it would be reasonable to expect some formal reporting about the details of an unprecedented experiment involving a vulnerable person. But unlike drug studies in which there are phases that must be registered in a public database, the Food and Drug Administration does not require reporting of early feasibility studies of devices. From a legal standpoint Musk’s company is in the clear, a fact that surely did not escape the tactical notice of his company’s lawyers.

But they argue that opening “the brain of a living human being to insert a device” should have been accompanied with more public detail. There is an ethical obligation “to avoid the risk of giving false hope to countless thousands of people with serious neurological disabilities,” they wrote.

A brain implant could have complications that leave a patient in worse condition, the ethics professors noted. “We are not even told what plans there are to remove the device if things go wrong or the subject simply wants to stop,” Caplan and Moreno wrote. “Nor do we know the findings of animal research that justified beginning a first-in-human experiment at this time, especially since it is not lifesaving research.”

Clinical trial still to come

Neuralink has been criticized for alleged mistreatment of animals in research and was reportedly fined $2,480 for violating US Department of Transportation rules on the movement of hazardous materials after inspections of company facilities last year.

People “should continue to be skeptical of the safety and functionality of any device produced by Neuralink,” the nonprofit Physicians Committee for Responsible Medicine said after last month’s announcement of the first implant.

“The Physicians Committee continues to urge Elon Musk and Neuralink to shift to developing a noninvasive brain-computer interface,” the group said. “Researchers elsewhere have already made progress to improve patient health using such noninvasive methods, which do not come with the risk of surgical complications, infections, or additional operations to repair malfunctioning implants.”

In May 2023, Neuralink said it obtained Food and Drug Administration approval for clinical trials. The company’s previous attempt to gain approval was reportedly denied by the FDA over safety concerns and other “deficiencies.”

In September, the company said it was recruiting volunteers, specifically people with quadriplegia due to cervical spinal cord injury or amyotrophic lateral sclerosis. Neuralink said the first human clinical trial for PRIME (Precise Robotically Implanted Brain-Computer Interface) will evaluate the safety of its implant and surgical robot, “and assess the initial functionality of our BCI [brain-computer interface] for enabling people with paralysis to control external devices with their thoughts.”


EU accuses TikTok of failing to stop kids pretending to be adults

Getting TikTok’s priorities straight —

TikTok becomes the second platform suspected of Digital Services Act breaches.


The European Commission (EC) is concerned that TikTok isn’t doing enough to protect kids, alleging that the short-video app may be sending kids down rabbit holes of harmful content while making it easy for kids to pretend to be adults and avoid the protective content filters that do exist.

The allegations came Monday when the EC announced a formal investigation into how TikTok may be breaching the Digital Services Act (DSA) “in areas linked to the protection of minors, advertising transparency, data access for researchers, as well as the risk management of addictive design and harmful content.”

“We must spare no effort to protect our children,” Thierry Breton, European Commissioner for Internal Market, said in the press release, reiterating that the “protection of minors is a top enforcement priority for the DSA.”

This makes TikTok the second platform investigated for possible DSA breaches after X (aka Twitter) came under fire last December. Both are being scrutinized after submitting transparency reports in September that the EC said failed to satisfy the DSA’s strict standards on predictable things like not providing enough advertising transparency or data access for researchers.

But while X is additionally being investigated over alleged dark patterns and disinformation—following accusations last October that X wasn’t stopping the spread of Israel/Hamas disinformation—it’s TikTok’s young user base that appears to be the focus of the EC’s probe into its platform.

“As a platform that reaches millions of children and teenagers, TikTok must fully comply with the DSA and has a particular role to play in the protection of minors online,” Breton said. “We are launching this formal infringement proceeding today to ensure that proportionate action is taken to protect the physical and emotional well-being of young Europeans.”

Likely over the coming months, the EC will request more information from TikTok, picking apart its DSA transparency report. The probe could require interviews with TikTok staff or inspections of TikTok’s offices.

Upon concluding its investigation, the EC could require TikTok to take interim measures to fix any issues that are flagged. The Commission could also make a decision regarding non-compliance, potentially subjecting TikTok to fines of up to 6 percent of its global turnover.

An EC press officer, Thomas Regnier, told Ars that the Commission suspected that TikTok “has not diligently conducted” risk assessments to properly maintain mitigation efforts protecting “the physical and mental well-being of their users, and the rights of the child.”

In particular, its algorithm may risk “stimulating addictive behavior,” and its recommender systems “might drag its users, in particular minors and vulnerable users, into a so-called ‘rabbit hole’ of repetitive harmful content,” Regnier told Ars. Further, TikTok’s age verification system may be subpar, with the EU alleging that TikTok perhaps “failed to diligently assess the risk of 13-17-year-olds pretending to be adults when accessing TikTok,” Regnier said.

To better protect TikTok’s young users, the EU’s investigation could force TikTok to update its age-verification system and overhaul its default privacy, safety, and security settings for minors.

“In particular, the Commission suspects that the default settings of TikTok’s recommender systems do not ensure a high level of privacy, security, and safety of minors,” Regnier said. “The Commission also suspects that the default privacy settings that TikTok has for 16-17-year-olds are not the highest by default, which would not be compliant with the DSA, and that push notifications are, by default, not switched off for minors, which could negatively impact children’s safety.”

TikTok could avoid steep fines by committing to remedies recommended by the EC at the conclusion of its investigation.

Regnier told Ars that the EC does not comment on ongoing investigations, but its probe into X has spanned three months so far. Because the DSA does not provide any deadlines that may speed up these kinds of enforcement proceedings, ultimately, the duration of both investigations will depend on how much “the company concerned cooperates,” the EU’s press release said.

A TikTok spokesperson told Ars that TikTok “would continue to work with experts and the industry to keep young people on its platform safe,” confirming that the company “looked forward to explaining this work in detail to the European Commission.”

“TikTok has pioneered features and settings to protect teens and keep under-13s off the platform, issues the whole industry is grappling with,” TikTok’s spokesperson said.

All online platforms are now required to comply with the DSA, but enforcement on TikTok began near the end of July 2023. A TikTok press release last August promised that the platform would be “embracing” the DSA. But in its transparency report, submitted the next month, TikTok acknowledged that the report only covered “one month of metrics” and may not satisfy DSA standards.

“We still have more work to do,” TikTok’s report said, promising that “we are working hard to address these points ahead of our next DSA transparency report.”


Report: Apple is about to be fined €500 million by the EU over music streaming

Competition concerns —

EC accuses Apple of abusing its market position after complaint by Spotify.


Brussels is to impose its first-ever fine on tech giant Apple for allegedly breaking EU law over access to its music streaming services, according to five people with direct knowledge of the long-running investigation.

The fine, which is in the region of €500 million and is expected to be announced early next month, is the culmination of a European Commission antitrust probe into whether Apple has used its own platform to favor its services over those of competitors.

The probe is investigating whether Apple blocked apps from informing iPhone users of cheaper alternatives to access music subscriptions outside the App Store. It was launched after music-streaming app Spotify made a formal complaint to regulators in 2019.

The Commission will say Apple’s actions are illegal and go against the bloc’s rules that enforce competition in the single market, the people familiar with the case told the Financial Times. It will ban Apple’s practice of blocking music services from letting users outside its App Store switch to cheaper alternatives.

Brussels will accuse Apple of abusing its powerful position and imposing anti-competitive trading practices on rivals, the people said, adding that the EU would say the tech giant’s terms were “unfair trading conditions.”

It is one of the most significant financial penalties levied by the EU on Big Tech companies. A series of fines against Google, levied over several years and amounting to about 8 billion euros, is being contested in court.

Apple has never previously been fined for antitrust infringements by Brussels, but the company was hit in 2020 with a 1.1 billion-euro fine in France for alleged anti-competitive behavior. The penalty was revised down to 372 million euros after an appeal.

The EU’s action against Apple will reignite the war between Brussels and Big Tech at a time when companies are being forced to show how they are complying with landmark new rules aimed at opening competition and allowing small tech rivals to thrive.

Companies that are defined as gatekeepers, including Apple, Amazon, and Google, need to fully comply with these rules under the Digital Markets Act by early next month.

The act requires these tech giants to comply with more stringent rules and will force them to allow rivals to share information about their services.

There are concerns that the rules are not enabling competition as fast as some had hoped, although Brussels has insisted that changes require time.

Brussels formally charged Apple in the anti-competitive probe in 2021. The commission narrowed the scope of the investigation last year and abandoned a charge of pushing developers to use its own in-app payment system.

Apple last month announced changes to its iOS mobile software, App Store, and Safari browser in efforts to appease Brussels after long resisting such steps. But Spotify said at the time that Apple’s compliance was a “complete and total farce.”

Apple responded by saying that “the changes we’re sharing for apps in the European Union give developers choice—with new options to distribute iOS apps and process payments.”

In a separate antitrust case, Brussels is consulting with Apple’s rivals over the tech giant’s concessions to appease worries that it is blocking financial groups from its Apple Pay mobile system.

The timing of the Commission’s announcement has not yet been fixed, but it will not change the direction of the antitrust investigation, the people with knowledge of the situation said.

Apple, which can appeal to the EU courts, declined to comment on the forthcoming ruling but pointed to a statement a year ago when it said it was “pleased” the Commission had narrowed the charges and said it would address concerns while promoting competition.

It added: “The App Store has helped Spotify become the top music streaming service across Europe and we hope the European Commission will end its pursuit of a complaint that has no merit.”

The Commission—the executive body of the EU—declined to comment.

© 2024 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.
