Policy

Data broker allegedly selling de-anonymized info to face FTC lawsuit after all

The Federal Trade Commission has succeeded in keeping alive its first federal court case against a geolocation data broker that’s allegedly unfairly selling large quantities of data in violation of the FTC Act.

On Saturday, US District Judge Lynn Winmill denied Kochava’s motion to dismiss an amended FTC complaint, which he said plausibly argued that “Kochava’s data sales invade consumers’ privacy and expose them to risks of secondary harms by third parties.”

Winmill’s ruling followed his earlier dismissal of the FTC’s initial complaint, which the court said failed to adequately allege that Kochava’s data sales cause or are likely to cause a “substantial” injury to consumers.

The FTC has accused Kochava of selling “a substantial amount of data obtained from millions of mobile devices across the world”—allegedly combining precise geolocation data with a “staggering amount of sensitive and identifying information” without users’ knowledge or informed consent. This data, the FTC alleged, “is not anonymized and is linked or easily linkable to individual consumers” without mining “other sources of data.”

Kochava’s data sales allegedly allow its customers—whom the FTC noted often pay tens of thousands of dollars monthly—to target specific individuals by combining Kochava data sets. Using just Kochava data, marketers can create “highly granular” portraits of ad targets such as “a woman who visits a particular building, the woman’s name, email address, and home address, and whether the woman is African-American, a parent (and if so, how many children), or has an app identifying symptoms of cancer on her phone.” Just one of Kochava’s databases “contains ‘comprehensive profiles of individual consumers,’ with up to ‘300 data points’ for ‘over 300 million unique individuals,'” the FTC reported.

This harms consumers, the FTC alleged, in “two distinct ways”—by invading their privacy and by causing “an increased risk of suffering secondary harms, such as stigma, discrimination, physical violence, and emotional distress.”

In its amended complaint, the FTC overcame deficiencies in its initial complaint by citing specific examples of consumers already known to have been harmed by brokers sharing sensitive data without their consent. That included a Catholic priest who resigned after he was outed by a group using precise mobile geolocation data to track his personal use of Grindr and his movements to “LGBTQ+-associated locations.” The FTC also pointed to invasive practices by journalists using precise mobile geolocation data to identify and track military and law enforcement officers over time, as well as data brokers tracking “abortion-minded women” who visited reproductive health clinics to target them with ads about abortion and alternatives to abortion.

“Kochava’s practices intrude into the most private areas of consumers’ lives and cause or are likely to cause substantial injury to consumers,” the FTC’s amended complaint said.

The FTC is seeking a permanent injunction to stop Kochava from allegedly selling sensitive data without user consent.

Kochava dismissed the examples of consumer harms in the FTC’s amended complaint as “anecdotes” disconnected from its own activities. The data broker was seemingly so confident that Winmill would agree to dismiss the FTC’s amended complaint that the company sought sanctions against the FTC for what it construed as a “baseless” filing. According to Kochava, many of the FTC’s allegations were “knowingly false.”

Ultimately, the court found no evidence that the FTC’s complaints were baseless. Instead of dismissing the case and ordering the FTC to pay sanctions, Winmill wrote in his order that Kochava’s motion to dismiss “misses the point” of the FTC’s filing, which was to allege that Kochava’s data sales are “likely” to cause alleged harms. Because the FTC had “significantly” expanded factual allegations, the agency “easily” satisfied the plausibility standard to allege substantial harms were likely, Winmill said.

Kochava CEO and founder Charles Manning said in a statement provided to Ars that Kochava “expected” Winmill’s ruling and is “confident” that Kochava “will prevail on the merits.”

“This case is really about the FTC attempting to make an end-run around Congress to create data privacy law,” Manning said. “The FTC’s salacious hypotheticals in its amended complaint are mere scare tactics. Kochava has always operated consistently and proactively in compliance with all rules and laws, including those specific to privacy.”

In a press release announcing the FTC lawsuit in 2022, the director of the FTC’s Bureau of Consumer Protection, Samuel Levine, said that the FTC was determined to halt Kochava’s allegedly harmful data sales.

“Where consumers seek out health care, receive counseling, or celebrate their faith is private information that shouldn’t be sold to the highest bidder,” Levine said. “The FTC is taking Kochava to court to protect people’s privacy and halt the sale of their sensitive geolocation information.”


4chan daily challenge sparked deluge of explicit AI Taylor Swift images

4chan users who have made a game out of exploiting popular AI image generators appear to be at least partly responsible for the flood of fake images sexualizing Taylor Swift that went viral last month.

Graphika researchers—who study how communities are manipulated online—traced the fake Swift images to a 4chan message board that’s “increasingly” dedicated to posting “offensive” AI-generated content, The New York Times reported. Fans of the message board take part in daily challenges, Graphika reported, sharing tips to bypass AI image generator filters and showing no signs of stopping their game any time soon.

“Some 4chan users expressed a stated goal of trying to defeat mainstream AI image generators’ safeguards rather than creating realistic sexual content with alternative open-source image generators,” Graphika reported. “They also shared multiple behavioral techniques to create image prompts, attempt to avoid bans, and successfully create sexually explicit celebrity images.”

Ars reviewed a thread flagged by Graphika where users were specifically challenged to use Microsoft tools like Bing Image Creator and Microsoft Designer, as well as OpenAI’s DALL-E.

“Good luck,” the original poster wrote, while encouraging other users to “be creative.”

OpenAI has denied that any of the Swift images were created using DALL-E, while Microsoft has continued to claim that it’s investigating whether any of its AI tools were used.

Cristina López G., a senior analyst at Graphika, noted that Swift is not the only celebrity targeted in the 4chan thread.

“While viral pornographic pictures of Taylor Swift have brought mainstream attention to the issue of AI-generated non-consensual intimate images, she is far from the only victim,” López G. said. “In the 4chan community where these images originated, she isn’t even the most frequently targeted public figure. This shows that anyone can be targeted in this way, from global celebrities to school children.”

Originally, 404 Media reported that the harmful Swift images appeared to originate from 4chan and Telegram channels before spreading on X (formerly Twitter) and other social media. Attempting to stop the spread, X took the drastic step of blocking all searches for “Taylor Swift” for two days.

But López G. said that Graphika’s findings suggest that platforms will continue to risk being inundated with offensive content so long as 4chan users keep challenging each other to subvert image generator filters. Rather than expecting platforms to chase down the harmful content, López G. recommended that AI companies get ahead of the problem by taking responsibility for outputs and monitoring the evolving tactics of toxic online communities that report precisely how they’re getting around safeguards.

“These images originated from a community of people motivated by the ‘challenge’ of circumventing the safeguards of generative AI products, and new restrictions are seen as just another obstacle to ‘defeat,’” López G. said. “It’s important to understand the gamified nature of this malicious activity in order to prevent further abuse at the source.”

Experts told The Times that 4chan users were likely motivated to participate in these challenges for bragging rights and to “feel connected to a wider community.”


EU right to repair: Sellers will be liable for a year after products are fixed

Right to repair —

Rules also ban “contractual, hardware or software related barriers to repair.”


Europe’s right-to-repair rules will force vendors to stand by their products an extra 12 months after a repair is made, according to the terms of a new political agreement.

Consumers will have a choice between repair and replacement of defective products during a liability period that sellers will be required to offer. The liability period is slated to be a minimum of two years before any extensions.

“If the consumer chooses the repair of the good, the seller’s liability period will be extended by 12 months from the moment when the product is brought into conformity. This period may be further prolonged by member states if they so wish,” a European Council announcement on Friday said.

The 12-month extension is part of a provisional deal between the European Parliament and Council on how to implement the European Commission’s right-to-repair directive, which the Commission proposed in March 2023. The Parliament and Council still need to formally adopt the agreement, which would then come into force 20 days after it is published in the Official Journal of the European Union.

“Once adopted, the new rules will introduce a new ‘right to repair’ for consumers, both within and beyond the legal guarantee, which will make it easier and more cost-effective for them to repair products instead of simply replacing them with new ones,” the European Commission said on Friday.

Rules prohibit “barriers to repair”

The rules require spare parts to be available at reasonable prices, and product makers will be prohibited from using “contractual, hardware or software related barriers to repair, such as impeding the use of second-hand, compatible and 3D-printed spare parts by independent repairers,” the Commission said.

The newly agreed-upon text “requires manufacturers to make the necessary repairs within a reasonable time and, unless the service is provided for free, for a reasonable price too, so that consumers are encouraged to opt for repair,” the European Council said.

Consumers will have guaranteed repair options both before and after the minimum liability period expires, the Commission said:

When a defect appears within the legal guarantee, consumers will now benefit from a prolonged legal guarantee of one year if they choose to have their products repaired.

When the legal guarantee has expired, the consumers will be able to request an easier and cheaper repair of defects in those products that must be technically repairable (such as tablets, smartphones but also washing machines, dishwashers, etc.). Manufacturers will be required to publish information about their repair services, including indicative prices of the most common repairs.
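To make the timeline concrete, here is a minimal sketch in Python of how those two periods interact, using hypothetical purchase and repair dates and one plausible reading of the rules described above (a two-year minimum guarantee, with liability running at least 12 months from the moment a repaired product is brought back into conformity):

from datetime import date

def add_months(d: date, months: int) -> date:
    # Naive month arithmetic; adequate for the mid-month dates used here.
    years, months = divmod(d.month - 1 + months, 12)
    return d.replace(year=d.year + years, month=months + 1)

purchase = date(2025, 1, 15)               # hypothetical purchase date
guarantee_end = add_months(purchase, 24)   # two-year minimum legal guarantee
repair_done = date(2026, 6, 1)             # product brought back into conformity
liability_end = max(guarantee_end, add_months(repair_done, 12))
print(guarantee_end, liability_end)        # 2027-01-15 2027-06-01

In this example, choosing repair pushes the seller’s liability from January 2027 out to June 2027; member states may extend these periods further.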

The overarching goal as stated by the Commission is to overcome “obstacles that discourage consumers to repair due to inconvenience, lack of transparency or difficult access to repair services.” To make finding repair services easier for users, the Council said it plans a European-wide online platform “to facilitate the matchmaking between consumers and repairers.”


Facebook rules allowing fake Biden “pedophile” video deemed “incoherent”

Not to be misled —

Meta may revise AI policies that experts say overlook “more misleading” content.


A fake video manipulated to falsely depict President Joe Biden inappropriately touching his granddaughter has revealed flaws in Facebook’s “deepfake” policies, Meta’s Oversight Board concluded Monday.

Last year when the Biden video went viral, Facebook repeatedly ruled that it did not violate policies on hate speech, manipulated media, or bullying and harassment. Since the Biden video is not AI-generated content and does not manipulate the president’s speech—making him appear to say things he’s never said—the video was deemed OK to remain on the platform. Meta also noted that the video was “unlikely to mislead” the “average viewer.”

“The video does not depict President Biden saying something he did not say, and the video is not the product of artificial intelligence or machine learning in a way that merges, combines, replaces, or superimposes content onto the video (the video was merely edited to remove certain portions),” Meta’s blog said.

The Oversight Board—an independent panel of experts—reviewed the case and ultimately upheld Meta’s decision despite being “skeptical” that current policies work to reduce harms.

“The board sees little sense in the choice to limit the Manipulated Media policy to cover only people saying things they did not say, while excluding content showing people doing things they did not do,” the board said, noting that Meta claimed this distinction was made because “videos involving speech were considered the most misleading and easiest to reliably detect.”

The board called upon Meta to revise its “incoherent” policies that it said appear to be more concerned with regulating how content is created, rather than with preventing harms. For example, the Biden video’s caption described the president as a “sick pedophile” and called out anyone who would vote for him as “mentally unwell,” which could affect “electoral processes” that Meta could choose to protect, the board suggested.

“Meta should reconsider this policy quickly, given the number of elections in 2024,” the Oversight Board said.

One problem, the Oversight Board suggested, is that in its rush to combat AI technologies that make generating deepfakes a fast, cheap, and easy business, Meta policies currently overlook less technical ways of manipulating content.

Instead of using AI, the Biden video relied on basic video-editing technology to edit out the president placing an “I Voted” sticker on his adult granddaughter’s chest. The crude edit looped a 7-second clip altered to make the president appear to be, as Meta described in its blog, “inappropriately touching a young woman’s chest and kissing her on the cheek.”

Meta making this distinction is confusing, the board said, partly because videos altered using non-AI technologies are not considered less misleading or less prevalent on Facebook.

The board recommended that Meta update policies to cover not just AI-generated videos, but other forms of manipulated media, including all forms of manipulated video and audio. Audio fakes currently not covered in the policy, the board warned, offer fewer cues to alert listeners to the inauthenticity of recordings and may even be considered “more misleading than video content.”

Notably, earlier this year, a fake Biden robocall attempted to mislead Democratic voters in New Hampshire by encouraging them not to vote. The Federal Communications Commission promptly responded by declaring AI-generated robocalls illegal, but the Federal Election Commission was not able to act as swiftly to regulate AI-generated misleading campaign ads easily spread on social media, AP reported. In a statement, Oversight Board Co-Chair Michael McConnell said that manipulated audio is “one of the most potent forms of electoral disinformation.”

To better combat known harms, the board suggested that Meta revise its Manipulated Media policy to “clearly specify the harms it is seeking to prevent.”

Rather than pushing Meta to remove more content, however, the board urged Meta to use “less restrictive” methods of coping with fake content, such as relying on fact-checkers applying labels noting that content is “significantly altered.” In public comments, some Facebook users agreed that labels would be most effective. Others urged Meta to “start cracking down” and remove all fake videos, with one suggesting that removing the Biden video should have been a “deeply easy call.” Another commenter suggested that the Biden video should be considered acceptable speech, as harmless as a funny meme.

While the board wants Meta to also expand its policies to cover all forms of manipulated audio and video, it cautioned that including manipulated photos in the policy could “significantly expand” the policy’s scope and make it harder to enforce.

“If Meta sought to label videos, audio, and photographs but only captured a small portion, this could create a false impression that non-labeled content is inherently trustworthy,” the board warned.

Meta should therefore stop short of adding manipulated images to the policy, the board said. Instead, Meta should conduct research into the effects of manipulated photos and then consider updates when the company is prepared to enforce a ban on manipulated photos at scale, the board recommended. In the meantime, Meta should move quickly to update policies ahead of a busy election year where experts and politicians globally are bracing for waves of misinformation online.

“The volume of misleading content is rising, and the quality of tools to create it is rapidly increasing,” McConnell said. “Platforms must keep pace with these changes, especially in light of global elections during which certain actors seek to mislead the public.”

Meta’s spokesperson told Ars that Meta is “reviewing the Oversight Board’s guidance and will respond publicly to their recommendations within 60 days.”


Republicans in Congress try to kill FCC’s broadband discrimination rules

US Rep. Andrew Clyde (R-Ga.) speaks to the press on June 13, 2023, in Washington, DC. (Getty Images | Michael McCoy)

More than 65 Republican lawmakers this week introduced legislation to nullify rules that prohibit discrimination in access to broadband services.

The Federal Communications Commission approved the rules in November despite opposition from broadband providers. The FCC’s two Republicans dissented in the 3-2 vote. While the FCC was required by Congress to issue anti-discrimination rules, Republicans argue that the agency’s Democratic majority wrote rules that are too broad.

On Tuesday this week, US House Republicans submitted a resolution of disapproval that would use Congressional Review Act authority to kill the anti-discrimination rules. “Under the guise of ‘equity,’ the Biden administration is attempting to radically expand the federal government’s control of all Internet services and infrastructure,” lead sponsor Rep. Andrew Clyde (R-Ga.) said.

Clyde alleged that the “FCC’s so-called ‘digital discrimination’ rule hands bureaucrats unmitigated regulatory authority that will undoubtedly impede innovation, burden consumers, and generate censorship concerns,” and that it is an “unconstitutional power grab.”

Bill co-sponsor Rep. Buddy Carter (R-Ga.) complained about what he called “the FCC’s totalitarian overreach,” which he said “goes against the very core of free market capitalism.”

Clyde and Carter said their resolution is supported by telecom industry trade groups USTelecom and CTIA, and various conservative advocacy groups. The lawmakers’ press releases included a quote from Americans for Tax Reform President Grover Norquist, who said the resolution is “an opportunity to reverse the FCC’s takeover of the Internet.”

Lawsuits more likely to block rules

In 2017, Republicans used the same Congressional Review Act authority to block broadband-privacy rules. But this time, they essentially have no chance of success.

While Republicans currently have a majority in the House, they’d be unlikely to get the new resolution approved in both chambers because of the Senate’s Democratic majority. Congressional Review Act resolutions of disapproval can also be vetoed by the president.

A more likely path to getting the rules blocked is through the courts. The US Chamber of Commerce sued the FCC this week in an attempt to block the rules, arguing that the FCC exceeded its legal authority.

The lawsuit was filed in the US Court of Appeals for the 5th Circuit, which is generally considered to be one of the most conservative US appeals courts. The Chamber has argued that the FCC rules “micromanag[e] broadband providers through price controls, terms of service requirements, and counterproductive labor provisions.”

ISPs are suing the FCC, too. The Texas Cable Association joined the Chamber of Commerce lawsuit. Separate lawsuits were filed in the 8th and 11th Circuit appeals courts by the Minnesota Telecom Alliance and Florida Internet & Television Association.


Cops arrest 17-year-old suspected of hundreds of swattings nationwide

Coordinated effort —

Police traced swatting calls to teen’s home IP addresses.

Booking photo of Alan Filion, charged with multiple felonies connected to a “swatting” incident at the Masjid Al Hayy Mosque in Sanford, Florida.

Police suspect that a 17-year-old from California, Alan Filion, may be responsible for “hundreds of swatting incidents and bomb threats” targeting the Pentagon, schools, mosques, FBI offices, and military bases nationwide, CNN reported.

Swatting occurs when fraudulent calls to police trigger emergency response teams to react forcefully to non-existent threats.

Recently extradited to Florida, Filion was charged with multiple felonies after the Seminole County Sheriff’s Office (SCSO) traced a call where Filion allegedly claimed to be a mass shooter entering the Masjid Al Hayy Mosque in Sanford, Florida. The caller played “audio of gunfire in the background,” SCSO said, while referencing Satanism and claiming he had a handgun and explosive devices.

Approximately 30 officers responded to the call in May 2023, then determined it was a swatting incident after finding no shooter and confirming that mosque staff was safe. In a statement, SCSO Sheriff Dennis Lemma said that “swatting is a perilous and senseless crime, which puts innocent lives in dangerous situations and drains valuable resources” by prompting a “substantial law enforcement response.”

Seminole County authorities coordinated with the FBI and Department of Justice to track the alleged “serial swatter” down, ultimately arresting Filion on January 18. According to SCSO, police were able to track down Filion after he allegedly “created several accounts on websites offering swatting services” that were linked to various IP addresses connected to his home address. The FBI then served a search warrant on the residence and found “incriminating evidence.”

Filion has been charged as an adult for a variety of offenses, including making a false report while facilitating or furthering an act of terrorism. He is currently being detained in Florida, CNN reported.

Earlier this year, Sen. Rick Scott (R-Fla.) introduced legislation to “crack down” on swattings after he became a target at his home in December. If passed, the Preserving Safe Communities by Ending Swatting Act would impose strict penalties, including a maximum sentence of 20 years in prison for any swatting that leads to serious injuries. If death results, bad actors risk a life sentence. That bill is currently under review by the House Judiciary Committee.

“We must send a message to the cowards behind these calls—this isn’t a joke, it’s a crime,” Scott said.

Last year, Sen. Chuck Schumer (D-NY) warned that an “unprecedented wave” of swatting attacks in just two weeks had targeted 11 states, including more than 200 schools across New York. In response, Schumer called for over $10 million in FBI funding to “specifically tackle the growing problem of swatting.”

Schumer said it was imperative that the FBI begin tracking the incidents more closely, not just to protect victims from potentially deadly swattings, but also to curb costs to law enforcement and prevent unnecessary delays of emergency services tied up by hoax threats.

As a result of Schumer’s push, the FBI announced it would finally begin tracking swatting incidents nationwide. Hundreds of law enforcement agencies and police departments now rely on an FBI database to share information on swatting incidents.

Coordination appears to be key to solving these cases. Lemma noted that SCSO has an “unwavering dedication” to holding swatters accountable, “regardless of where they are located.” His office confirmed that investigators suspect that Filion may have also been behind “other swatting incidents” across the US. SCSO said that it will continue coordinating with local authorities investigating those incidents.

“Make no mistake, we will continue to work tirelessly in collaboration with our policing partners and the judiciary to apprehend swatting perpetrators,” Lemma said. “Gratitude is extended to all agencies involved at the local, state, and federal levels, and this particular investigation and case stands as a stern warning: swatting will face zero tolerance, and measures are in place to identify and prosecute those responsible for such crimes.”


FCC to declare AI-generated voices in robocalls illegal under existing law

AI and robocalls —

Robocalls with AI voices to be regulated under Telephone Consumer Protection Act.


The Federal Communications Commission plans to vote on making the use of AI-generated voices in robocalls illegal. The FCC said that AI-generated voices in robocalls have “escalated during the last few years” and have “the potential to confuse consumers with misinformation by imitating the voices of celebrities, political candidates, and close family members.”

FCC Chairwoman Jessica Rosenworcel’s proposed Declaratory Ruling would rule that “calls made with AI-generated voices are ‘artificial’ voices under the Telephone Consumer Protection Act (TCPA), which would make voice cloning technology used in common robocalls scams targeting consumers illegal,” the commission announced yesterday. Commissioners reportedly will vote on the proposal in the coming weeks.

A recent anti-voting robocall used an artificially generated version of President Joe Biden’s voice. The calls told Democrats not to vote in the New Hampshire Presidential Primary election.

An analysis by the company Pindrop concluded that the artificial Biden voice was created using a text-to-speech engine offered by ElevenLabs. That conclusion was apparently confirmed by ElevenLabs, which reportedly suspended the account of the user who created the deepfake.

FCC ruling could help states crack down

The TCPA, a 1991 US law, bans the use of artificial or prerecorded voices in most non-emergency calls “without the prior express consent of the called party.” The FCC is responsible for writing rules to implement the law, and violations are punishable with fines.

As the FCC noted yesterday, the TCPA “restricts the making of telemarketing calls and the use of automatic telephone dialing systems and artificial or prerecorded voice messages.” Telemarketers are required “to obtain prior express written consent from consumers before robocalling them. If successfully enacted, this Declaratory Ruling would ensure AI-generated voice calls are also held to those same standards.”

The FCC has been thinking about revising its rules to account for artificial intelligence for at least a few months. In November 2023, it launched an inquiry into AI’s impact on robocalls and robotexts.

Rosenworcel said her proposed ruling will “recognize this emerging technology as illegal under existing law, giving our partners at State Attorneys General offices across the country new tools they can use to crack down on these scams and protect consumers.

“AI-generated voice cloning and images are already sowing confusion by tricking consumers into thinking scams and frauds are legitimate,” Rosenworcel said. “No matter what celebrity or politician you favor, or what your relationship is with your kin when they call for help, it is possible we could all be a target of these faked calls.”


Elon Musk proposes Tesla move to Texas after Delaware judge voids $56 billion pay

Don’t mess with Tesla —

Musk is sick of Delaware judges, says shareholders will vote on move to Texas.

Tesla CEO Elon Musk speaks at Tesla’s “Cyber Rodeo” on April 7, 2022, in Austin, Texas. (Getty Images | AFP/Suzanne Cordeiro)

Tesla CEO Elon Musk has had enough of Delaware after a state court ruling voided his $55.8 billion pay package. Musk said last night that Tesla will hold a shareholder vote on transferring the electric carmaker’s state of incorporation to Texas.

Musk had posted a poll on X (formerly Twitter) asking whether Tesla should “change its state of incorporation to Texas, home of its physical headquarters.” After over 87 percent of people voted yes, Musk wrote, “The public vote is unequivocally in favor of Texas! Tesla will move immediately to hold a shareholder vote to transfer state of incorporation to Texas.”

Tesla was incorporated in 2003 before Musk joined the company. Its founders chose Delaware, a common destination because of the state’s low corporate taxes and business-friendly legal framework. The Delaware government says that over 68 percent of Fortune 500 companies are registered in the state, and 79 percent of US-based initial public offerings in 2022 were registered in Delaware.

One reason for choosing Delaware is the state’s Court of Chancery, where cases are decided not by juries but by judges who specialize in corporate law. On Tuesday, Court of Chancery Judge Kathaleen McCormick ruled that Musk’s $55.8 billion pay package was unfair to shareholders and must be rescinded.

McCormick’s ruling in favor of the plaintiff in a shareholder lawsuit said that most of Tesla’s board members “were beholden to Musk or had compromising conflicts.” McCormick also concluded that the Tesla board gave shareholders inaccurate and misleading information in order to secure approval of Musk’s “unfathomable” pay plan.

Musk a fan of Texas and Nevada

Musk yesterday shared a post claiming that McCormick’s ruling “is another clear example of the Biden administration and its allies weaponizing the American legal system against their political opponents.”

McCormick previously oversaw the Twitter lawsuit that forced Musk to complete a $44 billion purchase despite his attempt to break a merger agreement. After Musk became Twitter’s owner, he merged the company into X Corp., which is registered in Nevada.

“Never incorporate your company in the state of Delaware,” Musk wrote in a post after the Delaware court ruling. “I recommend incorporating in Nevada or Texas if you prefer shareholders to decide matters,” he also wrote.

Last year, Texas enacted a law to create business courts that will hear corporate cases. The courts are slated to begin operating on September 1, 2024. Musk is clearly hoping the new Texas courts will be more deferential to Tesla on executive pay if the company is sued again after his next pay plan is agreed on.

Tesla shareholders who will be asked to vote on a corporate move to Texas “need to take a hard look at how transitioning out of Delaware might impact their rights and the company’s governance,” Reuters quoted business adviser Keith Donovan as saying.

Reuters quoted AJ Bell investment analyst Dan Coatsworth as saying that “Elon Musk’s plan to change Tesla’s state of incorporation from Delaware to Texas is typical behavior for the entrepreneur who always looks for an alternative if he can’t get what he wants.”


Cops bogged down by flood of fake AI child sex images, report says

“Particularly heinous” —

Investigations tied to harmful AI sex images will grow “exponentially,” experts say.


Law enforcement is continuing to warn that a “flood” of AI-generated fake child sex images is making it harder to investigate real crimes against abused children, The New York Times reported.

Last year, after researchers uncovered thousands of realistic but fake AI child sex images online, attorneys general across the US quickly called on Congress to set up a committee to squash the problem. But so far, Congress has moved slowly, while only a few states have specifically banned AI-generated non-consensual intimate imagery. Meanwhile, law enforcement continues to struggle to figure out how to confront bad actors found to be creating and sharing images that, for now, largely exist in a legal gray zone.

“Creating sexually explicit images of children through the use of artificial intelligence is a particularly heinous form of online exploitation,” Steve Grocki, the chief of the Justice Department’s child exploitation and obscenity section, told The Times. Experts told The Washington Post in 2023 that risks of realistic but fake images spreading included normalizing child sexual exploitation, luring more children into harm’s way, and making it harder for law enforcement to find actual children being harmed.

In one example, the FBI announced earlier this year that an American Airlines flight attendant, Estes Carter Thompson III, was arrested “for allegedly surreptitiously recording or attempting to record a minor female passenger using a lavatory aboard an aircraft.” A search of Thompson’s iCloud revealed “four additional instances” where Thompson allegedly recorded other minors in the lavatory, as well as “over 50 images of a 9-year-old unaccompanied minor” sleeping in her seat. While police attempted to identify these victims, the FBI further alleged that “hundreds of images of AI-generated child pornography” were found on Thompson’s phone.

The troubling case seems to illustrate how AI-generated child sex images can be linked to real criminal activity while also showing how police investigations could be bogged down by attempts to distinguish photos of real victims from AI images that could depict real or fake children.

Robin Richards, the commander of the Los Angeles Police Department’s Internet Crimes Against Children task force, confirmed to the NYT that due to AI, “investigations are way more challenging.”

And because image generators and AI models that can be trained on photos of children are widely available, “using AI to alter photos” of children online “is becoming more common,” Michael Bourke—a former chief psychologist for the US Marshals Service who spent decades supporting investigations into sex offenses involving children—told the NYT. Richards said that cops don’t know what to do when they find these AI-generated materials.

Currently, there aren’t many cases involving AI-generated child sex abuse materials (CSAM), The NYT reported, but experts expect that number will “grow exponentially,” raising “novel and complex questions of whether existing federal and state laws are adequate to prosecute these crimes.”

Platforms struggle to monitor harmful AI images

At a Senate Judiciary Committee hearing today grilling Big Tech CEOs over child sexual exploitation (CSE) on their platforms, Linda Yaccarino—CEO of X (formerly Twitter)—warned in her opening statement that artificial intelligence is also making it harder for platforms to monitor CSE. Yaccarino suggested that industry collaboration is imperative to get ahead of the growing problem, as is providing more resources to law enforcement.

However, US law enforcement officials have indicated that platforms are also making it harder to police CSAM and CSE online. Platforms relying on AI to detect CSAM are generating “unviable reports” gumming up investigations managed by already underfunded law enforcement teams, The Guardian reported. And the NYT reported that other investigations are being thwarted by adding end-to-end encryption options to messaging services, which “drastically limit the number of crimes the authorities are able to track.”

The NYT report noted that in 2002, the Supreme Court struck down a 1996 law that banned “virtual” or “computer-generated child pornography.” South Carolina’s attorney general, Alan Wilson, has said that AI technology available today may test that ruling, especially if minors continue to be harmed by fake AI child sex images spreading online. In the meantime, federal laws such as obscenity statutes may be used to prosecute cases, the NYT reported.

Congress has recently re-introduced some legislation to directly address AI-generated non-consensual intimate images after a wide range of images depicting fake AI porn of pop star Taylor Swift went viral this month. That includes the Disrupt Explicit Forged Images and Non-Consensual Edits Act, which creates a federal civil remedy for any victims of any age who are identifiable in AI images depicting them as nude or engaged in sexually explicit conduct or sexual scenarios.

There’s also the “Preventing Deepfakes of Intimate Images Act,” which seeks to “prohibit the non-consensual disclosure of digitally altered intimate images.” That was re-introduced this year after teen boys generated AI fake nude images of female classmates and spread them around a New Jersey high school last fall. Francesca Mani, one of the teen victims in New Jersey, was there to help announce the proposed law, which includes penalties of up to two years imprisonment for sharing harmful images.

“What happened to me and my classmates was not cool, and there’s no way I’m just going to shrug and let it slide,” Mani said. “I’m here, standing up and shouting for change, fighting for laws, so no one else has to feel as lost and powerless as I did on October 20th.”


Comcast reluctantly agrees to stop its misleading “10G Network” claims

10G or not 10G —

Comcast said it will drop “Xfinity 10G Network” brand name after losing appeal.

A Comcast router/modem gateway. (Comcast)

Comcast has reluctantly agreed to discontinue its “Xfinity 10G Network” brand name after losing an appeal of a ruling that found the marketing term was misleading. It will keep using the term 10G in other ways, however.

Verizon and T-Mobile both challenged Comcast’s advertising of 10G, a term used by cable companies since it was unveiled in January 2019 by industry lobby group NCTA-The Internet & Television Association. We wrote in 2019 that the cable industry’s 10G marketing was likely to confuse consumers and seemed to be a way of countering 5G hype generated by wireless companies.

10G doesn’t refer to the 10th generation of a technology. It is a reference to potential 10Gbps broadband connections, which would be much faster than the actual speeds on standard cable networks today.

The challenges lodged against Comcast marketing were filed with the advertising industry’s self-regulatory system run by BBB National Programs. BBB’s National Advertising Division (NAD) ruled against Comcast in October 2023, but Comcast appealed to the National Advertising Review Board (NARB).

The NARB announced its ruling today, agreeing with the NAD that “Comcast should discontinue use of the term 10G, both when used in the name of the service itself (‘Xfinity 10G Network’) as well as when used to describe the Xfinity network. The use of 10G in a manner that is not false or misleading and is consistent with the panel decision is not precluded by the panel recommendations.”

“Comcast will discontinue brand name”

Comcast agreed to make the change in an advertiser’s statement that it provided to the NARB. “Although Comcast strongly disagrees with NARB’s analysis and approach, Comcast will discontinue use of the brand name ‘Xfinity 10G Network’ and will not use the term ’10G’ in a manner that misleadingly describes the Xfinity network itself,” Comcast said.

Comcast said it disagrees with “the recommendation to discontinue the brand name” because the company “makes available 10Gbps of Internet speed to 98 percent of its subscribers upon request.” But those 10Gbps speeds aren’t available in Comcast’s typical service plans and require a fiber-to-the-home connection instead of a standard cable installation.

The Comcast “Gigabit Pro” fiber connection that provides 10Gbps speeds costs $299.95 a month plus a $19.95 modem lease fee. It also requires a $500 installation charge and a $500 activation charge.

Comcast said it may still use 10G in ways that are less likely to confuse consumers. “Consistent with the panel’s recommendation… Comcast reserves the right to use the term ’10G’ or ‘Xfinity 10G’ in a manner that does not misleadingly describe the Xfinity network itself,” the company said.

When contacted by Ars, a Comcast spokesperson said, “We disagree with the decision but are pleased that we have confirmed our continued use of 10G in advertising.”

Comcast claims “not supported”

The NARB said the “recent availability of 10G speeds through [the Gigabit Pro] service tier does not support the superior speed claim (or a 10Gbps claim) for the Xfinity network as a whole.” As the NARB noted, there is an “absence” of data showing how many Comcast customers actually use that service.

The NARB also said that 10G is misleading because of the implied comparison to 5G wireless networks. “The NARB panel concluded that 10G expressly communicates at a minimum that users of the Xfinity network will experience significantly faster speeds than are available on 5G networks,” the announcement of the ruling said. “This express claim is not supported because the record does not contain any data comparing speeds experienced by Xfinity network users with speeds experienced by subscribers to 5G networks.”

As the NAD has previously stated, 10G is more of an “aspirational” term than a description of what’s offered over today’s cable networks. Over the past five years, the NCTA has been using the term 10G to describe just about any improvement to cable networks, regardless of the actual speeds.

The NCTA coincidentally issued a press release yesterday hailing the fifth anniversary of its first 10G announcement. “Five years on, the future is even closer… Here in 2024, the promise of 10G is becoming more and more of a reality,” the NCTA said.

The announcement listed some examples of multi-gigabit (but not 10-gigabit) cable speeds, some of which were only achieved in lab testing or demos. NCTA claimed that “10G can change lives” and that the “10G platform will facilitate the next great technological advancements in the coming decades, ensuring fast, reliable, and safe networks continue to power the American economy.”

For all of you cable broadband users, just remember to ignore “10G” in cable-company marketing and check the actual speeds you’re paying for.
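For a rough check, a short script can time a large download and report the throughput you actually get. This is a minimal sketch, and the test-file URL below is a placeholder you would replace with one from your ISP or a trusted mirror:

import time
import urllib.request

TEST_URL = "https://example.com/100MB.bin"  # hypothetical test file

def measure_mbps(url: str, chunk_size: int = 1 << 16) -> float:
    # Download the file in chunks and convert bytes/second to megabits/second.
    start = time.monotonic()
    received = 0
    with urllib.request.urlopen(url) as resp:
        while chunk := resp.read(chunk_size):
            received += len(chunk)
    elapsed = time.monotonic() - start
    return received * 8 / elapsed / 1e6

print(f"Measured: {measure_mbps(TEST_URL):.1f} Mbps")

A single run reflects the test server as much as your line, so repeat it at different times of day before drawing conclusions.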


Lawsuit: Citibank refused to reimburse scam victims who lost “life savings”

Online banking fraud —

Citibank’s poor security helped scammers steal millions, NY AG’s lawsuit says.

The Citibank logo on a bank in New York City in January 2024.

Citibank has illegally refused to reimburse scam victims who lost money due partly to Citibank’s poor online security practices, New York Attorney General Letitia James alleged in a lawsuit filed today in US District Court for the Southern District of New York.

“The lawsuit alleges that Citi does not implement strong online protections to stop unauthorized account takeovers, misleads account holders about their rights after their accounts are hacked and funds are stolen, and illegally denies reimbursement to victims of fraud,” James’ office said in a press release.

The AG’s office alleged that Citi customers “have lost their life savings, their children’s college funds, or even money needed to support their day-to-day lives as a result of Citi’s illegal and deceptive acts and practices.”

“Defendant Citi has not deployed sufficiently robust data security measures to protect consumer financial accounts, respond appropriately to red flags, or limit theft by scam,” the lawsuit said. “Instead, Citi has overpromised and underdelivered on security, reacted ineffectively to fraud alerts, misled consumers, and summarily denied their claims. Citi’s illegal and deceptive practices have cost New Yorkers millions.”

Citi approved large wire transfers

Describing the case of a New York woman who lost $35,000 to a scammer in July 2022, the AG’s press release stated:

She was reviewing her online account and found a message that her account had been suspended and was instructed to call a phone number. She called the number provided and a scammer told her that he would send her Citi codes to verify recent suspicious activity. The scammer then transferred all of the money in the customer’s three savings accounts into her checking account, changed her online passwords, and attempted a $35,000 wire transfer.

Citi attempted to verify the wire transfer by calling the customer, but she was working and did not see the call at the time. Less than an hour later, the scammer attempted another $35,000 wire transfer, which Citi approved without ever having made direct contact with the customer. She lost nearly everything she had saved, and Citi refused to reimburse her.
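The failure mode in that account is approving a large wire after an unanswered verification call. As an illustration only, and not Citi’s actual system, a minimal sketch of the kind of out-of-band gate the complaint describes as missing might look like this, with the threshold and fields as hypothetical values:

from dataclasses import dataclass

REVIEW_THRESHOLD = 10_000  # dollars; hypothetical policy value

@dataclass
class WireRequest:
    amount: float
    customer_confirmed: bool        # True only after the customer answers and verifies
    recent_credential_change: bool  # e.g., a password reset shortly before the request

def decide(wire: WireRequest) -> str:
    # Small, unremarkable wires pass; anything large or following a credential
    # change stays held until direct contact with the customer succeeds.
    if wire.amount < REVIEW_THRESHOLD and not wire.recent_credential_change:
        return "approve"
    return "approve" if wire.customer_confirmed else "hold"

print(decide(WireRequest(35_000, customer_confirmed=False,
                         recent_credential_change=True)))  # -> hold

Under a rule like this, the second $35,000 transfer would have stayed on hold until the bank actually reached the customer.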

In an October 2021 incident, a customer clicked a link in a scammer’s message “but did not provide additional information” and then “called her local branch to report the suspicious activity but was told not to worry about it,” the AG’s office said.

“Three days later, the customer discovered that a scammer changed her banking password, enrolled in online wire transfers, transferred $70,000 from her savings to her checking account, and then electronically executed a $40,000 wire transfer, none of which was consistent with her past account activity,” the AG’s office said. “For weeks, the customer continued to contact the bank and submit affidavits, but in the end, she was told that her claim for fraud was denied.”

Citi: No refunds when people “follow criminals’ instructions”

Citi defended its security and refund practices in a statement provided to Ars.

“Citi closely follows all laws and regulations related to wire transfers and works extremely hard to prevent threats from affecting our clients and to assist them in recovering losses when possible. Banks are not required to make clients whole when those clients follow criminals’ instructions and banks can see no indication the clients are being deceived,” the company said.

Citi acknowledged that there has been an “industry-wide surge in wire fraud during the last several years,” and said it has “taken proactive steps to safeguard our clients’ accounts with leading security protocols, intuitive fraud prevention tools, clear insights about the latest scams, and driving client awareness and education. Our actions have reduced client wire fraud losses significantly, and we remain committed to investing in fraud prevention measures to help our clients secure their accounts against emerging threats.”

James’ lawsuit argues that Citibank must provide reimbursement under the Electronic Fund Transfer Act (EFTA), a US law passed in 1978. “As with credit cards, so long as consumers promptly alert banks to unauthorized activity, the EFTA limits losses and requires reimbursement of stolen funds. These consumer protections cannot be waived or modified by contract… Under the EFTA, Citi’s electronic debits of consumers’ accounts are unauthorized and Citi must reimburse all debited amounts,” the lawsuit said.

The lawsuit seeks a permanent injunction against Citibank, an accounting of customer losses over the last six years, payment of restitution and damages to harmed consumers, and civil penalties.


SIM-swapping ring stole $400M in crypto from a US company, officials allege

Undetected for years —

Scheme allegedly targeted Apple, AT&T, Verizon, and T-Mobile stores in 13 states.


The US may have uncovered the nation’s largest “SIM swap” scheme yet, charging a Chicago man and co-conspirators with allegedly stealing $400 million in cryptocurrency by targeting over 50 victims in more than a dozen states, including one company.

A recent indictment alleged that Robert Powell—using online monikers “R,” “R$,” and “ElSwapo1″—was the “head of a SIM swapping group” called the “Powell SIM Swapping Crew.” He allegedly conspired with Indiana man Carter Rohn (aka “Carti” and “Punslayer”) and Colorado woman Emily Hernandez (allegedly aka “Em”) to gain access to victims’ devices and “carry out fraudulent SIM swap attacks” between March 2021 and April 2023.

SIM-swap attacks occur when someone fraudulently induces a wireless carrier to “reassign a cell phone number from the legitimate subscriber or user’s SIM card to a SIM card controlled by a criminal actor,” the indictment said. Once the swap occurs, the bad actor can defeat multi-factor authentication protections and access online accounts to steal data or money.
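This is why SMS-delivered codes are a weak second factor: they follow the phone number, which a swap redirects, while a device-bound authenticator code does not. As a minimal sketch, here is the standard TOTP computation (RFC 6238) that such authenticator apps perform locally, using a hypothetical demo secret; note that nothing in it depends on a phone number or carrier:

import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    # Derive a one-time code from a shared secret and the current time window.
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval        # current 30-second window
    msg = struct.pack(">Q", counter)              # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                    # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # hypothetical demo secret

Because the secret lives on the enrolled device rather than with the carrier, a fraudulent SIM swap alone does not expose these codes.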

Powell’s accused crew allegedly used identification card printers to forge documents, then posed as victims visiting Apple, AT&T, Verizon, and T-Mobile retail stores in Minnesota, Illinois, Indiana, Utah, Nebraska, Colorado, Florida, Maryland, Massachusetts, Texas, New Mexico, Tennessee, Virginia, and the District of Columbia.

According to the indictment, many of the alleged victims did not suffer financial losses, but those who did were hit hard. The hardest hit appears to be an employee of a company whose AT&T device was allegedly commandeered at a Texas retail store, resulting in over $400 million allegedly being transferred from the employee’s company to co-conspirators’ financial accounts. Other individual victims allegedly lost cryptocurrency valued between $15,000 and more than $1 million.

Co-conspirators are accused of masking stolen funds, sometimes by allegedly hiding transfers in unhosted or self-hosted virtual currency wallets. If convicted, all stolen funds must be forfeited, the indictment said.

Powell has been charged with conspiracy to commit wire fraud and conspiracy to commit aggravated identity theft and access device fraud, Special Agent Brent Bledsoe said in the indictment. This Friday, Powell faces a detention hearing, where he has been ordered by the US Marshals Service to appear in person.

Powell’s attorney, Gal Pissetzky, told Ars that Powell has no comment on the indictment at this time.

SIM swaps escalating in US?

When Powell’s alleged scheme began in 2021, the FBI issued a warning, noting that criminals were increasingly using SIM-swap attacks, fueling total losses that year of $68 million.

Since then, US law enforcement has made several arrests, but none of the uncovered schemes comes close to the alleged losses from the thefts Powell’s crew is accused of.

In 2022, a Florida man, Nicholas Truglia, was sentenced to 18 months for stealing more than $20 million from a single victim. On top of forfeiting the stolen funds, Truglia was also ordered to forfeit more than $900,000 as a criminal penalty. According to security blogger Brian Krebs, Truglia was connected to a group that allegedly stole $100 million using SIM-swap attacks.

Last year, there were a few notable arrests. In October, a hacker, Jordan Dave Persad, was sentenced to 30 months in prison for stealing nearly $1 million from “dozens of victims.” And in December, four Florida men received sentences of between eight and 27 months for stealing more than $509,475 in SIM-swap attacks.

Ars could not find any FBI warnings since 2021 raising awareness that losses from SIM-swap attacks may be further increasing to amounts as eye-popping as the alleged losses in Powell’s case.

A DOJ official was unable to confirm if this is the biggest SIM-swapping scheme alleged in the US, directing Ars to another office. Ars will update this report with any new information the DOJ provides.

US officials seem aware that some bad actors attempting SIM-swap attacks appear to be getting bolder. Earlier this year, the Securities and Exchange Commission was targeted in an attack that commandeered the agency’s account on X, formerly known as Twitter. That attack led to a misleading X post falsely announcing the approval of bitcoin exchange-traded funds, causing a brief spike in bitcoin’s price.

To protect consumers from SIM-swap attacks, the Federal Communications Commission announced new rules last year to “require wireless providers to adopt secure methods of authenticating a customer before redirecting a customer’s phone number to a new device or provider. The new rules require wireless providers to immediately notify customers whenever a SIM change or port-out request is made on customers’ accounts and take additional steps to protect customers from SIM swap and port-out fraud.” But an Ars review found these new rules may be too vague to be effective.

In 2021, when European authorities busted a SIM-swapping ring allegedly targeting high-profile individuals worldwide, Europol advised consumers to avoid becoming targets. Tips included using multifactor authentication, resisting associating sensitive accounts with mobile phone numbers, keeping devices updated, avoiding replying to suspicious emails or callers requesting sensitive information, and limiting personal data shared online. Consumers can also request the highest security settings possible from mobile carriers and are encouraged to always use stronger, longer security PINs or passwords to protect devices.
