Bluesky finally gets rid of invite codes, lets everyone join

After more than a year as an exclusive invite-only social media platform, Bluesky is now open to the public, so anyone can join without needing a once-coveted invite code.

In a blog post, Bluesky said that requiring invite codes helped it “manage growth” while it built features that let users control what content they see on the social platform.

When Bluesky debuted, many viewed it as a potential Twitter killer, but limited access may have sapped that momentum. As of January 2024, Bluesky has more than 3 million users. That’s significantly fewer than X (formerly Twitter), which by some estimates currently boasts more than 400 million global users.

But Bluesky CEO Jay Graber wrote in a blog post last April that the app needed time because its goal was to piece together a new kind of social network built on its own decentralized protocol, AT Protocol. This technology allows users to freely port their social media accounts—including followers—to different social platforms, rather than being locked into walled-off experiences on a platform owned by “a single company” like Meta’s Threads.

Perhaps most critically, the team wanted time to build out content moderation features before opening Bluesky to the masses to “prioritize user safety from the start.”

Bluesky plans to take a threefold approach to content moderation. The first layer is automated filtering that removes illegal, harmful content like child sexual abuse materials. Beyond that, Bluesky will soon give users extra layers of protection, including community labels and options to enable admins running servers to filter content manually.

Labeling services will be rolled out “in the coming weeks,” the blog said. These labels will make it possible for individuals or organizations to run their own moderation services, such as a trusted fact-checking organization. Users who trust these sources can subscribe to labeling services that filter out or appropriately label different types of content, like “spam” or “NSFW.”

“The human-generated label sets can be thought of as something similar to shared mute/block lists,” Bluesky explained last year.
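That “shared mute/block list” framing can be made concrete with a small sketch. Everything below is illustrative: the `Post` shape, the label names, and the filtering policy are invented for this example and are not the actual AT Protocol label schemas.

```python
# Hypothetical sketch of client-side filtering using labels from subscribed
# third-party labeling services. Shapes and label names are illustrative only.

from dataclasses import dataclass

@dataclass
class Post:
    uri: str
    text: str

# Labels emitted by labeling services the user subscribes to, keyed by post URI.
LABELS = {
    "at://alice/post/1": {"spam"},
    "at://bob/post/2": {"nsfw"},
}

def filter_feed(posts, subscribed_labels, hide=("spam",), warn=("nsfw",)):
    """Drop posts carrying a hidden label; flag posts carrying a warning label."""
    result = []
    for post in posts:
        labels = LABELS.get(post.uri, set()) & subscribed_labels
        if labels & set(hide):
            continue  # suppressed entirely, like an entry on a shared block list
        warnings = labels & set(warn)
        result.append((post, warnings))  # shown, possibly behind a content warning
    return result

feed = [Post("at://alice/post/1", "buy now!"),
        Post("at://bob/post/2", "beach pics"),
        Post("at://carol/post/3", "hello world")]

visible = filter_feed(feed, subscribed_labels={"spam", "nsfw"})
# The "spam" post is dropped, the "nsfw" post is kept but flagged,
# and the unlabeled post passes through untouched.
```

The key design point is that moderation happens at subscription time on the client, not centrally: unsubscribing from a labeler simply removes its labels from the intersection.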

Bluesky is currently recruiting partners for its labeling services; the company did not immediately respond to Ars’ request for comment on any initial partnerships already formed.

It appears that Bluesky is hoping to bring in new users while introducing some of its flashiest features. Within the next month, Bluesky will also “be rolling out an experimental early version of ‘federation,’ or the feature that makes the network so open and customizable,” the blog said. The sales pitch is simple:

On Bluesky, you’ll have the freedom to choose (and the right to leave) instead of being held to the whims of private companies or black box algorithms. And wherever you go, your friends and relationships can go with you.

Developers interested in experimenting with the earliest version of AT Protocol can start testing out self-hosting servers now.

In addition to allowing users to customize content moderation, Bluesky also provides ways to customize feeds. By default, new users see only posts from accounts they follow, but they can also set up filters to discover content they enjoy without relying on a company’s algorithm to learn what interests them.

Bluesky users who sat on invite codes over the past year have joked about their uselessness now, with some designating themselves as legacy users. Seeming to reference Twitter’s once-coveted blue checks, one Bluesky user responding to a post from Graber joked, “When does everyone from the invite-only days get their Bluesky Elder profile badge?”

“Don’t let them drop us!” Landline users protest AT&T copper retirement plan

AT&T’s application to end its landline phone obligations in California is drawing protest from residents as state officials consider whether to let AT&T off the hook.

AT&T filed an application to end its Carrier of Last Resort (COLR) obligation in March 2023. The first of several public hearings on the application is being held today by the California Public Utilities Commission (CPUC), which is considering AT&T’s request. An evidentiary hearing has been scheduled for April, and a proposed decision is expected in September.

AT&T has said it won’t cut off phone service immediately, but ending the COLR obligation would make it easier for AT&T to drop its phone lines later on. AT&T’s application said it would provide basic phone service in all areas for at least six months and indefinitely in areas without any alternative voice service.

“If approved by the CPUC, over 580,000 affected AT&T customers would be left with fewer options in terms of choice, quality, and affordability,” warns the Rural County Representatives of California. “Alternative services, such as VoIP and wireless, have no obligation to serve a customer or to provide equivalent services to AT&T landline customers, including no obligation to provide reliable access to 911 or Lifeline program discounts.”

“Please don’t let them drop us!”

Recent comments from residents stressed the importance of landlines for emergency services. Residents also described problems with wireless service that could serve as the only replacement for copper networks in areas that AT&T hasn’t deemed profitable enough for fiber lines.

“We live in the country with no cell service so the landline we have is the only way we can get help in an emergency,” a resident of Moss Landing wrote today. “There are only 5 homes on our part of the line. I don’t see any other company volunteering to pick up our service after we have heard AT&T tell us so many times we would be the very last to get things fixed due to the little amount of homes. Please don’t let them drop us!”

The docket has received over 2,100 comments in the past three weeks and about 2,300 overall that are overwhelmingly opposed to AT&T’s plan. There are another 600 comments on a separate docket for a related AT&T application.

Even some residents who have access to cable companies, which generally offer VoIP service, aren’t ready to give up their old copper landlines.

“Internet over cable has gotten more reliable, but not so reliable that I’m willing to stake my lifeline telecommunication service on it,” a resident of Hayward wrote yesterday. “In fact, I keep DSL service on my POTS [Plain Old Telephone Service] line as a backup to our cable Internet service… Emergency 911 service over cell phones still doesn’t work. The last time I tried to report a grass fire adjacent to a Cal State University, the dispatcher didn’t know what city I was calling from.”

Carrier of last resort must provide service to anyone

AT&T recently filed an objection to how opponents are describing its phone service plans. An AT&T filing on January 16 disputed claims that low-income households could see their bills double and that “AT&T has stated that it intends to shut down its telephone network.”

“AT&T California will continue to offer basic telephone service in all of its service area unless and until it separately obtains all necessary permission to stop, so no customer will lose service if the Commission approves AT&T California’s application,” AT&T said.

According to AT&T’s application, the company must complete the Section 214 discontinuance process run by the Federal Communications Commission before it can fully discontinue service in any given area.

CPUC says in a summary of the situation that “AT&T is the designated COLR in many parts of the state and is the largest COLR in California.” This means “the company must provide traditional landline telephone service to any potential customer in that service territory. AT&T is proposing to withdraw as the COLR in your area without a new carrier being designated as a COLR.”

“If AT&T’s proposal were accepted as set forth in its application, then no COLR would be required to provide basic service in your area,” the state agency said. “This does not necessarily mean that no carriers would, in fact, provide service in your area—only that they would not be required to do so. Other outcomes are possible, such as another carrier besides AT&T volunteering to become the COLR in your area, or the CPUC denying AT&T’s proposal.”

Data broker allegedly selling de-anonymized info to face FTC lawsuit after all

The Federal Trade Commission has succeeded in keeping alive its first federal court case against a geolocation data broker that’s allegedly unfairly selling large quantities of data in violation of the FTC Act.

On Saturday, US District Judge Lynn Winmill denied Kochava’s motion to dismiss an amended FTC complaint, which he said plausibly argued that “Kochava’s data sales invade consumers’ privacy and expose them to risks of secondary harms by third parties.”

Winmill’s ruling reversed a dismissal of the FTC’s initial complaint, which the court previously said failed to adequately allege that Kochava’s data sales cause or are likely to cause a “substantial” injury to consumers.

The FTC has accused Kochava of selling “a substantial amount of data obtained from millions of mobile devices across the world”—allegedly combining precise geolocation data with a “staggering amount of sensitive and identifying information” without users’ knowledge or informed consent. This data, the FTC alleged, “is not anonymized and is linked or easily linkable to individual consumers” without mining “other sources of data.”

Kochava’s data sales allegedly allow its customers—whom the FTC noted often pay tens of thousands of dollars monthly—to target specific individuals by combining Kochava data sets. Using just Kochava data, marketers can create “highly granular” portraits of ad targets such as “a woman who visits a particular building, the woman’s name, email address, and home address, and whether the woman is African-American, a parent (and if so, how many children), or has an app identifying symptoms of cancer on her phone.” Just one of Kochava’s databases “contains ‘comprehensive profiles of individual consumers,’ with up to ‘300 data points’ for ‘over 300 million unique individuals,'” the FTC reported.
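The mechanics of that linkage are worth spelling out: a mobile advertising ID looks anonymous in isolation, but joining two data sets on that shared key yields a named person at a specific place. The sketch below uses entirely made-up data and field names; it mirrors the kind of join the FTC describes, not Kochava’s actual schema.

```python
# Illustrative (synthetic) example of why "anonymous" ad IDs become
# identifying once data sets are joined on a shared key.

# Data set A: precise geolocation pings keyed by a mobile advertising ID.
geo_pings = [
    {"ad_id": "MAID-123", "lat": 33.0, "lon": -97.1, "place": "health clinic"},
]

# Data set B: the same ad ID tied to identity attributes.
identity = {
    "MAID-123": {"name": "J. Doe", "email": "jdoe@example.com",
                 "home_address": "12 Elm St"},
}

def link(pings, identities):
    """Join location records to identity records on the shared ad ID."""
    return [{**ping, **identities[ping["ad_id"]]}
            for ping in pings if ping["ad_id"] in identities]

profile = link(geo_pings, identity)
# The joined record names a specific person at a sensitive location --
# no "mining other sources of data" required once both sets are in hand.
```

This is why the complaint stresses that the data is “linked or easily linkable”: the de-anonymization step is a one-line join, not an analytical feat.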

This harms consumers, the FTC alleged, in “two distinct ways”—by invading their privacy and by causing “an increased risk of suffering secondary harms, such as stigma, discrimination, physical violence, and emotional distress.”

In its amended complaint, the FTC overcame deficiencies in its initial complaint by citing specific examples of consumers already known to have been harmed by brokers sharing sensitive data without their consent. That included a Catholic priest who resigned after he was outed by a group using precise mobile geolocation data to track his personal use of Grindr and his movements to “LGBTQ+-associated locations.” The FTC also pointed to invasive practices by journalists using precise mobile geolocation data to identify and track military and law enforcement officers over time, as well as data brokers tracking “abortion-minded women” who visited reproductive health clinics to target them with ads about abortion and alternatives to abortion.

“Kochava’s practices intrude into the most private areas of consumers’ lives and cause or are likely to cause substantial injury to consumers,” the FTC’s amended complaint said.

The FTC is seeking a permanent injunction to stop Kochava from allegedly selling sensitive data without user consent.

Kochava dismisses the examples of consumer harm in the FTC’s amended complaint as “anecdotes” disconnected from its own activities. The data broker was seemingly so confident that Winmill would agree to dismiss the FTC’s amended complaint that the company sought sanctions against the FTC for what it construed as a “baseless” filing. According to Kochava, many of the FTC’s allegations were “knowingly false.”

Ultimately, the court found no evidence that the FTC’s complaints were baseless. Instead of dismissing the case and ordering the FTC to pay sanctions, Winmill wrote in his order that Kochava’s motion to dismiss “misses the point” of the FTC’s filing, which was to allege that Kochava’s data sales are “likely” to cause alleged harms. Because the FTC had “significantly” expanded factual allegations, the agency “easily” satisfied the plausibility standard to allege substantial harms were likely, Winmill said.

Kochava CEO and founder Charles Manning said in a statement provided to Ars that Kochava “expected” Winmill’s ruling and is “confident” that Kochava “will prevail on the merits.”

“This case is really about the FTC attempting to make an end-run around Congress to create data privacy law,” Manning said. “The FTC’s salacious hypotheticals in its amended complaint are mere scare tactics. Kochava has always operated consistently and proactively in compliance with all rules and laws, including those specific to privacy.”

In a press release announcing the FTC lawsuit in 2022, the director of the FTC’s Bureau of Consumer Protection, Samuel Levine, said that the FTC was determined to halt Kochava’s allegedly harmful data sales.

“Where consumers seek out health care, receive counseling, or celebrate their faith is private information that shouldn’t be sold to the highest bidder,” Levine said. “The FTC is taking Kochava to court to protect people’s privacy and halt the sale of their sensitive geolocation information.”

4chan daily challenge sparked deluge of explicit AI Taylor Swift images

4chan users who have made a game out of exploiting popular AI image generators appear to be at least partly responsible for the flood of fake images sexualizing Taylor Swift that went viral last month.

Graphika researchers—who study how communities are manipulated online—traced the fake Swift images to a 4chan message board that’s “increasingly” dedicated to posting “offensive” AI-generated content, The New York Times reported. Fans of the message board take part in daily challenges, Graphika reported, sharing tips to bypass AI image generator filters and showing no signs of stopping their game any time soon.

“Some 4chan users expressed a stated goal of trying to defeat mainstream AI image generators’ safeguards rather than creating realistic sexual content with alternative open-source image generators,” Graphika reported. “They also shared multiple behavioral techniques to create image prompts, attempt to avoid bans, and successfully create sexually explicit celebrity images.”

Ars reviewed a thread flagged by Graphika where users were specifically challenged to use Microsoft tools like Bing Image Creator and Microsoft Designer, as well as OpenAI’s DALL-E.

“Good luck,” the original poster wrote, while encouraging other users to “be creative.”

OpenAI has denied that any of the Swift images were created using DALL-E, while Microsoft has continued to claim that it’s investigating whether any of its AI tools were used.

Cristina López G., a senior analyst at Graphika, noted that Swift is not the only celebrity targeted in the 4chan thread.

“While viral pornographic pictures of Taylor Swift have brought mainstream attention to the issue of AI-generated non-consensual intimate images, she is far from the only victim,” López G. said. “In the 4chan community where these images originated, she isn’t even the most frequently targeted public figure. This shows that anyone can be targeted in this way, from global celebrities to school children.”

Originally, 404 Media reported that the harmful Swift images appeared to originate from 4chan and Telegram channels before spreading on X (formerly Twitter) and other social media. Attempting to stop the spread, X took the drastic step of blocking all searches for “Taylor Swift” for two days.

But López G. said that Graphika’s findings suggest platforms will continue to risk being inundated with offensive content so long as 4chan users keep challenging each other to subvert image generator filters. Rather than expecting platforms to chase down the harmful content, López G. recommended that AI companies get ahead of the problem and take responsibility for their outputs by tracking the evolving tactics of toxic online communities, which report precisely how they’re getting around safeguards.

“These images originated from a community of people motivated by the ‘challenge’ of circumventing the safeguards of generative AI products, and new restrictions are seen as just another obstacle to ‘defeat,’” López G. said. “It’s important to understand the gamified nature of this malicious activity in order to prevent further abuse at the source.”

Experts told The Times that 4chan users were likely motivated to participate in these challenges for bragging rights and to “feel connected to a wider community.”

EU right to repair: Sellers will be liable for a year after products are fixed

Right to repair —

Rules also ban “contractual, hardware or software related barriers to repair.”

Europe’s right-to-repair rules will force vendors to stand by their products an extra 12 months after a repair is made, according to the terms of a new political agreement.

Consumers will have a choice between repair and replacement of defective products during a liability period that sellers will be required to offer. The liability period is slated to be a minimum of two years before any extensions.

“If the consumer chooses the repair of the good, the seller’s liability period will be extended by 12 months from the moment when the product is brought into conformity. This period may be further prolonged by member states if they so wish,” a European Council announcement on Friday said.
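Under that reading, the timeline works out as in the rough sketch below. The dates are invented for illustration, and the interpretation (12 months counted from the date the product is brought back into conformity, on top of the two-year minimum guarantee) is an assumption based on the Council’s wording, not legal advice.

```python
# Worked example of the agreed liability timeline. Dates are invented;
# the 12-months-from-conformity reading is an assumption from the article.

from datetime import date

def add_months(d, months):
    """Shift a date forward by whole calendar months (day-of-month preserved)."""
    years, month_index = divmod(d.month - 1 + months, 12)
    return d.replace(year=d.year + years, month=month_index + 1)

purchase = date(2025, 1, 1)
baseline_end = add_months(purchase, 24)   # minimum two-year legal guarantee

repaired = date(2026, 6, 1)               # defect repaired within the guarantee
extended_end = add_months(repaired, 12)   # +12 months from conformity

liability_end = max(baseline_end, extended_end)
# The repair pushes the seller's liability from the start of 2027 to mid-2027.
```

Member states may prolong the extension further, so the 12 months here is a floor, not a ceiling.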

The 12-month extension is part of a provisional deal between the European Parliament and Council on how to implement the European Commission’s right-to-repair directive that was passed in March 2023. The Parliament and Council still need to formally adopt the agreement, which would then come into force 20 days after it is published in the Official Journal of the European Union.

“Once adopted, the new rules will introduce a new ‘right to repair’ for consumers, both within and beyond the legal guarantee, which will make it easier and more cost-effective for them to repair products instead of simply replacing them with new ones,” the European Commission said on Friday.

Rules prohibit “barriers to repair”

The rules require spare parts to be available at reasonable prices, and product makers will be prohibited from using “contractual, hardware or software related barriers to repair, such as impeding the use of second-hand, compatible and 3D-printed spare parts by independent repairers,” the Commission said.

The newly agreed-upon text “requires manufacturers to make the necessary repairs within a reasonable time and, unless the service is provided for free, for a reasonable price too, so that consumers are encouraged to opt for repair,” the European Council said.

There will be required options for consumers to get repairs both before and after the minimum liability period expires, the Commission said:

When a defect appears within the legal guarantee, consumers will now benefit from a prolonged legal guarantee of one year if they choose to have their products repaired.

When the legal guarantee has expired, the consumers will be able to request an easier and cheaper repair of defects in those products that must be technically repairable (such as tablets, smartphones but also washing machines, dishwashers, etc.). Manufacturers will be required to publish information about their repair services, including indicative prices of the most common repairs.

The overarching goal as stated by the Commission is to overcome “obstacles that discourage consumers to repair due to inconvenience, lack of transparency or difficult access to repair services.” To make finding repair services easier for users, the Council said it plans a European-wide online platform “to facilitate the matchmaking between consumers and repairers.”

Facebook rules allowing fake Biden “pedophile” video deemed “incoherent”

Not to be misled —

Meta may revise AI policies that experts say overlook “more misleading” content.

A fake video manipulated to falsely depict President Joe Biden inappropriately touching his granddaughter has revealed flaws in Facebook’s “deepfake” policies, Meta’s Oversight Board concluded Monday.

Last year when the Biden video went viral, Facebook repeatedly ruled that it did not violate policies on hate speech, manipulated media, or bullying and harassment. Since the Biden video is not AI-generated content and does not manipulate the president’s speech—making him appear to say things he’s never said—the video was deemed OK to remain on the platform. Meta also noted that the video was “unlikely to mislead” the “average viewer.”

“The video does not depict President Biden saying something he did not say, and the video is not the product of artificial intelligence or machine learning in a way that merges, combines, replaces, or superimposes content onto the video (the video was merely edited to remove certain portions),” Meta’s blog said.

The Oversight Board—an independent panel of experts—reviewed the case and ultimately upheld Meta’s decision despite being “skeptical” that current policies work to reduce harms.

“The board sees little sense in the choice to limit the Manipulated Media policy to cover only people saying things they did not say, while excluding content showing people doing things they did not do,” the board said, noting that Meta claimed this distinction was made because “videos involving speech were considered the most misleading and easiest to reliably detect.”

The board called upon Meta to revise its “incoherent” policies that it said appear to be more concerned with regulating how content is created, rather than with preventing harms. For example, the Biden video’s caption described the president as a “sick pedophile” and called out anyone who would vote for him as “mentally unwell,” which could affect “electoral processes” that Meta could choose to protect, the board suggested.

“Meta should reconsider this policy quickly, given the number of elections in 2024,” the Oversight Board said.

One problem, the Oversight Board suggested, is that in its rush to combat AI technologies that make generating deepfakes a fast, cheap, and easy business, Meta policies currently overlook less technical ways of manipulating content.

Instead of using AI, the Biden video relied on basic video-editing technology to edit out the president placing an “I Voted” sticker on his adult granddaughter’s chest. The crude edit looped a 7-second clip altered to make the president appear to be, as Meta described in its blog, “inappropriately touching a young woman’s chest and kissing her on the cheek.”

Meta making this distinction is confusing, the board said, partly because videos altered using non-AI technologies are not considered less misleading or less prevalent on Facebook.

The board recommended that Meta update policies to cover not just AI-generated videos, but other forms of manipulated media, including all forms of manipulated video and audio. Audio fakes currently not covered in the policy, the board warned, offer fewer cues to alert listeners to the inauthenticity of recordings and may even be considered “more misleading than video content.”

Notably, earlier this year, a fake Biden robocall attempted to mislead Democratic voters in New Hampshire by encouraging them not to vote. The Federal Communications Commission promptly responded by declaring AI-generated robocalls illegal, but the Federal Election Commission was not able to act as swiftly to regulate AI-generated misleading campaign ads easily spread on social media, AP reported. In a statement, Oversight Board Co-Chair Michael McConnell said that manipulated audio is “one of the most potent forms of electoral disinformation.”

To better combat known harms, the board suggested that Meta revise its Manipulated Media policy to “clearly specify the harms it is seeking to prevent.”

Rather than pushing Meta to remove more content, however, the board urged Meta to use “less restrictive” methods of coping with fake content, such as relying on fact-checkers applying labels noting that content is “significantly altered.” In public comments, some Facebook users agreed that labels would be most effective. Others urged Meta to “start cracking down” and remove all fake videos, with one suggesting that removing the Biden video should have been a “deeply easy call.” Another commenter suggested that the Biden video should be considered acceptable speech, as harmless as a funny meme.

While the board wants Meta to also expand its policies to cover all forms of manipulated audio and video, it cautioned that including manipulated photos in the policy could “significantly expand” the policy’s scope and make it harder to enforce.

“If Meta sought to label videos, audio, and photographs but only captured a small portion, this could create a false impression that non-labeled content is inherently trustworthy,” the board warned.

Meta should therefore stop short of adding manipulated images to the policy, the board said. Instead, Meta should conduct research into the effects of manipulated photos and then consider updates when the company is prepared to enforce a ban on manipulated photos at scale, the board recommended. In the meantime, Meta should move quickly to update policies ahead of a busy election year where experts and politicians globally are bracing for waves of misinformation online.

“The volume of misleading content is rising, and the quality of tools to create it is rapidly increasing,” McConnell said. “Platforms must keep pace with these changes, especially in light of global elections during which certain actors seek to mislead the public.”

Meta’s spokesperson told Ars that Meta is “reviewing the Oversight Board’s guidance and will respond publicly to their recommendations within 60 days.”

Republicans in Congress try to kill FCC’s broadband discrimination rules

US Rep. Andrew Clyde (R-Ga.) speaks to the press on June 13, 2023, in Washington, DC. (Getty Images | Michael McCoy)

More than 65 Republican lawmakers this week introduced legislation to nullify rules that prohibit discrimination in access to broadband services.

The Federal Communications Commission approved the rules in November despite opposition from broadband providers. The FCC’s two Republicans dissented in the 3-2 vote. While the FCC was required by Congress to issue anti-discrimination rules, Republicans argue that the agency’s Democratic majority wrote rules that are too broad.

On Tuesday this week, US House Republicans submitted a resolution of disapproval that would use Congressional Review Act authority to kill the anti-discrimination rules. “Under the guise of ‘equity,’ the Biden administration is attempting to radically expand the federal government’s control of all Internet services and infrastructure,” lead sponsor Rep. Andrew Clyde (R-Ga.) said.

Clyde alleged that the “FCC’s so-called ‘digital discrimination’ rule hands bureaucrats unmitigated regulatory authority that will undoubtedly impede innovation, burden consumers, and generate censorship concerns,” and that it is an “unconstitutional power grab.”

Bill co-sponsor Rep. Buddy Carter (R-Ga.) complained about what he called “the FCC’s totalitarian overreach,” which he said “goes against the very core of free market capitalism.”

Clyde and Carter said their resolution is supported by telecom industry trade groups USTelecom and CTIA, and various conservative advocacy groups. The lawmakers’ press releases included a quote from Americans for Tax Reform President Grover Norquist, who said the resolution is “an opportunity to reverse the FCC’s takeover of the Internet.”

Lawsuits more likely to block rules

In 2017, Republicans used the same Congressional Review Act authority to block broadband-privacy rules. But this time, they essentially have no chance of success.

While Republicans currently have a majority in the House, they’d be unlikely to get the new resolution approved in both chambers because of the Senate’s Democratic majority. Congressional Review Act resolutions of disapproval can also be vetoed by the president.

A more likely path to getting the rules blocked is through the courts. The US Chamber of Commerce sued the FCC this week in an attempt to block the rules, arguing that the FCC exceeded its legal authority.

The lawsuit was filed in the US Court of Appeals for the 5th Circuit, which is generally considered to be one of the most conservative US appeals courts. The Chamber has argued that the FCC rules “micromanag[e] broadband providers through price controls, terms of service requirements, and counterproductive labor provisions.”

ISPs are suing the FCC, too. The Texas Cable Association joined the Chamber of Commerce lawsuit. Separate lawsuits were filed in the 8th and 11th Circuit appeals courts by the Minnesota Telecom Alliance and Florida Internet & Television Association.

Cops arrest 17-year-old suspected of hundreds of swattings nationwide

Coordinated effort —

Police traced swatting calls to teen’s home IP addresses.

Booking photo of Alan Filion, charged with multiple felonies connected to a “swatting” incident at the Masjid Al Hayy Mosque in Sanford, Florida.

Police suspect that a 17-year-old from California, Alan Filion, may be responsible for “hundreds of swatting incidents and bomb threats” targeting the Pentagon, schools, mosques, FBI offices, and military bases nationwide, CNN reported.

Swatting occurs when fraudulent calls to police trigger emergency response teams to react forcefully to non-existent threats.

Recently extradited to Florida, Filion was charged with multiple felonies after the Seminole County Sheriff’s Office (SCSO) traced a call where Filion allegedly claimed to be a mass shooter entering the Masjid Al Hayy Mosque in Sanford, Florida. The caller played “audio of gunfire in the background,” SCSO said, while referencing Satanism and claiming he had a handgun and explosive devices.

Approximately 30 officers responded to the call in May 2023, then determined it was a swatting incident after finding no shooter and confirming that mosque staff was safe. In a statement, SCSO Sheriff Dennis Lemma said that “swatting is a perilous and senseless crime, which puts innocent lives in dangerous situations and drains valuable resources” by prompting a “substantial law enforcement response.”

Seminole County authorities coordinated with the FBI and Department of Justice to track the alleged “serial swatter” down, ultimately arresting Filion on January 18. According to SCSO, police were able to track down Filion after he allegedly “created several accounts on websites offering swatting services” that were linked to various IP addresses connected to his home address. The FBI then served a search warrant on the residence and found “incriminating evidence.”

Filion has been charged as an adult for a variety of offenses, including making a false report while facilitating or furthering an act of terrorism. He is currently being detained in Florida, CNN reported.

Earlier this year, Sen. Rick Scott (R-Fla.) introduced legislation to “crack down” on swatting after he became a target at his home in December. If passed, the Preserving Safe Communities by Ending Swatting Act would impose strict penalties, including a maximum sentence of 20 years in prison for any swatting that leads to serious injuries. If death results, bad actors risk a life sentence. That bill is currently under review by the House Judiciary Committee.

“We must send a message to the cowards behind these calls—this isn’t a joke, it’s a crime,” Scott said.

Last year, Sen. Chuck Schumer (D-NY) warned that an “unprecedented wave” of swatting attacks in just two weeks had targeted 11 states, including more than 200 schools across New York. In response, Schumer called for over $10 million in FBI funding to “specifically tackle the growing problem of swatting.”

Schumer said it was imperative that the FBI begin tracking the incidents more closely, not just to protect victims from potentially deadly swattings, but also to curb costs to law enforcement and prevent unnecessary delays of emergency services tied up by hoax threats.

As a result of Schumer’s push, the FBI announced it would finally begin tracking swatting incidents nationwide. Hundreds of law enforcement agencies and police departments now rely on an FBI database to share information on swatting incidents.

Coordination appears to be key to solving these cases. Lemma noted that SCSO has an “unwavering dedication” to holding swatters accountable, “regardless of where they are located.” His office confirmed that investigators suspect that Filion may have also been behind “other swatting incidents” across the US. SCSO said that it will continue coordinating with local authorities investigating those incidents.

“Make no mistake, we will continue to work tirelessly in collaboration with our policing partners and the judiciary to apprehend swatting perpetrators,” Lemma said. “Gratitude is extended to all agencies involved at the local, state, and federal levels, and this particular investigation and case stands as a stern warning: swatting will face zero tolerance, and measures are in place to identify and prosecute those responsible for such crimes.”



FCC to declare AI-generated voices in robocalls illegal under existing law

AI and robocalls —

Robocalls with AI voices to be regulated under Telephone Consumer Protection Act.

Illustration of a robot wearing a headset for talking on the phone.

Getty Images | Thamrongpat Theerathammakorn

The Federal Communications Commission plans to vote on making the use of AI-generated voices in robocalls illegal. The FCC said that AI-generated voices in robocalls have “escalated during the last few years” and have “the potential to confuse consumers with misinformation by imitating the voices of celebrities, political candidates, and close family members.”

FCC Chairwoman Jessica Rosenworcel’s proposed Declaratory Ruling would rule that “calls made with AI-generated voices are ‘artificial’ voices under the Telephone Consumer Protection Act (TCPA), which would make voice cloning technology used in common robocalls scams targeting consumers illegal,” the commission announced yesterday. Commissioners reportedly will vote on the proposal in the coming weeks.

A recent anti-voting robocall used an artificially generated version of President Joe Biden’s voice. The calls told Democrats not to vote in the New Hampshire Presidential Primary election.

An analysis by the company Pindrop concluded that the artificial Biden voice was created using a text-to-speech engine offered by ElevenLabs. That conclusion was apparently confirmed by ElevenLabs, which reportedly suspended the account of the user who created the deepfake.

FCC ruling could help states crack down

The TCPA, a 1991 US law, bans the use of artificial or prerecorded voices in most non-emergency calls “without the prior express consent of the called party.” The FCC is responsible for writing rules to implement the law, and violations are punishable with fines.

As the FCC noted yesterday, the TCPA “restricts the making of telemarketing calls and the use of automatic telephone dialing systems and artificial or prerecorded voice messages.” Telemarketers are required “to obtain prior express written consent from consumers before robocalling them. If successfully enacted, this Declaratory Ruling would ensure AI-generated voice calls are also held to those same standards.”

The FCC has been thinking about revising its rules to account for artificial intelligence for at least a few months. In November 2023, it launched an inquiry into AI’s impact on robocalls and robotexts.

Rosenworcel said her proposed ruling will “recognize this emerging technology as illegal under existing law, giving our partners at State Attorneys General offices across the country new tools they can use to crack down on these scams and protect consumers.

“AI-generated voice cloning and images are already sowing confusion by tricking consumers into thinking scams and frauds are legitimate,” Rosenworcel said. “No matter what celebrity or politician you favor, or what your relationship is with your kin when they call for help, it is possible we could all be a target of these faked calls.”



Elon Musk proposes Tesla move to Texas after Delaware judge voids $56 billion pay

Don’t mess with Tesla —

Musk is sick of Delaware judges, says shareholders will vote on move to Texas.

Tesla CEO Elon Musk speaks at Tesla’s “Cyber Rodeo” on April 7, 2022, in Austin, Texas.

Getty Images | AFP/Suzanne Cordeiro

Tesla CEO Elon Musk has had enough of Delaware after a state court ruling voided his $55.8 billion pay package. Musk said last night that Tesla will hold a shareholder vote on transferring the electric carmaker’s state of incorporation to Texas.

Musk had posted a poll on X (formerly Twitter) asking whether Tesla should “change its state of incorporation to Texas, home of its physical headquarters.” After over 87 percent of people voted yes, Musk wrote, “The public vote is unequivocally in favor of Texas! Tesla will move immediately to hold a shareholder vote to transfer state of incorporation to Texas.”

Tesla was incorporated in 2003 before Musk joined the company. Its founders chose Delaware, a common destination because of the state’s low corporate taxes and business-friendly legal framework. The Delaware government says that over 68 percent of Fortune 500 companies are registered in the state, and 79 percent of US-based initial public offerings in 2022 were registered in Delaware.

One reason for choosing Delaware is the state’s Court of Chancery, where cases are decided not by juries but by judges who specialize in corporate law. On Tuesday, Court of Chancery Judge Kathaleen McCormick ruled that Musk’s $55.8 billion pay package was unfair to shareholders and must be rescinded.

McCormick’s ruling in favor of the plaintiff in a shareholder lawsuit said that most of Tesla’s board members “were beholden to Musk or had compromising conflicts.” McCormick also concluded that the Tesla board gave shareholders inaccurate and misleading information in order to secure approval of Musk’s “unfathomable” pay plan.

Musk a fan of Texas and Nevada

Musk yesterday shared a post claiming that McCormick’s ruling “is another clear example of the Biden administration and its allies weaponizing the American legal system against their political opponents.”

McCormick previously oversaw the Twitter lawsuit that forced Musk to complete a $44 billion purchase despite his attempt to break a merger agreement. After Musk became Twitter’s owner, he merged the company into X Corp., which is registered in Nevada.

“Never incorporate your company in the state of Delaware,” Musk wrote in a post after the Delaware court ruling. “I recommend incorporating in Nevada or Texas if you prefer shareholders to decide matters,” he also wrote.

Last year, Texas enacted a law to create business courts that will hear corporate cases. The courts are slated to begin operating on September 1, 2024. Musk is clearly hoping the new Texas courts will be more deferential to Tesla on executive pay if the company is sued again after his next pay plan is agreed on.

Tesla shareholders who will be asked to vote on a corporate move to Texas “need to take a hard look at how transitioning out of Delaware might impact their rights and the company’s governance,” Reuters quoted business adviser Keith Donovan as saying.

Reuters quoted AJ Bell investment analyst Dan Coatsworth as saying that “Elon Musk’s plan to change Tesla’s state of incorporation from Delaware to Texas is typical behavior for the entrepreneur who always looks for an alternative if he can’t get what he wants.”



Cops bogged down by flood of fake AI child sex images, report says

“Particularly heinous” —

Investigations tied to harmful AI sex images will grow “exponentially,” experts say.


Law enforcement is continuing to warn that a “flood” of AI-generated fake child sex images is making it harder to investigate real crimes against abused children, The New York Times reported.

Last year, after researchers uncovered thousands of realistic but fake AI child sex images online, attorneys general across the US quickly called on Congress to set up a committee to squash the problem. But so far, Congress has moved slowly, and only a few states have specifically banned AI-generated non-consensual intimate imagery. Meanwhile, law enforcement continues to struggle to figure out how to confront bad actors found to be creating and sharing images that, for now, largely exist in a legal gray zone.

“Creating sexually explicit images of children through the use of artificial intelligence is a particularly heinous form of online exploitation,” Steve Grocki, the chief of the Justice Department’s child exploitation and obscenity section, told The Times. Experts told The Washington Post in 2023 that risks of realistic but fake images spreading included normalizing child sexual exploitation, luring more children into harm’s way, and making it harder for law enforcement to find actual children being harmed.

In one example, the FBI announced earlier this year that an American Airlines flight attendant, Estes Carter Thompson III, was arrested “for allegedly surreptitiously recording or attempting to record a minor female passenger using a lavatory aboard an aircraft.” A search of Thompson’s iCloud revealed “four additional instances” in which Thompson allegedly recorded other minors in the lavatory, as well as “over 50 images of a 9-year-old unaccompanied minor” sleeping in her seat. While police attempted to identify these victims, the FBI also alleged that “hundreds of images of AI-generated child pornography” were found on Thompson’s phone.

The troubling case seems to illustrate how AI-generated child sex images can be linked to real criminal activity while also showing how police investigations could be bogged down by attempts to distinguish photos of real victims from AI images that could depict real or fake children.

Robin Richards, the commander of the Los Angeles Police Department’s Internet Crimes Against Children task force, confirmed to the NYT that due to AI, “investigations are way more challenging.”

And because image generators and AI models that can be trained on photos of children are widely available, “using AI to alter photos” of children online “is becoming more common,” Michael Bourke—a former chief psychologist for the US Marshals Service who spent decades supporting investigations into sex offenses involving children—told the NYT. Richards said that cops don’t know what to do when they find these AI-generated materials.

Currently, there aren’t many cases involving AI-generated child sex abuse materials (CSAM), The NYT reported, but experts expect that number will “grow exponentially,” raising “novel and complex questions of whether existing federal and state laws are adequate to prosecute these crimes.”

Platforms struggle to monitor harmful AI images

At a Senate Judiciary Committee hearing today grilling Big Tech CEOs over child sexual exploitation (CSE) on their platforms, Linda Yaccarino—CEO of X (formerly Twitter)—warned in her opening statement that artificial intelligence is also making it harder for platforms to monitor CSE. Yaccarino suggested that industry collaboration is imperative to get ahead of the growing problem, as is providing more resources to law enforcement.

However, US law enforcement officials have indicated that platforms are also making it harder to police CSAM and CSE online. Platforms relying on AI to detect CSAM are generating “unviable reports” that gum up investigations managed by already underfunded law enforcement teams, The Guardian reported. And the NYT reported that other investigations are being thwarted by platforms adding end-to-end encryption options to messaging services, which “drastically limit the number of crimes the authorities are able to track.”

The NYT report noted that in 2002, the Supreme Court struck down a law, on the books since 1996, that banned “virtual” or “computer-generated child pornography.” South Carolina’s attorney general, Alan Wilson, has said that AI technology available today may test that ruling, especially if minors continue to be harmed by fake AI child sex images spreading online. In the meantime, federal laws such as obscenity statutes may be used to prosecute cases, the NYT reported.

Congress has recently re-introduced some legislation to directly address AI-generated non-consensual intimate images after a wide range of fake AI porn images depicting pop star Taylor Swift went viral this month. That includes the Disrupt Explicit Forged Images and Non-Consensual Edits Act, which would create a federal civil remedy for victims of any age who are identifiable in AI images depicting them as nude or engaged in sexually explicit conduct or sexual scenarios.

There’s also the “Preventing Deepfakes of Intimate Images Act,” which seeks to “prohibit the non-consensual disclosure of digitally altered intimate images.” That was re-introduced this year after teen boys generated AI fake nude images of female classmates and spread them around a New Jersey high school last fall. Francesca Mani, one of the teen victims in New Jersey, was there to help announce the proposed law, which includes penalties of up to two years imprisonment for sharing harmful images.

“What happened to me and my classmates was not cool, and there’s no way I’m just going to shrug and let it slide,” Mani said. “I’m here, standing up and shouting for change, fighting for laws, so no one else has to feel as lost and powerless as I did on October 20th.”



Comcast reluctantly agrees to stop its misleading “10G Network” claims

10G or not 10G —

Comcast said it will drop “Xfinity 10G Network” brand name after losing appeal.

A Comcast router/modem gateway.

Comcast

Comcast has reluctantly agreed to discontinue its “Xfinity 10G Network” brand name after losing an appeal of a ruling that found the marketing term was misleading. It will keep using the term 10G in other ways, however.

Verizon and T-Mobile both challenged Comcast’s advertising of 10G, a term used by cable companies since it was unveiled in January 2019 by industry lobby group NCTA-The Internet & Television Association. We wrote in 2019 that the cable industry’s 10G marketing was likely to confuse consumers and seemed to be a way of countering 5G hype generated by wireless companies.

10G doesn’t refer to the 10th generation of a technology. It is a reference to potential 10Gbps broadband connections, which would be much faster than the actual speeds on standard cable networks today.

The challenges lodged against Comcast marketing were filed with the advertising industry’s self-regulatory system run by BBB National Programs. BBB’s National Advertising Division (NAD) ruled against Comcast in October 2023, but Comcast appealed to the National Advertising Review Board (NARB).

The NARB announced its ruling today, agreeing with the NAD that “Comcast should discontinue use of the term 10G, both when used in the name of the service itself (‘Xfinity 10G Network’) as well as when used to describe the Xfinity network. The use of 10G in a manner that is not false or misleading and is consistent with the panel decision is not precluded by the panel recommendations.”

“Comcast will discontinue brand name”

Comcast agreed to make the change in an advertiser’s statement that it provided to the NARB. “Although Comcast strongly disagrees with NARB’s analysis and approach, Comcast will discontinue use of the brand name ‘Xfinity 10G Network’ and will not use the term ’10G’ in a manner that misleadingly describes the Xfinity network itself,” Comcast said.

Comcast said it disagrees with “the recommendation to discontinue the brand name” because the company “makes available 10Gbps of Internet speed to 98 percent of its subscribers upon request.” But those 10Gbps speeds aren’t available in Comcast’s typical service plans and require a fiber-to-the-home connection instead of a standard cable installation.

The Comcast “Gigabit Pro” fiber connection that provides 10Gbps speeds costs $299.95 a month plus a $19.95 modem lease fee. It also requires a $500 installation charge and a $500 activation charge.
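As a back-of-the-envelope check (the arithmetic below is ours, not from Comcast or the NARB ruling; it simply totals the fees quoted above), those charges add up to nearly $5,000 for the first year of 10Gbps service:

```python
# First-year cost of Comcast's Gigabit Pro 10Gbps tier,
# using the individual fees quoted in the article.
MONTHLY_SERVICE = 299.95   # monthly service price
MONTHLY_MODEM = 19.95      # monthly modem lease fee
INSTALLATION = 500.00      # one-time installation charge
ACTIVATION = 500.00        # one-time activation charge

first_year = 12 * (MONTHLY_SERVICE + MONTHLY_MODEM) + INSTALLATION + ACTIVATION
print(f"${first_year:,.2f}")  # → $4,838.80
```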

Comcast said it may still use 10G in ways that are less likely to confuse consumers. “Consistent with the panel’s recommendation… Comcast reserves the right to use the term ’10G’ or ‘Xfinity 10G’ in a manner that does not misleadingly describe the Xfinity network itself,” the company said.

When contacted by Ars, a Comcast spokesperson said, “We disagree with the decision but are pleased that we have confirmed our continued use of 10G in advertising.”

Comcast claims “not supported”

The NARB said the “recent availability of 10G speeds through [the Gigabit Pro] service tier does not support the superior speed claim (or a 10Gbps claim) for the Xfinity network as a whole.” As the NARB noted, there is an “absence” of data showing how many Comcast customers actually use that service.

The NARB also said that 10G is misleading because of the implied comparison to 5G wireless networks. “The NARB panel concluded that 10G expressly communicates at a minimum that users of the Xfinity network will experience significantly faster speeds than are available on 5G networks,” the announcement of the ruling said. “This express claim is not supported because the record does not contain any data comparing speeds experienced by Xfinity network users with speeds experienced by subscribers to 5G networks.”

As the NAD has previously stated, 10G is more of an “aspirational” term than something that’s offered over today’s cable networks. Over the past five years, the NCTA has been using 10G to describe just about any improvement to cable networks, regardless of the actual speeds.

The NCTA coincidentally issued a press release yesterday hailing the fifth anniversary of its first 10G announcement. “Five years on, the future is even closer… Here in 2024, the promise of 10G is becoming more and more of a reality,” the NCTA said.

The announcement listed some examples of multi-gigabit (but not 10-gigabit) cable speeds, some of which were only achieved in lab testing or demos. NCTA claimed that “10G can change lives” and that the “10G platform will facilitate the next great technological advancements in the coming decades, ensuring fast, reliable, and safe networks continue to power the American economy.”

For all of you cable broadband users, just remember to ignore “10G” in cable-company marketing and check the actual speeds you’re paying for.
