Boeing to plead guilty to conspiracy to defraud FAA Aircraft Evaluation Group

Boeing guilty plea —

Families say deal with US “fails to hold Boeing accountable” for 346 crash deaths.

An American Airlines Boeing 737 MAX 8 aircraft approaches San Diego International Airport for a landing on June 28, 2024.

Getty Images | Kevin Carter

Boeing has agreed to plead guilty to a criminal charge and pay $243.6 million for violating a 2021 agreement that was spurred by two fatal crashes. The US government notified a judge of Boeing’s plea agreement in a July 7 filing in US District Court for the Northern District of Texas.

“The parties have agreed that Boeing will plead guilty to the most serious readily provable offense,” the Department of Justice said. If accepted by the court, the deal would allow Boeing to avoid a trial.

Families of victims said in a filing yesterday that they will urge the court to reject the deal at a plea hearing. “The families intend to argue that the plea deal with Boeing unfairly makes concessions to Boeing that other criminal defendants would never receive and fails to hold Boeing accountable for the deaths of 346 persons,” their lawyers wrote.

The deal stems from Boeing 737 Max crashes in 2018 and 2019 in Indonesia and Ethiopia. After the crashes, Boeing was charged with conspiracy to defraud the Federal Aviation Administration in connection with the agency’s evaluation of the 737 Max.

Conspiracy to defraud FAA

“Boeing will plead guilty to the offense charged in the pending one-count Criminal Information, conspiracy to defraud the United States, specifically, the lawful function of the Federal Aviation Administration Aircraft Evaluation Group,” the US government filing said.

In January 2021, Boeing signed a deferred prosecution agreement and agreed to pay $2.5 billion in penalties and compensation to airline customers and the victims’ families. In May 2024, the Justice Department said it determined that Boeing violated the deferred prosecution agreement “by failing to design, implement, and enforce a compliance and ethics program to prevent and detect violations of the US fraud laws throughout its operations.”

The DOJ determined that Boeing violated the 2021 agreement several months after the January 2024 incident in which a 737 Max 9 used by Alaska Airlines made an emergency landing because a door plug blew off during a flight. Boeing initially said it believed that it honored the terms of the agreement but ultimately agreed to plead guilty to the charge for conspiracy to defraud the FAA.

“The parties have agreed in principle to the material terms of a plea agreement that would, among other things, hold Boeing accountable for its material misstatements to the Federal Aviation Administration, require Boeing to pay the statutory maximum fine, require Boeing to invest at least $455 million in its compliance and safety programs, impose an independent compliance monitor, and allow the Court to determine the restitution amount for the families in its discretion, consistent with applicable law,” the DOJ court filing said.

Boeing will agree to a fine of $243.6 million, which will be doubled to $487.2 million, and is “the maximum criminal fine for the charged offense,” the DOJ said. But “the new plea agreement will recommend that when imposing the sentence, the Court credit the $243.6 million criminal monetary penalty Boeing previously paid pursuant to the [deferred prosecution agreement], with the net result being that Boeing will have to pay another $243.6 million fine.”

Boeing hasn’t agreed on further restitution to victims’ families, but the court could order an additional payment. Boeing agreed to be subject to an independent compliance monitor for three years.

Victims’ families oppose plea deal

Families of the victims of Lion Air Flight 610 and Ethiopian Airlines Flight 302 crashes “have expressed their intention to oppose this (or any) plea agreement,” the DOJ noted. The government said it “conferred with the families, airline customers, and their representatives” and “formulated the plea offer based in part on the feedback” it received.

The DOJ court filing said Boeing will not receive immunity for any other “conduct that may be the subject of any ongoing or future Government investigation of the Company.”

There could also be prosecutions of individuals at Boeing. “DOJ is resolving only with the company—and providing no immunity to any individual employees, including corporate executives, for any conduct,” the agency said in a statement quoted by CNN.

Under the plea agreement, restitution for families would be determined by the court. “The plea agreement will allow the Court to determine the restitution amount for the families in its discretion, consistent with applicable legal principles… Boeing will retain the right to appeal any restitution order it believes was not legally imposed,” the government said.

Because families intend to oppose the plea agreement, the government said it will “meet and confer with all stakeholders on a briefing schedule.” A lawyer for victims’ families asked “that the Court schedule a plea hearing no sooner than late July to allow adequate time for the families to make travel arrangements to attend in person,” the DOJ said.

“This sweetheart deal fails to recognize that because of Boeing’s conspiracy, 346 people died. Through crafty lawyering between Boeing and DOJ, the deadly consequences of Boeing’s crime are being hidden,” Paul Cassell, a lawyer for the families, said.

Boeing did not comment on the specifics of the plea deal. “We can confirm that we have reached an agreement in principle on terms of a resolution with the Justice Department, subject to the memorialization and approval of specific terms,” the company said in a statement provided to Ars.

Elon Musk denies tweets misled Twitter investors ahead of purchase

Just before the Fourth of July holiday, Elon Musk moved to dismiss a lawsuit alleging that he intentionally misled Twitter investors in 2022. The suit claims that he failed to disclose his growing stake in Twitter while tweeting about potentially starting his own social network in the weeks before announcing his plan to buy the company.

According to a proposed class action filed by an Oklahoma firefighters pension fund on behalf of all allegedly harmed Twitter investors, Musk devised this fraudulent scheme to reduce the Twitter purchase price by $200 million. But in another court filing this week, Musk insisted that “all indications”—including those referenced in the firefighters’ complaint—“point to mistake,” not fraud.

According to Musk, evidence showed that he simply misunderstood the Securities Exchange Act when he delayed filing a Rule 13 disclosure of his nearly 10 percent ownership stake in Twitter in March 2022. Musk argued that he believed he was required to disclose the stake at the end of the year, rather than within 10 days after the month in which he amassed a 5 percent stake. He said that he had previously filed Rule 13 disclosures only as the owner of a company—not as someone suddenly acquiring a 5 percent stake.

Musk claimed that as soon as his understanding of the law was corrected—on April 1, when he’d already missed the deadline by about seven days—he promptly stopped trading and filed the disclosure on the next trading day.

“Such prompt and corrective disclosure—within seven trading days of the purported deadline—is not the stuff of a fraudulent scheme to manipulate the market,” Musk’s court filing said.

As Musk sees it, the firefighters’ suit “makes no sense” because it essentially alleges that Musk always intended to disclose the supposedly fraudulent scheme, which, in the context of his extraordinary wealth, barely saved him any meaningful amount of money when purchasing Twitter.

The idea that Musk “engaged in intentional securities fraud in order to save $200 million is illogical in light of Musk’s eventual $44 billion purchase of Twitter,” Musk’s court filing said. “It defies logic that Musk would commit fraud to save less than 0.5 percent of Twitter’s total purchase price, and 0.1 percent of his net worth, all while knowing that there would be ‘an inevitable day of reckoning’ when he would disclose the truth—which was always his intent.”

The much more likely explanation, Musk argued, is that in acknowledging his tardiness he “was expressly acknowledging a mistake, not publicly conceding a purportedly days-old fraudulent scheme.”

Arguing that all the firefighters showed was “enough to adequately plead a material omission and misstatement”—which he said is not an actionable claim under the Securities Exchange Act—Musk asked for the lawsuit to be dismissed with prejudice. At most, Musk is guilty of neglect, his court filing said, not deception, as he never “had any intention of avoiding reporting requirements.”

The firefighters pension fund has until August 12 to defend its claims and keep the suit alive, Musk’s court filing noted. In their complaint, the firefighters had asked the court to award damages covering losses, plus interest, for all Twitter shareholders determined to be “cheated out of the true value of their securities” by Musk’s alleged scheme.

Ars could not immediately reach lawyers for Musk or the firefighters pension fund for comment.

The “Netflix of anime” piracy site abruptly shuts down, shocking users

Disney+ promotional art for The Fable, an anime series that triggered Animeflix takedown notices.

Disney+

Thousands of anime fans were shocked Thursday when the popular piracy site Animeflix voluntarily shut down without explaining why, TorrentFreak reported.

“It is with a heavy heart that we announce the closure of Animeflix,” the site’s operators told users in a Discord with 35,000 members. “After careful consideration, we have decided to shut down our service effective immediately. We deeply appreciate your support and enthusiasm over the years.”

Prior to its shutdown, Animeflix attracted millions of monthly visits, TorrentFreak reported. It was preferred by some anime fans for its clean interface, with one fan on Reddit describing Animeflix as the “Netflix of anime.”

“Deadass this site was clean,” one Reddit user wrote. “The best I’ve ever seen. Sad to see it go.”

Although Animeflix operators did not connect the dots for users, TorrentFreak suggested that the piracy site chose to shut down after facing “considerable legal pressure in recent months.”

Back in December, an anti-piracy group, Alliance for Creativity and Entertainment (ACE), sought to shut down Animeflix. Then in mid-May, rightsholders—including Netflix, Disney, Universal, Paramount, and Warner Bros.—won an injunction through the High Court of India against several piracy sites, including Animeflix. This briefly caused Animeflix to be unavailable until Animeflix simply switched to another domain and continued serving users, TorrentFreak reported.

Although Animeflix is not telling users why it’s choosing to shut down now, TorrentFreak—which, as its name suggests, focuses much of its coverage on copyright issues impacting file sharing online—noted that “when a pirate site shuts down, voluntarily or not, copyright issues typically play a role.”

For anime fans, the abrupt closure was disappointing because the hottest new anime titles can be hard to access legally, with delays as studios work to offer translations for various regions. The delays are so bad that some studios are considering combating piracy by using AI to push out translated versions more quickly. But fans fear this will only result in low-quality subtitles, CBR reported.

On Reddit, some fans also complained after relying exclusively on Animeflix to keep track of where they left off on anime shows that often span hundreds of episodes.

Others begged to be turned onto other anime piracy sites, while some speculated whether Animeflix might eventually pop up at a new domain. TorrentFreak noted that Animeflix shut down once previously several years ago but ultimately came back. One Redditor wrote, “another hero has passed away but the will, will be passed.” On another Reddit thread asking “will Animeflix be gone forever or maybe create a new site,” one commenter commiserated, writing, “We don’t know for sure. Only time will tell.”

It’s also possible that someone else may pick up the torch and operate a new piracy site under the same name. According to TorrentFreak, this is “likely.”

Animeflix did not reassure users that it may be back, instead urging them to find other sources for their favorite shows and movies.

“We hope the joy and excitement of anime continue to brighten your days through other wonderful platforms,” Animeflix’s Discord message said.

ACE did not immediately respond to Ars’ request for comment.

Judge says FTC lacks authority to issue rule banning noncompete agreements

Noncompete ban —

Authority cited by FTC just a “housekeeping statute,” US judge in Texas rules.

FTC Chair Lina Khan testifies before the House Appropriations Subcommittee on May 15, 2024, in Washington, DC.

Getty Images | Kevin Dietsch

A US judge ruled against the Federal Trade Commission in a challenge to its rule banning noncompete agreements, saying the FTC lacks “substantive” rulemaking authority.

The preliminary ruling only blocks enforcement of the noncompete ban against the plaintiff and other groups that intervened in the case, but it signals that the judge believes the FTC cannot enforce the rule. The case is in US District Court for the Northern District of Texas, so appeals would be heard in the US Court of Appeals for the 5th Circuit—which is generally regarded as one of the most conservative appeals courts in the country.

In April, the FTC issued a rule that would render the vast majority of current noncompete clauses unenforceable and ban future ones. The agency said that noncompete clauses are “an unfair method of competition and therefore a violation of Section 5 of the FTC Act,” calling them “a widespread and often exploitative practice imposing contractual conditions that prevent workers from taking a new job or starting a new business.”

A tax services firm called Ryan, LLC sued the FTC in an attempt to block the rule. The lawsuit was joined by the US Chamber of Commerce, two Texas business groups, and a lobbyist association that represents chief executive officers at US businesses.

In a ruling on Wednesday, US District Judge Ada Brown granted a preliminary injunction and postponed the effective date of the rule as it applies to the plaintiffs. The rule is scheduled to take effect on September 4, 2024. As of now, the FTC’s ban on noncompetes is slated to apply to everyone except the entities involved in the lawsuit.

“FTC lacks substantive rulemaking authority”

“The issue presented is whether the FTC’s ability to promulgate rules concerning unfair methods of competition include the authority to create substantive rules regarding unfair methods of competition,” Brown, a Trump appointee, wrote.

Brown acknowledged that “the FTC has some authority to promulgate rules to preclude unfair methods of competition.” But “the text, structure, and history of the FTC Act reveal that the FTC lacks substantive rulemaking authority with respect to unfair methods of competition under Section 6(g),” she wrote.

The FTC has argued it can impose the rule using authority under sections 5 and 6(g) of the FTC Act. “Alongside section 5, Congress adopted section 6(g) of the Act, in which it authorized the Commission to ‘make rules and regulations for the purpose of carrying out the provisions of’ the FTC Act, which include the Act’s prohibition of unfair methods of competition,” the FTC said when it issued the rule.

“The FTC stands by our clear authority, supported by statute and precedent, to issue this rule,” an FTC spokesperson told Ars today. “We will keep fighting to free hardworking Americans from unlawful noncompetes, which reduce innovation, inhibit economic growth, trap workers, and undermine Americans’ economic liberty.”

Consumer advocacy group Public Knowledge called Brown’s ruling “the latest in a series of attacks on the administrative state, which only further embolden judges without subject matter expertise to seize power from federal agencies and prevent them from effectively serving the American people.”

The Supreme Court last week overturned the 40-year-old Chevron precedent, which gave agencies leeway to interpret ambiguous laws as long as the agency’s conclusion was reasonable. The SCOTUS ruling effectively gives courts more power to block federal rules.

FTC’s cited authority just a “housekeeping statute”

Brown concluded that section 6(g) is merely a “housekeeping statute,” authorizing “rules of agency organization, procedure, or practice” but not “substantive rules.”

“Plaintiffs next contend the lack of a statutory penalty for violating rules promulgated under Section 6(g) demonstrates its lack of substantive rulemaking power. The Court agrees,” Brown wrote. “When authorizing legislative rulemaking, Congress also historically prescribes sanctions for violations of the agency’s rules—confirming that those rules create substantive obligations for regulated parties.”

The judge said the plaintiffs are likely to succeed on the merits and would be harmed if the rule takes effect. Brown intends to issue a ruling on the merits by August 30.

The preliminary injunction does not apply nationwide, as Brown chose to limit “the scope of the injunctive relief herein to named Plaintiff Ryan, LLC and Plaintiff-Intervenors Chamber of Commerce of the United States of America; Business Roundtable; Texas Association of Business; and Longview Chamber of Commerce.”

The business trade groups wanted the injunction to apply to all of their member entities but could not convince Brown to extend the injunction that far. “Plaintiff-Intervenors have directed the Court to neither sufficient evidence of their respective associational member(s) for which they seek standing, nor any of the three elements that must be met regarding associational standing. Without such developed briefing, the Court declines to extend injunctive relief to members of Plaintiff-Intervenors,” Brown wrote.

Tool preventing AI mimicry cracked; artists wonder what’s next

Aurich Lawson | Getty Images

For many artists, it’s a precarious time to post art online. AI image generators keep getting better at cheaply replicating a wider range of unique styles, and basically every popular platform is rushing to update user terms to seize permissions to scrape as much data as possible for AI training.

Defenses against AI training exist—like Glaze, a tool that adds a small amount of imperceptible-to-humans noise to images to stop image generators from copying artists’ styles. But they don’t provide a permanent solution at a time when tech companies appear determined to chase profits by building ever-more-sophisticated AI models that increasingly threaten to dilute artists’ brands and replace them in the market.

In one high-profile example just last month, the estate of Ansel Adams condemned Adobe for selling AI images stealing the famous photographer’s style, Smithsonian reported. Adobe quickly responded and removed the AI copycats. But it’s not just famous artists who risk being ripped off, and lesser-known artists may struggle to prove AI models are referencing their works. In this largely lawless world, every image uploaded risks contributing to an artist’s downfall, potentially watering down demand for their own work each time they promote new pieces online.

Unsurprisingly, artists have increasingly sought protections to diminish or dodge these AI risks. As tech companies update their products’ terms—like when Meta suddenly announced last December that it was training AI on a billion Facebook and Instagram user photos—artists frantically survey the landscape for new defenses. That’s why The Glaze Project, one of the few groups offering AI protections today, recently reported a dramatic surge in requests for its free tools.

Designed to help prevent style mimicry and even poison AI models to discourage data scraping without an artist’s consent or compensation, The Glaze Project’s tools are now in higher demand than ever. University of Chicago professor Ben Zhao, who created the tools, told Ars that the backlog for approving a “skyrocketing” number of requests for access is “bad.” And as he recently posted on X (formerly Twitter), an “explosion in demand” in June is only likely to be sustained as AI threats continue to evolve. For the foreseeable future, that means artists searching for protections against AI will have to wait.

Even if Zhao’s team did nothing but approve requests for WebGlaze, its invite-only web-based version of Glaze, “we probably still won’t keep up,” Zhao said. He’s warned artists on X to expect delays.

Compounding artists’ struggles, at the same time as demand for Glaze is spiking, the tool has come under attack by security researchers who claimed it was not only possible but easy to bypass Glaze’s protections. For security researchers and some artists, this attack calls into question whether Glaze can truly protect artists in these embattled times. But for thousands of artists joining the Glaze queue, the long-term future looks so bleak that any promise of protections against mimicry seems worth the wait.

Attack cracking Glaze sparks debate

Millions have downloaded Glaze already, and many artists are waiting weeks or even months for access to WebGlaze, mostly submitting requests for invites on social media. The Glaze Project vets every request to verify that each user is human and ensure bad actors don’t abuse the tools, so the process can take a while.

The team is currently struggling to approve hundreds of requests submitted daily through direct messages on Instagram and Twitter in the order they are received, and artists requesting access must be patient through prolonged delays. Because these platforms’ inboxes aren’t designed to sort messages easily, any artist who follows up on a request gets bumped to the back of the line—as their message bounces to the top of the inbox and Zhao’s team, largely volunteers, continues approving requests from the bottom up.

“This is obviously a problem,” Zhao wrote on X while discouraging artists from sending any follow-ups unless they’ve already gotten an invite. “We might have to change the way we do invites and rethink the future of WebGlaze to keep it sustainable enough to support a large and growing user base.”

Glaze interest is likely also spiking due to word of mouth. Reid Southen, a freelance concept artist for major movies, is advocating for all artists to use Glaze. Reid told Ars that WebGlaze is especially “nice” because it’s “available for free for people who don’t have the GPU power to run the program on their home machine.”

Apple Vision Pro, new cameras fail user-repairability analysis

Apple’s Vision Pro scored 0 points in US PIRG’s self-repairability analysis.

Kyle Orland

In December, New York became the first state to enact a “Right to Repair” law for electronics. Since then, other states, including Oregon and Minnesota, have passed similar laws. However, a recent analysis of some recently released gadgets shows that self-repair still has a long way to go before it becomes ubiquitous.

On Monday, the US Public Interest Research Group (PIRG) released its Leaders and Laggards report that examined user repairability of 21 devices subject to New York’s electronics Right to Repair law. The nonprofit graded devices “based on the quality and accessibility of repair manuals, spare parts, and other critical repair materials.”

Nathan Proctor, one of the report’s authors and senior director for the Campaign for the Right to Repair for the US PIRG Education Fund, told Ars Technica via email that PIRG focused on new models since the law only applies to new products, adding that PIRG “tried to include a range of covered devices from well-known brands.”

While all four smartphones included on the list received an A-minus or A, many other types of devices got disappointing grades. The HP Spectre Fold foldable laptop, for example, received a D-minus due to low parts (2 out of 10) and manual (4 out of 10) scores.

The report examined four camera models—Canon’s EOS R100, Fujifilm’s GFX 100 II, Nikon’s Zf, and Sony’s Alpha 6700—and all but one received an F. The outlier, the Sony camera, managed a D-plus.

Two VR headsets were also among the losers. US PIRG gave Apple’s Vision Pro and Meta’s Quest 3 an F.

You can see PIRG’s full score breakdown below:

Repair manuals are still hard to access

New York’s Digital Fair Repair Act requires consumer electronics brands to give consumers access to the same diagnostic tools, parts, and repair manuals that their own repair technicians use. However, PIRG struggled to access manuals for some recently released tech that’s subject to the law.

For example, Sony’s PlayStation 5 Slim received a 1/10 score. PIRG’s report includes an apparent screenshot of an online chat with Sony customer support, where a rep said that the company doesn’t have a copy of the console’s service manual available and that “if the unit needs repair, we recommend/refer customers to the service center.”

Apple’s Vision Pro, meanwhile, got a 0/10 manual score, while the Meta Quest 3 got a 1/10.

According to the report, “only 12 of 21 products provided replacement procedures, and 11 listed which tools are required to disassemble the product.”

The report points to difficulties in accessing repair manuals, with its authors stating that reaching out to customer service representatives “often” proved “unhelpful.” The group also noted a potential lack of communication between customer service reps and a company’s broader repairability efforts.

For example, Apple launched its Self Service Repair Store in April 2022. But PIRG’s report said:

 … our interaction with their customer service team seemed to imply that there was no self-repair option for [Apple] phones. We were told by an Apple support representative that ‘only trained Apple Technician[s]’ would be able to replace our phone screen or battery, despite a full repair manual and robust parts selection available on the Apple website.

Apple didn’t immediately respond to Ars Technica’s request for comment.

“Everything’s frozen”: Ransomware locks credit union users out of bank accounts

Ransomware attack —

Patelco Credit Union in Calif. shut down numerous banking services after attack.

ATM at a Patelco Credit Union branch in Dublin, California, on July 23, 2018.

Getty Images | Smith Collection/Gado

A California-based credit union with over 450,000 members said it suffered a ransomware attack that is disrupting account services and could take weeks to recover from.

“The next few days—and coming weeks—may present challenges for our members, as we continue to navigate around the limited functionality we are experiencing due to this incident,” Patelco Credit Union CEO Erin Mendez told members in a July 1 message that said the security problem was caused by a ransomware attack. Online banking and several other services are unavailable, while several other services and types of transactions have limited functionality.

Patelco Credit Union was hit by the attack on June 29 and has been posting updates on this page, which says the credit union “proactively shut down some of our day-to-day banking systems to contain and remediate the issue… As a result of our proactive measures, transactions, transfers, payments, and deposits are unavailable at this time. Debit and credit cards are working with limited functionality.”

Patelco Credit Union is a nonprofit cooperative in Northern California with $9 billion in assets and 37 local branches. “Our priority is the safe and secure restoration of our banking systems,” a July 2 update said. “We continue to work alongside leading third-party cybersecurity experts in support of this effort. We have also been cooperating with regulators and law enforcement.”

“Everything’s frozen”

Patelco member Enrique Juarez said he was having trouble accessing his Social Security payment, according to the Mercury News. “I’ve never had a problem before,” Juarez told the news organization. “Everything’s frozen, I can’t even check my balance until this is resolved—and they don’t know [when that will happen].”

Patelco says that check and cash deposits should be working, but direct deposits have limited functionality.

Security expert Ahmed Banafa “said Tuesday that it looks likely that hackers infiltrated the bank’s internal databases via a phishing email and encrypted its contents, locking out the bank from its own systems,” the Mercury News reported. Banafa was paraphrased as saying that it is “likely the hackers will demand an amount of money from the credit union to restore its systems back to normal, and will continue to hold the bank’s accounts hostage until either the bank finds a way around the hack or until the hackers are paid.”

Change Healthcare, a health payment processing company hit by ransomware this year, told lawmakers that it paid a ransom of $22 million in bitcoin. Change Healthcare owner UnitedHealth failed to use multifactor authentication on critical systems.

Patelco hasn’t revealed details about how it will recover from the ransomware attack but acknowledged to customers that their personal information could be at risk. “The investigation into the nature and scope of the incident is ongoing,” the credit union said. “If the investigation determines that individuals’ information is involved as a result of this incident, we will of course notify those individuals and provide resources to help protect their information in accordance with applicable laws.”

Patelco waives fees, warns of more outages

Patelco said it is waiving overdraft, late payment, and ATM fees “until we are back up and running.” Members who need to access funds from direct deposits can do so by writing a check, using an ATM card to get cash, or by making a purchase, Patelco said.

As of yesterday, members could expect to “experience short, intermittent outages at Patelco ATMs,” the organization said. “This is normal and to be expected during our recovery process. Access to shared ATMs will not be interrupted as part of this process and they remain available for cash withdrawals and deposits.”

A chart on the security update page says the services that remain unavailable include online banking, the mobile app, outgoing wire transfers, monthly statements, Zelle, balance inquiries, and online bill payments.

Patelco branches, call center services, and live chats have “limited functionality,” as do debit card transactions, credit card transactions, and direct deposits, according to the chart. Services that are listed as available include check and cash deposits, ATM withdrawals, ACH transfers, ACH for bill payments, and in-branch loan payments.

“Everything’s frozen”: Ransomware locks credit union users out of bank accounts Read More »

japan-wins-2-year-“war-on-floppy-disks,”-kills-regulations-requiring-old-tech

Japan wins 2-year “war on floppy disks,” kills regulations requiring old tech

Farewell, floppy —

But what about fax machines?

floppy disks on white background

About two years after the country’s digital minister publicly declared a “war on floppy disks,” Japan reportedly stopped using floppy disks in governmental systems as of June 28.

Per a Reuters report on Wednesday, Japan’s government “eliminated the use of floppy disks in all its systems.” The report notes that by mid-June, Japan’s Digital Agency (a body set up during the COVID-19 pandemic to update government technology) had “scrapped all 1,034 regulations governing their use, except for one environmental stricture related to vehicle recycling.” That leaves a single regulation under which the government could still turn to floppy disks, though further details weren’t available.

Digital Minister Taro Kono, the politician behind the modernization of the Japanese government’s tech, has made his distaste for floppy disks and other old office tech, like fax machines, quite public. Kono, who is reportedly considering a second run for the presidency of Japan’s ruling Liberal Democratic Party, told Reuters in a statement today:

We have won the war on floppy disks on June 28!

Although Kono only announced plans to eradicate floppy disks from the government two years ago, it’s been 20 years since floppy disks were in their prime and 53 years since they debuted. It was only in January 2024 that the Japanese government stopped requiring physical media, like floppy disks and CD-ROMs, for 1,900 types of submissions to the government, such as business filings and submission forms for citizens.

The timeline may be surprising, considering that the last company to make floppy disks, Sony, stopped doing so in 2011. As a storage medium, of course, floppies can’t compete with today’s options, since most floppies max out at 1.44MB (2.88MB floppies were also available). And you’ll be hard-pressed to find a modern system that can still read the disks. There are also practical risks with the aging format, such as Tokyo police reportedly losing a pair of floppy disks containing information on dozens of public housing applicants in 2021.
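For a rough sense of scale, here is a back-of-the-envelope calculation of our own (not from the report), assuming a hypothetical 64GB flash drive as the modern point of comparison:

```python
# Illustrative arithmetic only: how many 1.44MB floppy disks would it
# take to match one modern flash drive? The 64GB figure is a
# hypothetical example, not a number from the article.
FLOPPY_MB = 1.44
FLASH_DRIVE_GB = 64

disks_needed = (FLASH_DRIVE_GB * 1000) / FLOPPY_MB
print(f"{disks_needed:,.0f} floppies")  # prints "44,444 floppies"
```

Even a midrange drive stands in for tens of thousands of disks, which helps explain why floppy-based workflows have become a logistical burden.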

But Japan isn’t the only government body with surprisingly recent ties to the technology. For example, San Francisco’s Muni Metro light rail runs its train control system on software loaded from floppy disks and plans to keep doing so until 2030. The US Air Force used 8-inch floppies until 2019.

Outside of the public sector, floppy disks remain common in numerous industries, including embroidery, cargo airlines, and CNC machines. We reported on Chuck E. Cheese using floppy disks for its animatronics as recently as January 2023.

Modernization resistance

Now that the Japanese government considers its reliance on floppy disks over, attention turns to what other modernization overhauls, if any, it will make.

Despite various technological achievements, the country has a reputation for holding on to dated technology. The Institute for Management Development’s (IMD) 2023 World Digital Competitiveness Ranking listed Japan as number 32 out of 64 economies. The IMD says its rankings measure the “capacity and readiness of 64 economies to adopt and explore digital technologies as a key driver for economic transformation in business, government, and wider society.”

It may be a while before the government is ready to let go of some older technologies. For example, government officials have reportedly resisted moving administrative systems to the cloud. Kono urged government offices to stop requiring hanko personal stamps in 2020, but per The Japan Times, movement away from the seal is occurring at a “glacial pace.”

Many workplaces in Japan also opt for fax machines over email, and 2021 plans to remove fax machines from government offices were abandoned due to resistance.

Some believe Japan’s reliance on older technology stems from the comfort and efficiencies associated with analog tech as well as governmental bureaucracy.

Japan wins 2-year “war on floppy disks,” kills regulations requiring old tech Read More »

millions-of-onlyfans-paywalls-make-it-hard-to-detect-child-sex-abuse,-cops-say

Millions of OnlyFans paywalls make it hard to detect child sex abuse, cops say


OnlyFans’ paywalls make it hard for police to detect child sexual abuse materials (CSAM) on the platform, Reuters reported—especially new CSAM that can be harder to uncover online.

Because each OnlyFans creator posts their content behind their own paywall, five specialists in online child sexual abuse told Reuters that it’s hard to independently verify just how much CSAM is posted. Cops would seemingly need to subscribe to each account to monitor the entire platform, one expert who aids in police CSAM investigations, Trey Amick, suggested to Reuters.

OnlyFans claims that the amount of CSAM on its platform is extremely low. Out of 3.2 million accounts sharing “hundreds of millions of posts,” OnlyFans only removed 347 posts as suspected CSAM in 2023. Each post was voluntarily reported to the CyberTipline of the National Center for Missing and Exploited Children (NCMEC), which OnlyFans told Reuters has “full access” to monitor content on the platform.

However, that intensified monitoring seems to have only just begun. NCMEC just got access to OnlyFans in late 2023, the child safety group told Reuters. And NCMEC seemingly can’t scan the entire platform at once, telling Reuters that its access was “limited” exclusively “to OnlyFans accounts reported to its CyberTipline or connected to a missing child case.”

Similarly, OnlyFans told Reuters that police do not have to subscribe to investigate a creator’s posts, but the platform only grants free access to accounts when there’s an active investigation. That means once police suspect that CSAM is being exchanged on an account, they get “full access” to review “account details, content, and direct messages,” Reuters reported.

But that access doesn’t aid police hoping to uncover CSAM shared on accounts not yet flagged for investigation. That’s a problem, a Reuters investigation found, because it’s easy for creators to make a new account, where bad actors can mask their identities to avoid OnlyFans’ “controls meant to hold account holders responsible for their own content,” one detective, Edward Scoggins, told Reuters.

Evading OnlyFans’ CSAM detection seems easy

OnlyFans told Reuters that “would-be creators must provide at least nine pieces of personally identifying information and documents, including bank details, a selfie while holding a government photo ID, and—in the United States—a Social Security number.”

“All this is verified by human judgment and age-estimation technology that analyzes the selfie,” OnlyFans told Reuters. On OnlyFans’ site, the platform further explained that “we continuously scan our platform to prevent the posting of CSAM. All our content moderators are trained to identify and swiftly report any suspected CSAM.”

However, Reuters found that none of these controls worked 100 percent of the time to stop bad actors from sharing CSAM. And the same seemingly holds true for some minors motivated to post their own explicit content. One girl told Reuters that she evaded age verification first by using an adult’s driver’s license to sign up, then by taking over an account of an adult user.

An OnlyFans spokesperson told Ars that the low amount of CSAM reported to NCMEC is a “testament to the rigorous safety controls OnlyFans has in place.”

“OnlyFans is proud of the work we do to aggressively target, report, and support the investigations and prosecutions of anyone who seeks to abuse our platform in this way,” OnlyFans’ spokesperson told Ars. “Unlike many other platforms, the lack of anonymity and absence of end-to-end encryption on OnlyFans means that reports are actionable by law enforcement and prosecutors.”

Millions of OnlyFans paywalls make it hard to detect child sex abuse, cops say Read More »

ai-trains-on-kids’-photos-even-when-parents-use-strict-privacy-settings

AI trains on kids’ photos even when parents use strict privacy settings

“Outrageous” —

Even unlisted YouTube videos are used to train AI, watchdog warns.


Human Rights Watch (HRW) continues to reveal how photos of real children casually posted online years ago are being used to train AI models powering image generators—even when platforms prohibit scraping and families use strict privacy settings.

Last month, HRW researcher Hye Jung Han found 170 photos of Brazilian kids linked in LAION-5B, a popular AI dataset built from Common Crawl snapshots of the public web. Now, she has released a second report, flagging 190 photos of children from all of Australia’s states and territories, including Indigenous children who may be particularly vulnerable to harms.

These photos are linked in the dataset “without the knowledge or consent of the children or their families.” They span the entirety of childhood, making it possible for AI image generators to generate realistic deepfakes of real Australian children, Han’s report said. Perhaps even more concerning, the URLs in the dataset sometimes reveal identifying information about children, including their names and locations where photos were shot, making it easy to track down children whose images might not otherwise be discoverable online.

That puts children in danger of privacy and safety risks, Han said, and some parents thinking they’ve protected their kids’ privacy online may not realize that these risks exist.

From a single link to one photo that showed “two boys, ages 3 and 4, grinning from ear to ear as they hold paintbrushes in front of a colorful mural,” Han could trace “both children’s full names and ages, and the name of the preschool they attend in Perth, in Western Australia.” And perhaps most disturbingly, “information about these children does not appear to exist anywhere else on the Internet”—suggesting that families were particularly cautious in shielding these boys’ identities online.

Stricter privacy settings were used in another image that Han found linked in the dataset. The photo showed “a close-up of two boys making funny faces, captured from a video posted on YouTube of teenagers celebrating” during the week after their final exams, Han reported. Whoever posted that YouTube video adjusted privacy settings so that it would be “unlisted” and would not appear in searches.

Only someone with a link to the video was supposed to have access, but that didn’t stop Common Crawl from archiving the image, nor did YouTube policies prohibiting AI scraping or harvesting of identifying information.

Reached for comment, YouTube’s spokesperson, Jack Malon, told Ars that YouTube has “been clear that the unauthorized scraping of YouTube content is a violation of our Terms of Service, and we continue to take action against this type of abuse.” But Han worries that even if YouTube did join efforts to remove images of children from the dataset, the damage has been done, since AI tools have already trained on them. That’s why—even more than parents need tech companies to up their game blocking AI training—kids need regulators to intervene and stop training before it happens, Han’s report said.

Han’s report comes a month before Australia is expected to release a reformed draft of the country’s Privacy Act. Those reforms include a draft of Australia’s first child data protection law, known as the Children’s Online Privacy Code, but Han told Ars that even people involved in long-running discussions about reforms aren’t “actually sure how much the government is going to announce in August.”

“Children in Australia are waiting with bated breath to see if the government will adopt protections for them,” Han said, emphasizing in her report that “children should not have to live in fear that their photos might be stolen and weaponized against them.”

AI uniquely harms Australian kids

To hunt down the photos of Australian kids, Han “reviewed fewer than 0.0001 percent of the 5.85 billion images and captions contained in the data set.” Because her sample was so small, Han expects that her findings represent a significant undercount of how many children could be impacted by the AI scraping.

“It’s astonishing that out of a random sample size of about 5,000 photos, I immediately fell into 190 photos of Australian children,” Han told Ars. “You would expect that there would be more photos of cats than there are personal photos of children,” since LAION-5B is a “reflection of the entire Internet.”

LAION is working with HRW to remove links to all the images flagged, but cleaning up the dataset does not seem to be a fast process. Han told Ars that based on her most recent exchange with the German nonprofit, LAION had not yet removed links to photos of Brazilian kids that she reported a month ago.

LAION declined Ars’ request for comment.

In June, LAION’s spokesperson, Nathan Tyler, told Ars that, “as a nonprofit, volunteer organization,” LAION is committed to doing its part to help with the “larger and very concerning issue” of misuse of children’s data online. But removing links from the LAION-5B dataset does not remove the images online, Tyler noted, where they can still be referenced and used in other AI datasets, particularly those relying on Common Crawl. And Han pointed out that removing the links from the dataset doesn’t change AI models that have already trained on them.

“Current AI models cannot forget data they were trained on, even if the data was later removed from the training data set,” Han’s report said.

Kids whose images are used to train AI models are exposed to a variety of harms, Han reported, including a risk that image generators could more convincingly create harmful or explicit deepfakes. In Australia last month, “about 50 girls from Melbourne reported that photos from their social media profiles were taken and manipulated using AI to create sexually explicit deepfakes of them, which were then circulated online,” Han reported.

For First Nations children—”including those identified in captions as being from the Anangu, Arrernte, Pitjantjatjara, Pintupi, Tiwi, and Warlpiri peoples”—the inclusion of links to photos threatens unique harms. Because culturally, First Nations peoples “restrict the reproduction of photos of deceased people during periods of mourning,” Han said the AI training could perpetuate harms by making it harder to control when images are reproduced.

Once an AI model trains on the images, there are other obvious privacy risks, including a concern that AI models are “notorious for leaking private information,” Han said. Guardrails added to image generators do not always prevent these leaks, with some tools “repeatedly broken,” Han reported.

LAION recommends that, if troubled by the privacy risks, parents remove images of kids online as the most effective way to prevent abuse. But Han told Ars that’s “not just unrealistic, but frankly, outrageous.”

“The answer is not to call for children and parents to remove wonderful photos of kids online,” Han said. “The call should be [for] some sort of legal protections for these photos, so that kids don’t have to always wonder if their selfie is going to be abused.”

AI trains on kids’ photos even when parents use strict privacy settings Read More »

scotus-agrees-to-review-texas-law-that-caused-pornhub-to-leave-the-state

SCOTUS agrees to review Texas law that caused Pornhub to leave the state

A Texas flag painted on very old boards and hanging on a barn.

Getty Images | Kathryn8

The US Supreme Court today agreed to hear a challenge to the Texas law that requires age verification on porn sites. A list of orders released this morning shows that the court granted a petition for certiorari filed by the Free Speech Coalition, an adult-industry lobby group.

In March, the US Court of Appeals for the 5th Circuit ruled that Texas could continue enforcing the law while litigation continues. In a 2-1 decision, 5th Circuit judges wrote that “the age-verification requirement is rationally related to the government’s legitimate interest in preventing minors’ access to pornography. Therefore, the age-verification requirement does not violate the First Amendment.”

The dissenting judge faulted the 5th Circuit majority for reviewing the law under the “rational-basis” standard instead of the more stringent strict scrutiny. The Supreme Court “has unswervingly applied strict scrutiny to content-based regulations that limit adults’ access to protected speech,” Judge Patrick Higginbotham wrote at the time.

Though the 5th Circuit majority upheld the age-verification rule, it also found that a requirement to display health warnings about pornography “unconstitutionally compel[s] speech” and cannot be enforced.

While the Supreme Court could eventually overturn the age-verification law, it is being enforced in the meantime. In April, the Supreme Court declined a request to temporarily block the Texas law.

Pornhub disabled site in Texas

After losing that April decision, the Free Speech Coalition said: “[We] remain hopeful that the Supreme Court will grant our petition for certiorari and reaffirm its lengthy line of cases applying strict scrutiny to content-based restrictions on speech like those in the Texas statute we’ve challenged.”

The Texas law, which took effect in September 2023, applies to websites in which more than one-third of the content “is sexual material harmful to minors.” Those websites must “use reasonable age verification methods” to limit their material to adults.

In February 2024, Texas Attorney General Ken Paxton alleged in a lawsuit that Pornhub owner Aylo (formerly MindGeek) violated the law. Pornhub disabled its website in Texas after the 5th Circuit ruling and has gone dark in other states in response to similar age laws.

The Free Speech Coalition’s petition for certiorari said that the Supreme Court “has repeatedly held that States may rationally restrict minors’ access to sexual materials, but such restrictions must withstand strict scrutiny if they burden adults’ access to constitutionally protected speech.” The group asked the court to determine whether the 5th Circuit “erred as a matter of law in applying rational-basis review to a law burdening adults’ access to protected speech, instead of strict scrutiny as this Court and other circuits have consistently done.”

“While purportedly seeking to limit minors’ access to online sexual content, the Act imposes significant burdens on adults’ access to constitutionally protected expression,” the petition said. “Of central relevance here, it requires every user, including adults, to submit personally identifying information to access sensitive, intimate content over a medium—the Internet—that poses unique security and privacy concerns.”

SCOTUS agrees to review Texas law that caused Pornhub to leave the state Read More »

biden-rushes-to-avert-labor-shortage-with-chips-act-funding-for-workers

Biden rushes to avert labor shortage with CHIPS Act funding for workers

Less than one month to apply —

To dodge labor shortage, US finally aims CHIPS Act funding at training workers.

US President Joe Biden (C) speaks during a tour of the TSMC Semiconductor Manufacturing Facility in Phoenix, Arizona, on December 6, 2022.

Enlarge / US President Joe Biden (C) speaks during a tour of the TSMC Semiconductor Manufacturing Facility in Phoenix, Arizona, on December 6, 2022.

Hoping to dodge a significant projected worker shortage in the next few years, the Biden administration will finally start funding workforce development projects under the historic CHIPS and Science Act to support America’s ambition to become the world’s leading chipmaker.

The Workforce Partner Alliance (WFPA) will be established through the CHIPS Act’s first round of funding focused on workers, officials confirmed in a press release. The program is designed to “focus on closing workforce and skills gaps in the US for researchers, engineers, and technicians across semiconductor design, manufacturing, and production,” a program requirements page said.

Bloomberg reported that the US risks a technician shortage reaching 90,000 by 2030. This differs slightly from the forecast by Natcast, the nonprofit that will administer the new program, which found that out of “238,000 jobs the industry is projected to create by 2030,” the semiconductor industry “will be unable to fill more than 67,000.”

Whatever the actual shortfall turns out to be, tens of thousands of jobs are projected to go unfilled just as the country hopes to produce more chips than ever, and the administration wants to quickly train enough workers to fill openings for “researchers, engineers, and technicians across semiconductor design, manufacturing, and production,” a WFPA site said.

To do this, a “wide range of workforce solution providers” are encouraged to submit “high-impact” WFPA project proposals that can be completed within two years, with total budgets of between $500,000 and $2 million per award, the press release said.

Examples of “evidence-based workforce development strategies and methodologies that may be considered for this program” include registered apprenticeship and pre-apprenticeship programs, colleges or universities offering semiconductor industry-relevant degrees, programs combining on-the-job training with effective education or mentorship, and “experiential learning opportunities such as co-ops, externships, internships, or capstone projects.” While programs supporting construction activities will not be considered, programs designed to “reduce barriers” to entry in the semiconductor industry can use funding to support workers’ training, such as for providing childcare or transportation for workers.

“Making investments in the US semiconductor workforce is an opportunity to serve underserved communities, to connect individuals to good-paying sustainable jobs across the country, and to develop a robust workforce ecosystem that supports an industry essential to the national and economic security of the US,” Natcast said.

Between four and 10 projects will be selected, providing opportunities both for “established programs with a track record of success seeking to scale” and for newer programs “that meet a previously unaddressed need, opportunity, or theory of change” to be launched or substantially expanded.

The deadline to apply for funding is July 26, which gives applicants less than one month to get their proposals together. Applicants must have a presence in the US but can include for-profit organizations, accredited education institutions, training programs, state and local government agencies, and nonprofit organizations, Natcast’s eligibility requirements said.

Natcast—the nonprofit entity created to operate the National Semiconductor Technology Center Consortium—will manage the WFPA. An FAQ will be provided soon, Natcast said, but in the meantime, the agency is giving a brief window to submit questions about the program. Curious applicants can send questions to [email protected] until 11:59 pm ET on July 9.

Awardees will be notified by early fall, Natcast said.

Planning the future of US chip workforce

In Natcast’s press release, Deirdre Hanford, Natcast’s CEO, said that the WFPA will “accelerate progress in the US semiconductor industry by tackling its most critical challenges, including the need for a highly skilled workforce that can meet the evolving demands of the industry.”

And the senior manager of Natcast’s workforce development programs, Michael Barnes, said that the WFPA will be critical to accelerating the industry’s growth in the US.

“It is imperative that we develop a domestic semiconductor workforce ecosystem that can support the industry’s anticipated growth and strengthen American national security, economic prosperity, and global competitiveness,” Barnes said.

Biden rushes to avert labor shortage with CHIPS Act funding for workers Read More »