

Report: Boeing may reacquire Spirit at higher price despite hating optics

Still up in the air —

Spirit was initially spun out from Boeing Commercial Airplanes in 2005.


Amid safety scandals involving “many loose bolts” and widespread problems with its 737 Max 9s, Boeing is apparently considering buying back Spirit AeroSystems, the key supplier behind some of its current manufacturing problems, sources told The Wall Street Journal.

Spirit was initially spun out from Boeing Commercial Airplanes in 2005, and Boeing had planned to keep it that way. Last year, Boeing CEO Dave Calhoun sought to dispel rumors that Boeing might reacquire Spirit as federal regulators launched investigations into both companies. But now Calhoun appears to be “softening that stance,” the WSJ reported.

According to the WSJ’s sources, no deal has formed yet, but Spirit has initiated talks with Boeing and “hired bankers to explore strategic options.” Sources also confirmed that Spirit is weighing whether to sell its operations in Ireland, which manufacture parts for Boeing rival Airbus.

Perhaps paving the way for these talks, Spirit replaced its CEO last fall with a former Boeing executive, Patrick Shanahan. In a press release noting that Spirit relies “on Boeing for a significant portion of our revenues,” Spirit touted Shanahan as a “seasoned executive” with 31 years at Boeing, and Shanahan promised to “stabilize” Spirit’s operations.

If Boeing reacquired Spirit, it might help reduce backlash over Boeing outsourcing manufacturing of its planes, but it likely wouldn’t help Boeing escape the ongoing scrutiny. While the WSJ reported that “Spirit parts frequently arrive” at the Boeing factory “with defects,” it was “a snafu at Boeing’s factory” that led Alaska Airlines to ground 65 Boeing aircraft over safety concerns after a door plug detached mid-flight, endangering passengers and crew.

Sources later revealed that it was Boeing employees who failed to put bolts back in when they reinstalled a door plug, reportedly causing the malfunction that forced Alaska Airlines to make an emergency landing. As a result, Boeing withdrew a safety exemption it had requested “to prematurely allow the 737 Max 7 to enter commercial service.” At that time, US Sen. Tammy Duckworth (D-Ill.) accused Boeing of a “bold-face attempt to put profits over the safety of the flying public.”

Purchasing Spirit would appear to be a last resort for Boeing, the WSJ reported, noting that so far, “Boeing has done everything short of acquiring Spirit in an effort to gain control over the supplier.”

But Reuters confirmed the WSJ’s report with an industry source, so Boeing increasingly seems to feel it has no other options left, despite working closely with Shanahan for the past few months to keep Spirit’s troubles from impacting Boeing’s bottom line. One industry source told Reuters that in the time since Boeing spun off Spirit, “the optics of buying at a higher price were among the factors that discouraged such a move.”

For Spirit, which attributes nearly two-thirds of its revenues to Boeing, the WSJ reported, being brought back into the Boeing fold could be the only way to survive these turbulent times. Currently valued at about $3.3 billion, Spirit has struggled for months to shore up a commercial agreement with Airbus and notably failed to stabilize after receiving a “$100 million cash infusion from Boeing” last year, the WSJ reported.

But for Boeing, the obvious downside of the purchase would be taking on Spirit’s mess at the same time Boeing is trying to clean up its own image.



WhatsApp finally forces Pegasus spyware maker to share its secret code

In on the secret —

Israeli spyware maker loses fight to only share information on installation.


WhatsApp will soon be granted access to explore the “full functionality” of the NSO Group’s Pegasus spyware—sophisticated malware the Israeli Ministry of Defense has long guarded as a “highly sought” state secret, The Guardian reported.

Since 2019, WhatsApp has pushed for access to the NSO’s spyware code after alleging that Pegasus was used to spy on 1,400 WhatsApp users over a two-week period, gaining unauthorized access to their sensitive data, including encrypted messages. WhatsApp suing the NSO, Ars noted at the time, was “an unprecedented legal action” that took “aim at the unregulated industry that sells sophisticated malware services to governments around the world.”

Initially, the NSO sought to block all discovery in the lawsuit “due to various US and Israeli restrictions,” but that blanket request was denied. Then, last week, the NSO lost another fight to keep WhatsApp away from its secret code.

As the court considered each side’s motions to compel discovery, a US district judge, Phyllis Hamilton, rejected the NSO’s argument that it should only be required to hand over information about Pegasus’ installation layer.

Hamilton sided with WhatsApp, granting the Meta-owned app’s request for “information concerning the full functionality of the relevant spyware,” writing that “information showing the functionality of only the installation layer of the relevant spyware would not allow plaintiffs to understand how the relevant spyware performs the functions of accessing and extracting data.”

WhatsApp has alleged that Pegasus can “intercept communications sent to and from a device, including communications over iMessage, Skype, Telegram, WeChat, Facebook Messenger, WhatsApp, and others” and that it could also be “customized for different purposes, including to intercept communications, capture screenshots, and exfiltrate browser history.”

To prove this, WhatsApp needs access to “all relevant spyware”—specifically “any NSO spyware targeting or directed at WhatsApp servers, or using WhatsApp in any way to access Target Devices”—for “a period of one year before the alleged attack to one year after the alleged attack,” Hamilton concluded.

The NSO has so far not commented on the order, but WhatsApp was pleased with this outcome.

“The recent court ruling is an important milestone in our long running goal of protecting WhatsApp users against unlawful attacks,” WhatsApp’s spokesperson told The Guardian. “Spyware companies and other malicious actors need to understand they can be caught and will not be able to ignore the law.”

But Hamilton did not grant all of WhatsApp’s requests for discovery, sparing the NSO from sharing specific information regarding its server architecture because WhatsApp “would be able to glean the same information from the full functionality of the alleged spyware.”

Perhaps more significantly, the NSO also won’t be compelled to identify its clients. While the NSO does not publicly name the governments that purchase its spyware, reports indicate that Poland, Saudi Arabia, Rwanda, India, Hungary, and the United Arab Emirates have used it to target dissidents, The Guardian reported. In 2021, the US blacklisted the NSO for allegedly spreading “digital tools used for repression.”

In the same order, Hamilton also denied the NSO’s request to compel WhatsApp to share its post-complaint communications with the Citizen Lab, which served as a third-party witness in the case to support WhatsApp’s argument that “Pegasus is misused by NSO’s customers against ‘civil society.’”

It appeared that the NSO sought WhatsApp’s post-complaint communications with Citizen Lab as a way to potentially pressure WhatsApp into dropping Citizen Lab’s statement from the record. Hamilton quoted a court filing from the NSO that curiously noted: “If plaintiffs would agree to withdraw from their case Citizen Lab’s contention that Pegasus was used against members of ‘civil society’ rather than to investigate terrorism and serious crime, there would be much less need for this discovery.”

Ultimately, Hamilton denied the NSO’s request because “the court fails to see the relevance of the requested discovery.”

As discovery in the case proceeds, the court expects to receive expert disclosures from each side on August 30 before the trial, which is expected to start on March 3, 2025.



Judge mocks X for “vapid” argument in Musk’s hate speech lawsuit


It looks like Elon Musk may lose X’s lawsuit against hate speech researchers who encouraged a major brand boycott after flagging ads appearing next to extremist content on X, the social media site formerly known as Twitter.

X is trying to argue that the Center for Countering Digital Hate (CCDH) violated the site’s terms of service and illegally accessed non-public data to conduct its reporting, allegedly posing a security risk for X. The boycott, X alleged, cost the company tens of millions of dollars by spooking advertisers, while X contends that the CCDH’s reporting is misleading and ads are rarely served on extremist content.

But at a hearing Thursday, US district judge Charles Breyer told the CCDH that he would consider dismissing X’s lawsuit, repeatedly appearing to mock X’s decision to file it in the first place.

Seemingly skeptical of X’s entire argument, Breyer appeared particularly focused on how X intended to prove that the CCDH could have known that its reporting would trigger such substantial financial losses, as the lawsuit hinges on whether the alleged damages were “foreseeable,” NPR reported.

X’s lawyer, Jon Hawk, argued that when the CCDH joined Twitter in 2019, the group agreed to terms of service that noted those terms could change. So when Musk purchased Twitter and updated rules to reinstate accounts spreading hate speech, the CCDH should have been able to foresee those changes in terms and therefore anticipate that any reporting on spikes in hate speech would cause financial losses.

According to CNN, this is where Breyer became frustrated, telling Hawk, “I’m trying to figure out in my mind how that’s possibly true, because I don’t think it is.”

“What you have to tell me is, why is it foreseeable?” Breyer said. “That they should have understood that, at the time they entered the terms of service, that Twitter would then change its policy and allow this type of material to be disseminated?

“That, of course, reduces foreseeability to one of the most vapid extensions of law I’ve ever heard,” Breyer added. “‘Oh, what’s foreseeable is that things can change, and therefore, if there’s a change, it’s foreseeable.’ I mean, that argument is truly remarkable.”

According to NPR, Breyer suggested that X was trying to “shoehorn” its legal theory by using language from a breach of contract claim, when what the company actually appeared to be alleging was defamation.

“You could’ve brought a defamation case; you didn’t bring a defamation case,” Breyer said. “And that’s significant.”

Breyer directly noted that one reason why X might not bring a defamation suit was if the CCDH’s reporting was accurate, NPR reported.

CCDH’s CEO and founder, Imran Ahmed, provided a statement to Ars, confirming that the group is “very pleased with how yesterday’s argument went, including many of the questions and comments from the court.”

“We remain confident in the strength of our arguments for dismissal,” Ahmed said.



Elon Musk sues OpenAI and Sam Altman, accusing them of chasing profits

YA Musk lawsuit —

OpenAI is now a “closed-source de facto subsidiary” of Microsoft, says lawsuit.


Elon Musk has sued OpenAI and its chief executive Sam Altman for breach of contract, alleging they have compromised the start-up’s original mission of building artificial intelligence systems for the benefit of humanity.

In the lawsuit, filed to a San Francisco court on Thursday, Musk’s lawyers wrote that OpenAI’s multibillion-dollar alliance with Microsoft had broken an agreement to make a major breakthrough in AI “freely available to the public.”

Instead, the lawsuit said, OpenAI was working on “proprietary technology to maximise profits for literally the largest company in the world.”

The legal fight escalates a long-running dispute between Musk, who has founded his own AI company, known as xAI, and OpenAI, which has received a $13 billion investment from Microsoft.

Musk, who helped co-found OpenAI in 2015, said in his legal filing he had donated $44 million to the group, and had been “induced” to make contributions by promises, “including in writing,” that it would remain a non-profit organisation.

He left OpenAI’s board in 2018 following disagreements with Altman on the direction of research. A year later, the group established the for-profit arm that Microsoft has invested into.

Microsoft’s president Brad Smith told the Financial Times this week that while the companies were “very important partners,” “Microsoft does not control OpenAI.”

Musk’s lawsuit alleges that OpenAI’s latest AI model, GPT4, released in March last year, breached the threshold for artificial general intelligence (AGI), at which computers function at or above the level of human intelligence.

The Microsoft deal only gives the tech giant a licence to OpenAI’s pre-AGI technology, the lawsuit said, and determining when this threshold is reached is key to Musk’s case.

The lawsuit seeks a court judgment over whether GPT4 should already be considered to be AGI, arguing that OpenAI’s board was “ill-equipped” to make such a determination.

The filing adds that OpenAI is also building another model, Q*, that will be even more powerful and capable than GPT4. It argues that OpenAI is committed under the terms of its founding agreement to make such technology available publicly.

“Mr. Musk has long recognised that AGI poses a grave threat to humanity—perhaps the greatest existential threat we face today,” the lawsuit says.

“To this day, OpenAI, Inc.’s website continues to profess that its charter is to ensure that AGI ‘benefits all of humanity’,” it adds. “In reality, however, OpenAI, Inc. has been transformed into a closed-source de facto subsidiary of the largest technology company in the world: Microsoft.”

OpenAI maintains it has not yet achieved AGI, despite its models’ success in language and reasoning tasks. Large language models like GPT4 still generate errors, fabrications and so-called hallucinations.

The lawsuit also seeks to “compel” OpenAI to adhere to its founding agreement to build technology that does not simply benefit individuals such as Altman and corporations such as Microsoft.

Musk’s own xAI company is a direct competitor to OpenAI and launched its first product, a chatbot named Grok, in December.

OpenAI declined to comment. Representatives for Musk have been approached for comment. Microsoft did not immediately respond to a request for comment.

The Microsoft-OpenAI alliance is being reviewed by competition watchdogs in the US, EU and UK.

The US Securities and Exchange Commission issued subpoenas to OpenAI executives in November as part of an investigation into whether Altman had misled its investors, according to people familiar with the move.

That investigation came shortly after OpenAI’s board fired Altman as chief executive only to reinstate him days later. A new board has since been instituted including former Salesforce co-chief executive Bret Taylor as chair.

There is an ongoing internal review of the former board’s allegations against Altman by independent law firm WilmerHale.

© 2024 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.



Tesla must face racism class action from 6,000 Black workers, judge rules


Enlarge / Tesla factory in Fremont, California, on September 18, 2023.

Getty Images | Justin Sullivan

Tesla must face a class-action lawsuit from nearly 6,000 Black people who allege that they faced discrimination and harassment while working at the company’s Fremont factory, a California judge ruled.

The tentative ruling from Alameda County Superior Court “certifies a class defined as the specific approximately 5,977 persons self-identified as Black/African-American who worked at Tesla during the class period from November 9, 2016, through the date of the entry of this order to prosecute the claims in the complaint.”

The tentative ruling was issued Tuesday by Judge Noël Wise. Tesla can contest the ruling at a hearing on Friday, but tentative rulings are generally finalized without major changes.

The case started years ago. An amended complaint in 2017 alleged that Tesla “created an intimidating, hostile, and offensive work environment for Black and/or African-American employees that includes a routine use of the terms ‘Nr’ and ‘Na’ and other racially derogatory terms, and racist treatment and images at Tesla’s production facility in Fremont, California.”

The plaintiffs’ motion was not approved in its entirety. A request for class certification was denied for all people who are not on the list of class members.

However, plaintiffs will have five days to provide an updated list of class members. Anyone not on the list “may if they wish seek individual remedies through filing civil actions, through arbitration, or otherwise,” the ruling said.

Plaintiffs “heard the n-word” at factory

A class-action trial is scheduled to begin on October 14, 2024, the same day as a separate case against Tesla brought by the California Civil Rights Department (CRD).

As Wise’s ruling noted, “The CRD has filed and is pursuing a parallel law enforcement action that is alleging a pattern and practice of failing to prevent discrimination and harassment and seeking an injunction that would require Tesla to institute policies and procedures that will do a better job of preventing and redressing discrimination and harassment at Tesla. The EEOC [US Equal Employment Opportunity Commission] has filed a similar action.”

In the class action, plaintiffs submitted “declarations from 240 persons who stated that they observed discrimination or harassment at the Tesla Fremont facility and that some complained about it,” Wise wrote. “Of the 240 plaintiff declarations, all stated that they heard the n-word at the Tesla Fremont facility, 112 state that they complained to a supervisor, manager or HR about discrimination, but only 16 made written complaints.”

Tesla submitted declarations from 228 people “who generally stated that they did not observe discrimination or harassment at the Tesla Fremont facility or that if they observed it then Tesla took ‘immediate and appropriate corrective action,'” Wise wrote.

Tesla also said it “created a centralized internal tracking system to document complaints and investigations” in 2017 and will rely on this database “to demonstrate that Tesla was aware of complaints about race discrimination and harassment and how it responded to the complaints.”



CenturyLink left customers without Internet for 39 days—until Ars stepped in


Aurich Lawson | Getty Images

When a severe winter storm hit Oregon on January 13, Nicholas Brown’s CenturyLink fiber Internet service stopped working at his house in Portland.

The initial outage was understandable amid the widespread damage caused by the storm, but CenturyLink’s response was poor. It took about 39 days for CenturyLink to restore broadband service to Brown and even longer to restore service to one of his neighbors. Those reconnections only happened after Ars Technica contacted the telco firm on the customers’ behalf last week.

Brown had never experienced any lengthy outage in over four years of subscribing to CenturyLink, so he figured the telco firm would restore his broadband connection within a reasonable amount of time. “It had practically never gone down at all up to this point. I’ve been quite happy with it,” he said.

While CenturyLink sent trucks to his street to reconnect most of his neighbors after the storm and Brown regularly contacted CenturyLink to plead for a fix, his Internet connection remained offline. Brown had also lost power, but the electricity service was reconnected within about 48 hours, while the broadband service remained offline for well over a month.

Fearing he had exhausted his options, Brown contacted Ars. We sent an email to CenturyLink’s media department on February 21 to seek information on why the outage lasted so long.

Telco finally springs into action

Roughly four hours after we contacted the firm, a CenturyLink technician arrived at the Portland house Brown shares with his partner, Jolene Edwards. The technician was able to reconnect them that day.

“At 4:30 pm, a CenturyLink tech showed up unannounced,” Brown told us. “No one was home at the time, but he said he would wait. I get the idea that he was told not to come back until it was fixed.”

Brown’s neighbor, Leonard Bentz, also lost Internet access on January 13 and remained offline for two days longer than Brown. The technician who arrived on February 21 didn’t reconnect Bentz’s house.

“My partner gently tried to egg him to go over there and fix them too, and he more or less said, ‘That’s not the ticket that I have,'” Brown said.

After getting Bentz’s name and address, we contacted CenturyLink again on February 22 to notify them that he also needed to be reconnected. CenturyLink later confirmed to us that it restored his Internet service on February 23.

“They kept putting me off and putting me off”

Bentz told Ars that during the month-plus outage, he called CenturyLink several times. Customer service reps and a supervisor told him the company would send someone to fix his service, but “they kept putting me off and putting me off and putting me off,” Bentz said.

On one of those calls, Bentz said that CenturyLink promised him seven free months of service in exchange for the long outage. Brown told us he received a refund for the entire length of his outage, plus a bit extra. He pays $65 a month for gigabit service.

Brown said he is “happy enough with the resolution,” at least financially since he “got all the money for the non-service.” But those 39 days without Internet service will remain a bad memory.

Unfortunately, Internet service providers like CenturyLink have a history of failing to fix problems until media coverage exposes their poor customer service. CenturyLink is officially called Lumen these days, but it still uses the CenturyLink brand name.

After fixing Brown’s service in Portland, a CenturyLink spokesperson gave us the following statement:

It’s frustrating to have your services down and for that we apologize. We’ve brought in additional resources to assist in restoring service that was knocked out due to severe storms and multiple cases of vandalism. Some services are back, and we are working diligently to completely restore everything. In fact, we have technicians there now. We appreciate our customers’ patience and understanding, and we welcome calls from our customers to discuss their service.



A big boost to Europe’s climate-change goals

carbon-neutral continent —

A new policy called CBAM will assist Europe’s ambition to become carbon-neutral.


Enlarge / Materials such as steel, cement, aluminum, electricity, fertilizer, hydrogen, and iron will soon be subject to greenhouse gas emissions fees when imported into Europe.

Monty Rakusen/Getty

The year 2023 was a big one for climate news, from record heat to world leaders finally calling for a transition away from fossil fuels. In a lesser-known milestone, it was also the year the European Union soft-launched an ambitious new initiative that could supercharge its climate policies.

Wrapped in arcane language studded with many a “thereof,” “whereas” and “having regard to” is a policy that could not only help fund the European Union’s pledge to become the world’s first carbon-neutral continent, but also push industries all over the world to cut their carbon emissions.

It’s the establishment of a carbon price that will force many heavy industries to pay for each ton of carbon dioxide, or equivalent emissions of other greenhouse gases, that they emit. But what makes this fee revolutionary is that it will apply to emissions that don’t happen on European soil. The EU already puts a price on many of the emissions created by European firms; now, through the new Carbon Border Adjustment Mechanism, or CBAM, the bloc will charge companies that import the targeted products—cement, aluminum, electricity, fertilizer, hydrogen, iron, and steel—into the EU, no matter where in the world those products are made.

These industries are often large and stubborn sources of greenhouse gas emissions, and addressing them is key in the fight against climate change, says Aaron Cosbey, an economist at the International Institute for Sustainable Development, an environmental think tank. If those companies want to continue doing business with European firms, they’ll have to clean up or pay a fee. That creates an incentive for companies worldwide to reduce emissions.

In CBAM’s first phase, which started in October 2023, companies importing those materials into the EU must report on the greenhouse gas emissions involved in making the products. Beginning in 2026, they’ll have to pay a tariff.
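The adjustment arithmetic is simple in outline: an importer owes the EU carbon price on each ton of embedded emissions, minus credit for any carbon price already paid where the goods were made. Here is a minimal sketch of that calculation; the function name, prices, and emissions figures are illustrative assumptions, not official CBAM values:

```python
# Hedged sketch of CBAM-style border-fee arithmetic.
# All names and numbers here are illustrative, not official values.

def cbam_fee(embedded_tons_co2: float,
             eu_carbon_price: float,
             foreign_carbon_price_paid: float = 0.0) -> float:
    """Fee in euros for one consignment of imported goods.

    embedded_tons_co2: tons of CO2-equivalent emitted making the goods
    eu_carbon_price: EU allowance price per ton, in euros
    foreign_carbon_price_paid: carbon price per ton already paid abroad
    """
    # Credit any carbon price paid in the country of origin, but the
    # effective rate never goes below zero.
    effective_price = max(eu_carbon_price - foreign_carbon_price_paid, 0.0)
    return embedded_tons_co2 * effective_price

# Example: a steel consignment with 190 tons of embedded CO2, an EU
# allowance price of 80 euros/ton, and 20 euros/ton already paid under
# a domestic scheme abroad:
fee = cbam_fee(embedded_tons_co2=190.0, eu_carbon_price=80.0,
               foreign_carbon_price_paid=20.0)
# 190 * (80 - 20) = 11,400 euros
```

In practice the real mechanism prices certificates off EU allowance auctions and has detailed rules for measuring and verifying embedded emissions, so this captures only the shape of the calculation.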

Even having to supply emissions data will be a big step for some producers and could provide valuable data for climate researchers and policymakers, says Cosbey.

“I don’t know how many times I’ve gone through this exercise of trying to identify, at a product level, the greenhouse gas intensity of exports from particular countries and had to go through the most amazing, torturous processes to try to do those estimates,” he says. “And now it’s going to be served to me on a plate.”


Enlarge / CBAM will apply to a set of products that are linked to heavy greenhouse gas emissions.

Side benefits at home

While this new carbon price targets companies abroad, it will also help the EU to pursue its climate ambitions at home. For one thing, the extra revenues could go toward financing climate-friendly projects and promising new technologies.

But it also allows the EU to tighten up on domestic pollution. Since 2005, the EU has set a maximum, or cap, on the emissions created by a range of industrial “installations” such as oil and metal refineries. It makes companies within the bloc use credits, or allowances, for each ton of carbon dioxide—or equivalent discharges of other greenhouse gases—that they emit, up to that cap. Some allowances are currently granted for free, but others are bought at auction or traded with other companies in a system known as a carbon market.

But this idea—of making it expensive to harm the planet—creates a conundrum. If doing business in Europe becomes too expensive, European industry could flee the continent for countries that don’t have such high fees or strict regulations. That would damage the European economy and do nothing to solve the environmental crisis. The greenhouse gases would still be emitted—perhaps more than if the products had been made in Europe—and climate change would careen forward on its destructive path.

The Carbon Border Adjustment Mechanism aims to impose the same carbon price for products made abroad as domestic producers must pay under the EU’s system. In theory, that keeps European businesses competitive with imports from international rivals. It also addresses environmental concerns by nudging companies overseas toward reducing greenhouse gas emissions rather than carrying on as usual.

This means the EU can further tighten up its carbon market system at home. With international competition hopefully less of a concern, it plans to phase out some leniencies, such as some of the free emission allowances, that existed to help keep domestic industries competitive.

That’s a big deal, says Cosbey. Dozens of countries have carbon pricing systems, but they all create exceptions to keep heavy industry from getting obliterated by international competition. The carbon border tariff could allow the EU to truly force its industries—and consumers—to pay the price, he says.

“That is ambitious; nobody in the world is doing that.”



OpenAI accuses NYT of hacking ChatGPT to set up copyright suit


OpenAI is now boldly claiming that The New York Times “paid someone to hack OpenAI’s products” like ChatGPT to “set up” a lawsuit against the leading AI maker.

In a court filing Monday, OpenAI alleged that “100 examples in which some version of OpenAI’s GPT-4 model supposedly generated several paragraphs of Times content as outputs in response to user prompts” do not reflect how normal people use ChatGPT.

Instead, it allegedly took The Times “tens of thousands of attempts to generate” these supposedly “highly anomalous results” by “targeting and exploiting a bug” that OpenAI claims it is now “committed to addressing.”

According to OpenAI, this activity amounts to “contrived attacks” by a “hired gun”—who allegedly hacked OpenAI models until they hallucinated fake NYT content or regurgitated training data to replicate NYT articles. NYT allegedly paid for these “attacks” to gather evidence to support The Times’ claims that OpenAI’s products imperil its journalism by regurgitating reporting and stealing The Times’ audiences.

“Contrary to the allegations in the complaint, however, ChatGPT is not in any way a substitute for a subscription to The New York Times,” OpenAI argued in a motion that seeks to dismiss the majority of The Times’ claims. “In the real world, people do not use ChatGPT or any other OpenAI product for that purpose. Nor could they. In the ordinary course, one cannot use ChatGPT to serve up Times articles at will.”

In the filing, OpenAI described The Times as enthusiastically reporting on its chatbot developments for years without raising any concerns about copyright infringement. OpenAI claimed that it disclosed that The Times’ articles were used to train its AI models in 2020, but The Times only cared after ChatGPT’s popularity exploded after its debut in 2022.

According to OpenAI, “It was only after this rapid adoption, along with reports of the value unlocked by these new technologies, that the Times claimed that OpenAI had ‘infringed its copyright[s]’ and reached out to demand ‘commercial terms.’ After months of discussions, the Times filed suit two days after Christmas, demanding ‘billions of dollars.'”

Ian Crosby, Susman Godfrey partner and lead counsel for The New York Times, told Ars that “what OpenAI bizarrely mischaracterizes as ‘hacking’ is simply using OpenAI’s products to look for evidence that they stole and reproduced The Times’s copyrighted works. And that is exactly what we found. In fact, the scale of OpenAI’s copying is much larger than the 100-plus examples set forth in the complaint.”

Crosby told Ars that OpenAI’s filing notably “doesn’t dispute—nor can they—that they copied millions of The Times’ works to build and power its commercial products without our permission.”

“Building new products is no excuse for violating copyright law, and that’s exactly what OpenAI has done on an unprecedented scale,” Crosby said.

OpenAI argued that the court should dismiss claims alleging direct copyright infringement, contributory infringement, Digital Millennium Copyright Act violations, and misappropriation, all of which it describes as “legally infirm.” Some fail because they are time-barred—seeking damages over training data for OpenAI’s older models—OpenAI claimed. Others allegedly fail because they misunderstand fair use or are preempted by federal laws.

If OpenAI’s motion is granted, the case would be substantially narrowed.

But if the motion is not granted and The Times ultimately wins—and it might—OpenAI may be forced to wipe ChatGPT and start over.

“OpenAI, which has been secretive and has deliberately concealed how its products operate, is now asserting it’s too late to bring a claim for infringement or hold them accountable. We disagree,” Crosby told Ars. “It’s noteworthy that OpenAI doesn’t dispute that it copied Times works without permission within the statute of limitations to train its more recent and current models.”

OpenAI did not immediately respond to Ars’ request for comment.


Kagan: Florida social media law seems like “classic First Amendment violation”

The Supreme Court of the United States in Washington, DC, in May 2023.

Getty Images | NurPhoto

The US Supreme Court today heard oral arguments on Florida and Texas state laws that impose limits on how social media companies can moderate user-generated content.

The Florida law prohibits large social media sites like Facebook and Twitter (aka X) from banning politicians and says they must “apply censorship, deplatforming, and shadow banning standards in a consistent manner among its users on the platform.” The Texas statute prohibits large social media companies from moderating posts based on a user’s “viewpoint.” The laws were supported by Republican officials from 20 other states.

The tech industry says both laws violate the companies’ First Amendment right to use editorial discretion in deciding what kinds of user-generated content to allow on their platforms and how to present that content. The Supreme Court will decide whether the laws can be enforced while the industry lawsuits against Florida and Texas continue in lower courts.

How the Supreme Court rules at this stage in these two cases could give one side or the other a big advantage in the ongoing litigation. Paul Clement, a lawyer for Big Tech trade group NetChoice, today urged justices to reject the idea that content moderation conducted by private companies is censorship.

“I really do think that censorship is only something that the government can do to you,” Clement said. “And if it’s not the government, you really shouldn’t label it ‘censorship.’ It’s just a category mistake.”

Companies use editorial discretion to make websites useful for users and advertisers, he said, arguing that content moderation is an expressive activity protected by the First Amendment.

Justice Kagan talks anti-vaxxers, insurrectionists

Henry Whitaker, Florida’s solicitor general, said that social media platforms marketed themselves as neutral forums for free speech but now claim to be “editors of their users’ speech, rather like a newspaper.”

“They contend that they possess a broad First Amendment right to censor anything they host on their sites, even when doing so contradicts their own representations to consumers,” he said. Social media platforms should not be allowed to censor speech any more than phone companies are allowed to, he argued.

Contending that social networks don’t really act as editors, he said that “it is a strange kind of editor that does not actually look at the material” before it is posted. He also said that “upwards of 99 percent of what goes on the platforms is basically passed through without review.”

Justice Elena Kagan replied, “But that 1 percent seems to have gotten some people extremely angry.” Describing the platforms’ moderation practices, she said the 1 percent of content that is moderated is “like, ‘we don’t want anti-vaxxers on our site or we don’t want insurrectionists on our site.’ I mean, that’s what motivated these laws, isn’t it? And that’s what’s getting people upset about them is that other people have different views about what it means to provide misinformation as to voting and things like that.”

Later, Kagan said, “I’m taking as a given that YouTube or Facebook or whatever has expressive views. There are particular kinds of expression defined by content that they don’t want anywhere near their site.”

Pointing to moderation of hate speech, bullying, and misinformation about voting and public health, Kagan asked, “Why isn’t that a classic First Amendment violation for the state to come in and say, ‘we’re not going to allow you to enforce those sorts of restrictions?’”

Whitaker urged Kagan to “look at the objective activity being regulated, namely censoring and deplatforming, and ask whether that expresses a message. Because they [the social networks] host so much content, an objective observer is not going to readily attribute any particular piece of content that appears on their site to some decision to either refrain from or to censor or deplatform.”

Thomas: Who speaks when an algorithm moderates?

Justice Clarence Thomas expressed doubts about whether content moderation conveys an editorial message. “Tell me again what the expressive conduct is that, for example, YouTube engages in when it or Twitter deplatforms someone. What is the expressive conduct and to whom is it being communicated?” Thomas asked.

Clement said the platforms “are sending a message to that person and to their broader audience that that material” isn’t allowed. As a result, users are “not going to see material that violates the terms of use. They’re not going to see a bunch of material that glorifies terrorism. They’re not going to see a bunch of material that glorifies suicide,” Clement said.

Thomas asked who is doing the “speaking” when an algorithm performs content moderation, particularly when “it’s a deep-learning algorithm which teaches itself and has very little human intervention.”

“So who’s speaking then, the algorithm or the person?” Thomas asked.

Clement said that Facebook and YouTube are “speaking, because they’re the ones that are using these devices to run their editorial discretion across these massive volumes.” The need to use algorithms to automate moderation demonstrates “the volume of material on these sites, which just shows you the volume of editorial discretion,” he said.


How your sensitive data can be sold after a data broker goes bankrupt

playing fast and loose —

Sensitive location data could be sold off to the highest bidder.


In 2021, Near, a company specializing in collecting and selling location data, bragged that it had built “The World’s Largest Dataset of People’s Behavior in the Real-World,” with data representing “1.6B people across 44 countries.” Last year, the company went public via a SPAC at a valuation of $1 billion. Seven months later, it filed for bankruptcy and agreed to sell itself.

But for the “1.6B people” that Near said its data represents, the important question is: What happens to Near’s mountain of location data? Any company could gain access to it through purchasing the company’s assets.

The prospect of this data, including Near’s collection of location data from sensitive locations such as abortion clinics, being sold off in bankruptcy has raised alarms in Congress. Last week, Sen. Ron Wyden (D-Ore.) wrote the Federal Trade Commission (FTC) urging the agency to “protect consumers and investors from the outrageous conduct” of Near, citing his office’s investigation into the India-based company.

Wyden’s letter also urged the FTC “to intervene in Near’s bankruptcy proceedings to ensure that all location and device data held by Near about Americans is promptly destroyed and is not sold off, including to another data broker.” The FTC took a similar action in 2010, blocking the use of 11 years’ worth of subscriber personal data during the bankruptcy proceedings of XY Magazine, which was oriented toward young gay men. The agency requested that the data be destroyed to prevent its misuse.

Wyden’s investigation was spurred by a May 2023 Wall Street Journal report that Near had licensed location data to the anti-abortion group Veritas Society so it could target ads to visitors of Planned Parenthood clinics and attempt to dissuade women from seeking abortions. Wyden’s investigation revealed that the group’s geofencing campaign focused on 600 Planned Parenthood clinics in 48 states. The Journal also revealed that Near had been selling its location data to the Department of Defense and intelligence agencies.

As of publication, Near has not responded to requests for comment.

According to Near’s privacy policy, all of the data it has collected can be transferred to new owners. Under the heading “Who do you share my personal data with?” the policy lists “Prospective buyers of our business.”

This type of clause is common in privacy policies and is a routine part of businesses being bought and sold. It gets complicated when the company being sold holds data containing sensitive information.

This week, a new bankruptcy court filing showed that Wyden’s requests were granted. The order places restrictions on the use, sale, licensing, or transfer of location data collected from sensitive locations in the US. Any company that purchases the data must establish a “sensitive location data program” with detailed policies for handling such data and must ensure ongoing monitoring and compliance, including by maintaining a list of sensitive locations such as reproductive health care facilities, doctors’ offices, houses of worship, mental health care providers, corrections facilities, and shelters. Unless consumers have explicitly provided consent, the company must also cease any collection, use, or transfer of location data.

In a statement emailed to The Markup, Wyden wrote, “I commend the FTC for stepping in—at my request—to ensure that this data broker’s stockpile of Americans’ sensitive location data isn’t abused, again.”

Wyden called for protecting sensitive location data from data brokers, citing the new legal threats to women since the Supreme Court’s June 2022 decision to overturn the abortion-rights ruling Roe v. Wade. Wyden wrote, “The threat posed by the sale of location data is clear, particularly to women who are seeking reproductive care.”

The bankruptcy order also provided a rare glimpse into how data brokers license data to one another. Near’s list of contracts included agreements with several location brokers, ad platforms, universities, retailers, and city governments.

It is not clear from the filing if the agreements covered Near data being licensed, Near licensing the data from the companies, or both.

This article was originally published on The Markup and was republished under the Creative Commons Attribution-NonCommercial-NoDerivatives license.


AT&T’s botched network update caused yesterday’s major wireless outage

AT&T outage cause —

AT&T blamed itself for “incorrect process used as we were expanding our network.”

Cellular towers in Redondo Beach, California, on February 22, 2024.

Getty Images | Eric Thayer

AT&T said a botched update related to a network expansion caused the wireless outage that disrupted service for many mobile customers yesterday.

“Based on our initial review, we believe that today’s outage was caused by the application and execution of an incorrect process used as we were expanding our network, not a cyber attack,” AT&T said on its website last night. “We are continuing our assessment of today’s outage to ensure we keep delivering the service that our customers deserve.”

While “incorrect process” is a bit vague, an ABC News report that cited anonymous sources said it was a software update that went wrong. AT&T hasn’t said exactly how many cellular customers were affected, but there were over 70,000 problem reports on the DownDetector website yesterday morning.

The outage began early in the morning, and AT&T said at 11:15 am ET yesterday that “three-quarters of our network has been restored.” By 3:10 pm ET, AT&T said it had “restored wireless service to all our affected customers.”

We asked AT&T for more information on the extent of the outage and its cause today, but a spokesperson said the company had no further comment.

FCC investigates

The outage was big enough that the Federal Communications Commission said its Public Safety and Homeland Security Bureau was actively investigating. The FCC also said it was in touch with FirstNet, the nationwide public safety network that was built by AT&T. Some FirstNet users reported frustrations related to the outage.

The San Francisco Fire Department said it was monitoring the outage because it appeared to be preventing “AT&T wireless customers from making and receiving any phone calls (including to 911).” The FCC sometimes issues fines to telcos over 911 outages.

The US Cybersecurity and Infrastructure Security Agency reportedly said it was looking into the outage, and a White House spokesperson said the FBI was checking on it, too. But it was determined pretty quickly that the outage wasn’t caused by cyber-attackers.


Yelp: It’s gotten worse since Google made changes to comply with EU rules

Anjali Nair; Getty Images

To comply with looming rules that ban tech giants from favoring their own services, Google has been testing new-look search results for flights, trains, hotels, restaurants, and products in Europe. The EU’s Digital Markets Act is supposed to help smaller companies get more traffic from Google, but reviews service Yelp says that when it tested Google’s design tweaks with consumers, they had the opposite effect—making people less likely to click through to Yelp or another Google competitor.

The results, which Yelp shared with European regulators in December and WIRED this month, put some numerical backing behind complaints from Google rivals in travel, shopping, and hospitality that its efforts to comply with the DMA are insufficient—and potentially more harmful than the status quo. Yelp and thousands of others have been demanding that the EU hold a firm line against the giant companies, including Apple and Amazon, that are subject to what’s widely considered the world’s strictest antitrust law, violations of which can draw fines of up to 10 percent of global annual sales.

“All the gatekeepers are trying to hold on as long as possible to the status quo and make the new world unattractive,” says Richard Stables, CEO of shopping comparison site Kelkoo, which is unhappy with how Google has tweaked shopping results to comply with the DMA. “That’s really the game plan.”

Google spokesperson Rory O’Donoghue says the more than 20 changes made to search in response to the DMA are providing more opportunities for services such as Yelp to show up in results. “To suggest otherwise is plain wrong,” he says. Overall, Google’s tests of various DMA-inspired designs show clicks to review and comparison websites are up, O’Donoghue says—at the cost of users losing shortcuts to Google tools and individual businesses like airlines and restaurants facing a drop in visits from Google search. “We’ve been seeking feedback from a range of stakeholders over many months as we try to balance the needs of different types of websites while complying with the law,” he says.

Google, which generates 30 percent of its sales from Europe, the Middle East, and Africa, views the DMA as disrespecting its expertise in what users want. Critics such as Yelp argue that Google sometimes siphons users away from the more reliable content they offer. Yelp competes with Google for advertisers but generated less than 1 percent of its record sales of $1.3 billion last year from outside the US. An increase in European traffic could significantly boost its business.

To study search changes, Yelp worked with user-research company Lyssna to watch how hundreds of consumers from around the world interacted with Google’s new EU search results page when asked to find a dinner spot in Paris. For searches like that or for other “local” businesses, as Google calls them, one new design features results from Google Maps data at the top of the page below the search bar but adds a new box widget lower down containing images from and links to reviews websites like Yelp.

The experiments found that about 73 percent of about 500 people using that new design clicked results that kept them inside Google’s ecosystem—an increase over the 55 percent who did so when the design Google is phasing out in Europe was tested with a smaller pool of roughly 250 people.

Yelp also tested a variation of the new design. In this version, which Google has shared with regulators, the new box featuring review websites is placed above the maps widget. It was more successful in drawing people to try alternatives to Google, with only about 44 percent of consumers in the experiment sticking with the search giant. Though the box and widget will be treated equally by Google’s search algorithms, the order the features appear in will vary based on those calculations. Yelp’s concern is that Google will win out too often.

Yelp proposed to EU regulators that to produce more fair outcomes, Google should instead amend the map widget on results pages to include business listings and ratings from numerous providers, placing data from Google’s directory right alongside Yelp and others.

Companies such as Yelp that are critical of the changes in testing have called on the European Commission to immediately open an investigation into Google on March 7, when enforcement of the DMA begins.

“Yelp urges regulators to compel Google to fully comply with both the letter and spirit of the DMA,” says Yelp’s vice president of public policy, David Segal. “Google will soon be in violation of both, because if you look at what Google has put forth, it’s pretty clear that its services still have the best real estate.”
